How do you structure unit tests for cross-compiled code? - c++

My new project is targeting an embedded ARM processor. I have a build system that uses a cross-compiler running on an Ubuntu Linux box. I like to use unit testing as much as possible, but I'm a little bit confused about how to proceed with this setup.
I can't see how to run unit tests on the ARM device itself (somebody correct me if I'm wrong). I think that my best option is to compile the code on the build machine using its own native compiler for the unit tests. Is this approach fundamentally flawed? Is unit testing on a different platform a waste of time?
I'm planning to use CppUnit on the build machine with the native compiler for the unit tests. Then I'll cross-compile the code for the ARM processor and do integration and system testing on the target device itself. How would you structure the source code and the test code to keep this from turning into a tangled mess?
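For concreteness, here's a minimal sketch of the sort of native-compiler test I have in mind: a CppUnit fixture exercising a hardware-independent module (the CRC routine is just a placeholder for real project code), built with the host's g++ for the test binary and with the cross-compiler for the target image.

    #include <cppunit/TestFixture.h>
    #include <cppunit/extensions/HelperMacros.h>
    #include <cppunit/extensions/TestFactoryRegistry.h>
    #include <cppunit/ui/text/TestRunner.h>

    // Placeholder for a hardware-independent module that is shared with the ARM build.
    static unsigned short crc16_ccitt(const unsigned char* data, unsigned len)
    {
        unsigned short crc = 0xFFFF;                 // CRC-16/CCITT-FALSE
        for (unsigned i = 0; i < len; ++i) {
            crc ^= (unsigned short)(data[i] << 8);
            for (int b = 0; b < 8; ++b)
                crc = (crc & 0x8000) ? (unsigned short)((crc << 1) ^ 0x1021)
                                     : (unsigned short)(crc << 1);
        }
        return crc;
    }

    class Crc16Test : public CppUnit::TestFixture
    {
        CPPUNIT_TEST_SUITE(Crc16Test);
        CPPUNIT_TEST(testKnownVector);
        CPPUNIT_TEST_SUITE_END();

    public:
        void testKnownVector()
        {
            const unsigned char data[] = { '1','2','3','4','5','6','7','8','9' };
            // 0x29B1 is the published CRC-16/CCITT-FALSE check value for "123456789".
            CPPUNIT_ASSERT_EQUAL(0x29B1u, (unsigned)crc16_ccitt(data, sizeof data));
        }
    };

    CPPUNIT_TEST_SUITE_REGISTRATION(Crc16Test);

    int main()
    {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
        return runner.run() ? 0 : 1;     // non-zero exit so the build can fail on test failure
    }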

With an embedded device, it depends on what hardware interfaces you have.
For example, the motion control cards I deal with use a command-line interface. The IDE they ship uses it as its primary method of interacting with the cards. It works the same way regardless of whether I am using PCI, IDE, serial, or Ethernet.
The DLL they ship for programming gives access to the command-line interface, so I can send a string and read back the response. What I do for my unit tests is have a physical card hooked up to (or installed in) my development machine. I upload the software, send it commands, read the responses, and if they are correct the test passes.
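A minimal sketch of that command/response style of test, assuming a hypothetical send_and_read() wrapper around the vendor DLL's string interface (the command text and expected reply are invented):

    #include <cassert>
    #include <string>

    // Hypothetical wrapper around the vendor DLL; stubbed here so the sketch stands
    // alone. In the real test it sends the string to the card and returns the reply.
    std::string send_and_read(const std::string& cmd)
    {
        (void)cmd;
        return "IDLE";                                   // replace with the DLL send/read calls
    }

    static int test_axis_reports_idle_after_reset()
    {
        send_and_read("AXIS1 RESET");                    // exercise the uploaded software
        return send_and_read("AXIS1 STATUS?") == "IDLE"  // check the card's reply
            ? 0 : 1;
    }

    int main()
    {
        return test_axis_reports_idle_after_reset();     // non-zero exit = test failure
    }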
I also have extra hardware, a black box if you will, that simulates the machine the motion control card is normally hooked up to. It helps with the automated tests, but there is a manual phase, as I have to set switches to simulate different setups on the machine.
I have achieved a greater degree of automation by taking a digital I/O card and feeding its outputs into the inputs of the motion control card, and the same in reverse.
I found that for most hardware you have to have some type of simulator hardware.
The exception being the rare package that comes with a software simulator.
I know this probably isn't ideal, as not every developer can have one of these on their desk. My hardware simulator is portable, so I can give it to whoever is working on the motion control software at the time. If it can't be portable, then having a dedicated testing or hardware development computer would be in order.
Finally, it boils down to the specifics of your hardware and what support the manufacturer gives in terms of software and simulators. To help you more, you will need to post more specifics.

In ten-plus years in the embedded industry, I've seen it done quite a few ways. At my current company:
one of our products has enough horsepower (and space) to run tests on the target board. It's somewhat slow, and we can't stick all the python on the box we'd like, but it works well.
one of our products doesn't have the space, so we compile all the libs we can in x86 (anything that isn't hardware-dependent) and run unit tests on desktops. It's not perfect, but far better than nothing.
one of our components is a super-lightweight power-miser on exotic hardware, so virtually no unit tests are possible. Core algorithms (DES, etc.) are tested on x86 as above, but much of the code simply has to be tested as a whole, in situ. This entails a lot of code reviews.

Related

Unit testing for embedded system

I have a product which runs on VxWorks on the end-product hardware, but development is done in Visual Studio using a cross-compiler, and the build is downloaded to the hardware for testing. I am planning to write unit test cases for the product. My question: since my development is done on Windows, how can I run unit test cases when that environment does not resemble the real scenario?
Any inputs are welcome
I suspect that you have code that interacts a lot with VxWorks through system calls. Putting a layer of abstraction in there will be hard.
Are you using C or C++?
If you are using C++, try to identify parts of the system that:
are subject to frequent change; and
are mostly handling internal data; or
relate only to a predefined/formalized subset of the surrounding system (e.g., protocol handling or individual PLC control-logic modules).
Then you should first inject C++ interface(s) between each such module and the rest of the system. The module should only talk to the interface(s)/adapter(s). Then you have an isolated piece that can be strapped into a Visual Studio test harness.
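A minimal sketch of that interface injection, assuming a module that currently posts to a VxWorks message queue directly (IMessageQueue, FakeMessageQueue, and ProtocolHandler are hypothetical names; the production adapter whose send() wraps msgQSend() would only be built for the target):

    #include <cstddef>

    class IMessageQueue {                         // the abstraction the module codes against
    public:
        virtual ~IMessageQueue() {}
        virtual bool send(const void* buf, std::size_t len) = 0;
    };

    // Production adapter (target build only): a thin wrapper whose send() calls msgQSend().

    class FakeMessageQueue : public IMessageQueue {   // test double for the Visual Studio harness
    public:
        FakeMessageQueue() : sendCount(0) {}
        int sendCount;
        virtual bool send(const void*, std::size_t) { ++sendCount; return true; }
    };

    class ProtocolHandler {                       // the module under test; no VxWorks dependency
    public:
        explicit ProtocolHandler(IMessageQueue& q) : queue_(q) {}
        void post(const char* msg, std::size_t len) { queue_.send(msg, len); }
    private:
        IMessageQueue& queue_;
    };

    int main()                                    // host-side unit test
    {
        FakeMessageQueue fake;
        ProtocolHandler handler(fake);
        handler.post("ping", 4);
        return fake.sendCount == 1 ? 0 : 1;       // non-zero exit = failure
    }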
Then you should try to identify areas in your system that are prone to bugs, subject to (frequent) change, or subject to audit. You will probably never achieve even 50% coverage on the target system, but you can achieve a system where 90% of the daily coding happens within the covered 40% of the code base.
It is not possible without additional effort (a separate test project; compiling twice, once for your target and once for your host; and running the tests on the host). Alternatively, look for a development environment that supports unit tests on your target, for example http://www.parasoft.com/jsp/products/embedded_cpptest.jsp
See also the Parasoft C++ unit test question.
You do it just like on any other system:
Write the unit tests
Compile and load onto the target system
Run the unit tests
Verify the results
Where it runs is immaterial. The biggest difficulty is on an embedded system with limited output capability. But even if there is only a single LED, it should still be possible to signal success and failure. Only it is a little more abstract than showing "passed".
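For instance, a minimal sketch of single-LED reporting (the LED register address and the blink convention are made up; solid on = pass, N blinks = N failures):

    // Hypothetical memory-mapped LED register; replace with the board's real address.
    volatile unsigned int* const LED_REG =
        reinterpret_cast<volatile unsigned int*>(0x40020014);

    static void delay(volatile unsigned long n) { while (n--) { /* busy-wait */ } }

    // Called by the test runner after the last test; never returns.
    void report_results(unsigned failures)
    {
        if (failures == 0) {
            *LED_REG = 1;                          // solid on: all tests passed
            for (;;) {}
        }
        for (;;) {                                 // blink once per failed test, pause, repeat
            for (unsigned i = 0; i < failures; ++i) {
                *LED_REG = 1; delay(500000);
                *LED_REG = 0; delay(500000);
            }
            delay(2000000);
        }
    }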

Unit test for uC/OS - II

I am a graduate student and I am trying to propose a project for an advanced testing course.
Since I am an embedded guy, I do want to test something challenging related to embedded systems.
uC/OS-II is a very nice, lightweight open-source OS for embedded systems, so I want to propose testing it as my course project.
But I don't know the feasibility of testing uC/OS. Is it doable?
I am using Blackfin and SHARC (both from Analog Devices) now, and they are compatible with uC/OS (according to the uC/OS website).
In terms of testing tools, I think CUnit might work. We also have a unit testing tool called EmbeddedUnit, which runs on VDSP (the development environment for Analog Devices processors).
I have no experience with uC/OS, but my understanding is that we should compile it, include the .obj files and the header files in the project, and then we can use and test the functions in uC/OS.
Am I right?
Is it doable? Yes, it is. We had a project that needed to be portable to many different environments: uC/OS-II, Linux, and VxWorks. In order to do that, we wrote a simple abstraction layer that gave us a common API on all platforms to the OS features we chose to enable. We then wrote a unit test for the abstraction layer, with a test case for each OS feature we wanted to test (message queues, semaphores, event flags, etc.). We used that to verify our abstraction layer was functional and working on all three host environments.
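A minimal sketch of what such an abstraction layer and its test might look like (the osal_* names are hypothetical, not uC/OS-II's API; the RAM-backed backend below is only there so the sketch stands alone, where the uC/OS-II backend would call OSQCreate/OSQPost/OSQPend and the VxWorks one msgQSend/msgQReceive):

    #include <cstddef>
    #include <cstring>
    #include <deque>
    #include <vector>

    // --- the abstraction layer API (hypothetical names) ---
    struct OsalQueueImpl;
    typedef OsalQueueImpl* OsalQueue;

    OsalQueue osal_queue_create(std::size_t msg_size);
    bool      osal_queue_send(OsalQueue q, const void* msg);
    bool      osal_queue_receive(OsalQueue q, void* msg);

    // --- trivial single-threaded host backend so the sketch stands alone ---
    struct OsalQueueImpl {
        std::size_t msg_size;
        std::deque< std::vector<unsigned char> > msgs;
    };

    OsalQueue osal_queue_create(std::size_t msg_size)
    {
        OsalQueueImpl* q = new OsalQueueImpl;
        q->msg_size = msg_size;
        return q;
    }

    bool osal_queue_send(OsalQueue q, const void* msg)
    {
        std::vector<unsigned char> m(q->msg_size);
        std::memcpy(&m[0], msg, q->msg_size);
        q->msgs.push_back(m);
        return true;
    }

    bool osal_queue_receive(OsalQueue q, void* msg)
    {
        if (q->msgs.empty()) return false;
        std::memcpy(msg, &q->msgs.front()[0], q->msg_size);
        q->msgs.pop_front();
        return true;
    }

    // --- one test case per OS feature; the same test runs against every backend ---
    int test_queue_roundtrip()
    {
        OsalQueue q = osal_queue_create(sizeof(int));
        int in = 42, out = 0;
        bool ok = osal_queue_send(q, &in) && osal_queue_receive(q, &out) && out == 42;
        return ok ? 0 : 1;
    }

    int main() { return test_queue_roundtrip(); }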
uC/OS-II is delivered as very clean C code that can easily be used with any number of tools, such as code coverage, etc.
Good luck.

Continuous Integration/ Unit testing in embedded C++ systems

What tools are generally used for unit testing and especially continuous integration for embedded systems?
I am especially thinking that you usually have to cross-compile and deploy, and also that you can't easily virtualize the target platform. It can also be difficult to run test code and frameworks on the target.
What could I use to alleviate these difficulties?
(I think it should be some kind of dual targeting, where the build server runs its tests on an easier target.)
For unit testing, take a look at Unity.
http://sourceforge.net/apps/trac/unity/wiki
It is a really lightweight test harness (2 x .h and 1 x .c file) supported by Ruby scripts. We have been using it on an embedded ARM7 target system for unit testing (redirecting test reporting over a serial port).
It is also supported by CMock for (surprise, surprise) mocking. Even though they are not extensive, the great thing about these tools is that they are so easy to use.
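A minimal sketch of what a Unity test on a serial-only target can look like. The uart_putc() routine, its register address, and the function under test are invented; routing Unity's report through the UART is done by pointing UNITY_OUTPUT_CHAR at your own character-output routine when Unity itself is built (the exact configuration hook, e.g. unity_config.h or a -D compiler flag, varies a little between Unity versions):

    #include "unity.h"

    // Hypothetical UART transmit routine; UNITY_OUTPUT_CHAR is configured to call
    // this when unity.c is compiled, so the pass/fail report goes out the serial port.
    extern "C" void uart_putc(int c)
    {
        volatile unsigned int* const UART0_DR = (volatile unsigned int*)0x4000C000; // hypothetical
        *UART0_DR = (unsigned int)c;
    }

    // Invented function under test.
    static int clamp_percent(int v) { return v < 0 ? 0 : (v > 100 ? 100 : v); }

    extern "C" {                 // C linkage so unity.c (a C file) can call these hooks
    void setUp(void) {}
    void tearDown(void) {}
    }

    static void test_clamp_percent(void)
    {
        TEST_ASSERT_EQUAL_INT(0,   clamp_percent(-5));
        TEST_ASSERT_EQUAL_INT(100, clamp_percent(250));
        TEST_ASSERT_EQUAL_INT(42,  clamp_percent(42));
    }

    int main(void)
    {
        UNITY_BEGIN();
        RUN_TEST(test_clamp_percent);
        return UNITY_END();      // summary also comes out over the UART
    }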
Regarding CI, Hudson (now Jenkins) is very good if you're Linux-based.
Also look at CppUTest and check out James Grenning's book "TDD for Embedded C" at http://renaissancesoftware.net/
At work I use the embUnit framework:
http://embunit.sourceforge.net/embunit/index.html
The nice thing about this framework is that it's lean. It does not require any external libraries (not even libc). You can hook in your own output function with ease, so if you work on a system where the only connection to the outside world is JTAG or a UART, embUnit will still work.
I have used RCUNIT and CANTATA++ for unit testing embedded code on the PC. Any xUnit-style framework should integrate easily into any continuous-test platform. We found it a lot easier to just simulate the hardware on the PC and only test on the target during final integration.
Hardware interface abstraction is crucial for unit testing embedded code on the PC. This works well with continuous integration, since the tests run on a PC with just the hardware access simulated. With a little effort we could test 95% of the code on a PC for continuous integration.
You could also look at these questions:
Unit Testing C Code
Testing Frameworks for C
Unit Testing Embedded Software
I've seen C(pp)unit used on a system that let you launch to the target via JTAG.
Helps to have console comms etc sorted.
But it can work.
I'm working on an open-source tool for that; it either runs as a minimal test or can provide detailed information when run through the debugger.

Unit testing device drivers

I have a situation where I need to write some unit tests for some device drivers for embedded hardware. The code is quite old and big and unfortunately doesn't have many tests. Right now, the only kind of testing that's possible is to completely compile the OS, load it onto the device, use it in real life scenarios and say that 'it works'. There's no way to test individual components.
I came across a nice thread here which discusses unit testing for embedded devices, from which I got a lot of information. I'd like to be a little more specific and ask if anyone has any 'best practices' for testing device drivers in such a scenario. I don't expect to be able to simulate any of the devices which the board in question is talking to, and so will probably have to test on the actual hardware itself.
By doing this, I hope to be able to get unit test coverage data for the drivers and coax the developers to write tests to increase the coverage of their drivers.
One thing that occurs to me is to write embedded applications that run on the OS and exercise the driver code and then communicate the results back to the test harness. The device has a couple of interfaces which I can use to probably drive the application from my test PC so that I can exercise the code.
Any other suggestions or insights would be very much appreciated.
Update: While it may not be exact terminology, when I say unit testing, I meant being able to test/exercise code without having to compile the entire OS+drivers and load it onto the device. If I had to do that, I'd call it integration/system testing.
The problem is that the pieces of hardware we have are limited, and they're often used by the developers while fixing bugs, etc. Keeping one dedicated and connected to the machine where the CI server and automated testing run might be a no-no at this stage. That's why I'm looking for ways to test the driver without having to actually build the whole thing and upload it onto the device.
Summary
Based on the excellent answers below, I think a reasonable way to approach the problem would be to expose driver functionality using IOCTLs and then write tests in the application space of the embedded device to actually exercise the driver code.
It would also make sense to have a small program residing in the application space on the device which exposes an API that can exercise the driver via serial or USB so that the meat of the unit test can be written on a PC which will communicate to the hardware and run the test.
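A minimal sketch of what the application-space side of that might look like, assuming a POSIX-style OS, a hypothetical /dev/widget node, and made-up ioctl codes; each ioctl triggers one self-test inside the driver, and the harness reports results over the console, serial, or USB link:

    #include <cstdio>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define WIDGET_SELFTEST   _IOR('w', 1, int)   // hypothetical ioctl codes
    #define WIDGET_GET_STATUS _IOR('w', 2, int)

    int main()
    {
        int failures = 0;
        int fd = open("/dev/widget", O_RDWR);     // hypothetical device node
        if (fd < 0) { std::perror("open"); return 1; }

        int result = 0;
        if (ioctl(fd, WIDGET_SELFTEST, &result) < 0 || result != 0) {
            std::printf("FAIL: selftest (result=%d)\n", result);
            ++failures;
        }
        if (ioctl(fd, WIDGET_GET_STATUS, &result) < 0) {
            std::printf("FAIL: get_status\n");
            ++failures;
        }
        close(fd);
        std::printf("%s: %d failure(s)\n", failures ? "FAIL" : "PASS", failures);
        return failures ? 1 : 0;
    }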
If the project was just being started, I think we'd have more control over the way in which the components are isolated so that testing can be done mostly at the PC level. Given the fact that the coding is already done and we're trying to retrofit the test harness and cases onto the system, I think the above approach is more practical.
Thanks everyone for your answers.
In the old days, that was how we tested and debugged device drivers. The very best way to debug such a system was for engineers to use the embedded system as a development system and—once adequate system maturity was reached— take away the original cross-development system!
For your situation, several approaches come to mind:
Add ioctl handlers, where each ioctl code exercises a particular unit test.
With conditional compilation, add a main() to the driver which conducts functional unit tests in the driver and outputs results to stdout (see the sketch after this list).
For initial ease in debugging, maybe this could be made multi-platform operable so you don't have to debug on the target hardware.
Perhaps conditional code can also emulate a loopback-style device.
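A minimal sketch of that conditionally compiled test entry point (the widget_* functions are invented stand-ins for real driver internals):

    // Stand-ins for the real driver internals under test.
    static int widget_read_reg(unsigned reg) { return reg == 0x04 ? 0xA5 : 0; }
    static int widget_reset(void)            { return 0; }

    #ifdef DRIVER_UNIT_TEST          // built only for the test image / host build
    #include <cstdio>

    static int check(const char* name, bool ok)
    {
        std::printf("%-24s %s\n", name, ok ? "PASS" : "FAIL");
        return ok ? 0 : 1;
    }

    int main()
    {
        int failures = 0;
        failures += check("reset returns 0",        widget_reset() == 0);
        failures += check("ID register reads 0xA5", widget_read_reg(0x04) == 0xA5);
        // A loopback-style fake of the hardware could be substituted here when
        // running this same file on the PC, as suggested above.
        return failures;
    }
    #endif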
The code that really is dependent on the hardware (the lowest level of the driver stack in a layered architecture) can't really be tested anywhere except on the hardware, or a high-quality simulation of the hardware.
If your driver has some component of higher-level functionality that doesn't rely directly on the hardware (e.g., a protocol handler for sending messages to the hardware in a particular format), and if that part is nicely self-contained in the code, then you could unit-test that separately in a PC-based unit-test framework.
Going back to the lowest level—if it's dependent on the hardware, then the test jig needs to include the hardware. You can make a test jig that includes the hardware, the driver, and some test software. The main thing, I think, is to get the normal product's application code out of the test, and put in some test code instead. The test code can systematically test all the driver's features and corner-cases (which the application code may not), and can also really hammer the driver intensively in a short amount of time (which the application probably doesn't). Thus it's more efficient use of your limited hardware than just running the application, and gives you better results.
If you can get a PC into the loop, then the PC might help with the testing. E.g. if you're writing a serial port driver for an embedded device, then you could:
Write test code for the embedded device that sends various known data streams.
Connect it to a PC's serial port, running test code that verifies the transmitted data streams.
Same in the other direction—PC sends data; embedded device receives it and verifies it, and notifies the PC of any errors.
The tests can stream data at full speed, and play with a range of different byte timings (I once found a microcontroller UART silicon bug that only appeared if bytes were sent with a ~5 ms delay between bytes).
You could do a similar thing with an Ethernet driver or a Wi-Fi driver.
If you're testing a storage device driver, such as for an EEPROM or Flash chip, then the PC couldn't get involved in the same way. In that case, your test harness could test all sorts of write conditions (single-byte, block...), and verify data integrity using all sorts of read conditions.
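A minimal sketch of that kind of storage-driver harness. flash_write()/flash_read() represent the assumed driver API under test; the RAM-backed fake at the bottom is only there so the sketch stands alone, and on the target it would be replaced by linking the real driver:

    #include <cstdio>
    #include <cstring>

    enum { PAGE_SIZE = 256 };                                  // assumed geometry
    bool flash_write(unsigned addr, const unsigned char* data, unsigned len);
    bool flash_read(unsigned addr, unsigned char* data, unsigned len);

    // Write a known pattern, read it back, and compare.
    static int verify_pattern(unsigned addr, unsigned len, unsigned char seed)
    {
        unsigned char wr[PAGE_SIZE], rd[PAGE_SIZE];
        for (unsigned i = 0; i < len; ++i) wr[i] = (unsigned char)(seed + i);
        if (!flash_write(addr, wr, len) || !flash_read(addr, rd, len)) return 1;
        return std::memcmp(wr, rd, len) ? 1 : 0;
    }

    int run_flash_tests()
    {
        int failures = 0;
        failures += verify_pattern(0x0000, 1, 0x5A);           // single byte
        failures += verify_pattern(0x0100, PAGE_SIZE, 0xC3);   // whole page
        failures += verify_pattern(0x01FF, 2, 0x0F);           // write across a page boundary
        std::printf("flash tests: %d failure(s)\n", failures);
        return failures;
    }

    // Trivial RAM-backed fake so the sketch compiles and runs by itself.
    static unsigned char fake_flash[64 * 1024];
    bool flash_write(unsigned addr, const unsigned char* d, unsigned len)
    { std::memcpy(fake_flash + addr, d, len); return true; }
    bool flash_read(unsigned addr, unsigned char* d, unsigned len)
    { std::memcpy(d, fake_flash + addr, len); return true; }

    int main() { return run_flash_tests(); }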
I had a similar problem two or three years ago. I ported a device driver from VxWorks to Integrity. We had changed only the operating-system-dependent parts of the driver, but it was a safety-critical project, so all the unit tests and integration tests were redone. We used an automated testing tool called LDRA Testbed for our unit tests. 99% of our unit tests were done on Windows machines with Microsoft compilers. Now I'll explain how to do this.
Well, first of all, when you are doing unit testing you are testing software. When you include the real device in your tests, you are also testing the device. Sometimes there may be issues with the hardware or with its documentation. When you are designing the software, if you have described the behaviour of each function clearly, unit testing is very easy. For example, think about this function:
int readMessageTime(int messageNo, int* time)
{
    // This function calculates the message location; if the location is valid,
    // it reads the time information.
    char* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        read(address + TIME_OFFSET, time);
        return success;
    }
    else {
        return failure;
    }
}
Well, here you are just testing whether readMessageTime is doing what it is supposed to do. You do not have to test whether calculateMessageAddr calculates the right result, or whether read reads the right address; that is the responsibility of other unit tests. So what you have to do is write stubs for calculateMessageAddr and read (the OS function) and check that readMessageTime calls them with the correct parameters. This is the case if you are not accessing memory directly from your driver. You can test any kind of driver code without any OS or device with this mentality.
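A minimal sketch of such a stub-based test (simplified: everything lives in one file here, whereas in a real test build the stubs would replace the production calculateMessageAddr() and the OS read() at link time):

    #include <cassert>
    #include <cstddef>

    enum { success = 0, failure = -1, TIME_OFFSET = 4 };

    // --- stubs recording how the unit under test calls its collaborators ---
    static char  fake_memory[64];
    static int   g_lastMessageNo = -1;
    static char* g_addrToReturn  = 0;
    static char* g_lastReadAddr  = 0;

    static char* calculateMessageAddr(int messageNo) { g_lastMessageNo = messageNo; return g_addrToReturn; }
    static void  read(char* addr, int* time)         { g_lastReadAddr = addr; *time = 1234; }

    // --- unit under test (same logic as the snippet above) ---
    static int readMessageTime(int messageNo, int* time)
    {
        char* address = calculateMessageAddr(messageNo);
        if (address != NULL) { read(address + TIME_OFFSET, time); return success; }
        return failure;
    }

    int main()
    {
        int t = 0;

        g_addrToReturn = fake_memory;                         // valid address: expect success
        assert(readMessageTime(7, &t) == success);
        assert(g_lastMessageNo == 7);                         // called with the right message number
        assert(g_lastReadAddr == fake_memory + TIME_OFFSET);  // read at the right offset
        assert(t == 1234);

        g_addrToReturn = 0;                                   // invalid address: expect failure
        assert(readMessageTime(8, &t) == failure);
        return 0;
    }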
If you have mapped the device memory directly into your memory space, and the device driver reads and writes device memory as if it were its own memory, it gets a little more complicated. Using automated testing tools, you now have to watch the values of pointers and define pass/fail criteria according to the values behind those pointers. If you are reading a value from memory, you have to define the expected value. This may be hard in some cases.
There is also one more issue that always confuses developers in unit testing of drivers, such as:
int readMessageTime(int messageNo, int* time)
{
    // This function calculates the message location; if the location is valid,
    // it does some work to make the device ready to read, then
    // it reads the time information.
    char* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        do_something();       // Get the device ready to read!
        do_something_else();  // Do some other stuff so you can read the result in 3us.
        status = NOT_READY;
        while (status == NOT_READY)   // Mustn't take longer than 3us.
            status = read(address + TIME_OFFSET, time);
        return success;
    }
    else {
        return failure;
    }
}
Here, do_something and do_something_else do some work on the device to make it ready to read. Developers always ask themselves, "What if the device never becomes ready and my code deadlocks here?", and they tend to test this kind of thing on the device.
Well, you have to trust the device manufacturer and the technical author. If they say the device will be ready in 1-2us, you do not need to worry about this. If your code fails here, you have to report it to the device manufacturer; it is not your job to find a workaround for this problem. Do you see my point?
I hope this helps….
I had this exact task just two months ago.
Let me guess:
You probably have "snippets" of code that speak low-level details to the device. You know that these snippets work, but you can't get coverage on them because they have a dependency on the device drivers.
Likewise, it does not make sense to test every single line of them individually. They are never run in isolation, and your unit tests would end up looking like a mirror reflection of the production code.
For example, if you wish to start the device, you need to create a connection, pass it a specific low-level reset command, then an initialization parameter struct, etc.
And if you need to add a piece of configuration, this may require you to take it offline, add the configuration, and then bring it back online.
Stuff like that.
You do NOT want to unit test the low-level stuff. Your unit tests would then only reflect how you assume the device works, without confirming anything.
The key here is to create three items: a controller, an abstraction and an adapter implementation of that abstraction. In Cpp, Java or C# you would create either a base class or an interface to represent this abstraction. I will assume that you created an interface.
You break the snippets up into atomic operations. For example, you create methods called "start" and "add(parameter)" in the interface. You put your snippets in the device adapter.
The controller acts on the adapter through the interface.
Identify pieces of logic within the snippets that you have placed in the adapter. Then you need to decide whether this logic is low-level (protocol-handling details, etc.) or whether it is logic that should belong in the controller.
You can then test in two stages:
Have a simple test panel application that acts on the concrete adapter. This is used to confirm that the adapter actually works: that it starts when you press "start"; that, for example, if you press "go offline", "transmit(192)" and "go online" in sequence, the device responds as expected. This is your integration test.
You do not unit test the details in the adapter. You test it manually because the only success criteria is how the device responds.
However, the controller is completely unit tested. It only has a dependency on the abstraction, which is mocked out in your test code. Thus, your code has no dependency on your device driver, because the concrete adapter is not involved.
Then you write unit tests to confirm that, for instance, the method "Add(1)" actually invokes "Go offline" then "Transmit(1)" and then "Go online" on the mocked out abstraction.
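A minimal sketch of that test, with a hand-rolled mock (all names are hypothetical; a mocking framework could generate MockDevice for you):

    #include <cassert>
    #include <string>
    #include <vector>

    class IDevice {                              // the abstraction
    public:
        virtual ~IDevice() {}
        virtual void goOffline() = 0;
        virtual void transmit(int value) = 0;
        virtual void goOnline() = 0;
    };

    class DeviceController {                     // the logic under test; no driver dependency
    public:
        explicit DeviceController(IDevice& d) : dev_(d) {}
        void add(int parameter)
        {
            dev_.goOffline();
            dev_.transmit(parameter);
            dev_.goOnline();
        }
    private:
        IDevice& dev_;
    };

    class MockDevice : public IDevice {          // records the calls for verification
    public:
        std::vector<std::string> calls;
        std::vector<int> transmitted;
        void goOffline()     { calls.push_back("goOffline"); }
        void transmit(int v) { calls.push_back("transmit"); transmitted.push_back(v); }
        void goOnline()      { calls.push_back("goOnline"); }
    };

    int main()                                   // the unit test for Add(1)
    {
        MockDevice mock;
        DeviceController controller(mock);
        controller.add(1);
        assert(mock.calls.size() == 3);
        assert(mock.calls[0] == "goOffline");
        assert(mock.calls[1] == "transmit" && mock.transmitted[0] == 1);
        assert(mock.calls[2] == "goOnline");
        return 0;
    }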
The challenge here is to draw the distinction between the adapter and the controller. What goes where? What worked for me was to create the aforementioned test panel first and then manipulate the device through it.
The adapter should hide the details you will only have to change if the device changes.
If the control panel is cumbersome to operate, with lots of sequences that need to be repeated again and again, or if very device-specific knowledge is required to operate the panel, then your granularity is too high and you should group some operations together. The test panel should make sense.
If changes in end-user requirements have an impact on the adapter code, then your granularity is probably too low and you should split the operations up, so that the requirements change can be accommodated with test-driven development in the controller class.
I'd recommend application-based testing. Even if the scaffolding can be hard and costly to build, there is a lot to gain here:
a crash takes down only a process, as opposed to a whole system
ability to use the standard tool set (debugger, memory checker, ...)
overcoming the hardware availability limitation
faster feedback: no installation on the device, just compile and test
...
As far as naming is concerned, this can be called component testing.
The application can either initialize the device driver the same way the target OS does, or use the driver's internals directly. The former is more expensive but gives more coverage. Then the linker will tell you which functions are missing; stub them, possibly using exploding stubs.
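A minimal sketch of an exploding stub (hw_start_dma() is a hypothetical unresolved symbol reported by the linker): the stub satisfies the link, but aborts loudly if a test ever actually reaches the hardware-dependent code.

    #include <cstdio>
    #include <cstdlib>

    // One of these per missing symbol; this file is linked only into the test build.
    #define EXPLODING_STUB(name)                                             \
        extern "C" void name(void)                                           \
        {                                                                    \
            std::fprintf(stderr, "exploding stub hit: %s\n", #name);         \
            std::abort();                                                    \
        }

    EXPLODING_STUB(hw_start_dma)     // satisfies the linker; explodes if actually called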
Vocabulary
I don't expect to be able to simulate any of the devices which the board in question is talking to and so will probably have to test them on actual hardware itself.
Then, you are stepping out of unit testing. Maybe you could use one of these expressions instead?
Automated testing : testing happen without user input (the contrary of Manual Testing).
Integration testing : testing several components together (the contrary of Unit testing).
On a bigger scale, if you test a whole system and not just a few components together, it is called System testing.
ADDED after comments and updates in the question :
Component testing : like integration testing or System testing, but on an even smaller scale.
Note: All three (Component, Integration, and System Testing) share the same set of problems, at different scales. Unit Testing, on the contrary, does not (see below).
Advantages of "real" Unit Testing
With Integration (or System, or Component) Testing, it is certainly interesting to get some feedback, like test coverage, and it is certainly useful to do.
But it is very hard (read "very costly") to make progress beyond some point, so
I suggest you use complementary approaches, like adding some real Unit Tests. Why? :
It is very hard to simulate the edge or error conditions. (Examples: the computer clock crosses a day or year during a transaction; the network cable is unplugged; power goes down then up on some component, or on the whole system; the disk is full.) With Unit Testing, because you simulate these conditions rather than try to reproduce them, this is much easier. Unit Testing is your only chance to get really good code coverage.
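For example, a minimal sketch of simulating the first of those conditions (the clock crossing midnight during a transaction) by injecting a clock abstraction; IClock, FakeClock, and SessionLog are hypothetical names:

    #include <cassert>
    #include <ctime>

    class IClock {
    public:
        virtual ~IClock() {}
        virtual std::time_t now() const = 0;
    };

    class FakeClock : public IClock {            // the test sets any time it likes
    public:
        std::time_t t;
        explicit FakeClock(std::time_t start) : t(start) {}
        std::time_t now() const { return t; }
    };

    class SessionLog {                           // unit under test: detects a day rollover
    public:
        explicit SessionLog(const IClock& c) : clock_(c), start_(c.now()) {}
        bool crossedMidnight() const
        {
            std::time_t s = start_, n = clock_.now();
            std::tm a = *std::gmtime(&s);
            std::tm b = *std::gmtime(&n);
            return a.tm_yday != b.tm_yday || a.tm_year != b.tm_year;
        }
    private:
        const IClock& clock_;
        std::time_t start_;
    };

    int main()
    {
        FakeClock clock(1000000000);             // arbitrary start time
        SessionLog log(clock);
        clock.t += 24 * 60 * 60;                 // jump the fake clock forward a day
        assert(log.crossedMidnight());           // reproducible, no waiting until midnight
        return 0;
    }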
Integration testing takes time (because of access to external resources). You could execute thousands of unit tests during the execution of one Integration Test. So testing many combinations is only possible with Unit Tests...
Because it requires access to specific resources (hardware, licences, etc.), Integration Testing is often limited in time or scale. If the resources are shared with other projects, each project might use them only during a few hours per day. Even with exclusive access, maybe only one machine can use them, so you can't run tests in parallel. Or your company may buy a resource (licence or hardware) for production, but not have it (or not have it early enough) for development...

Test Automation with Embedded Hardware

Has anyone had success automating testing directly on embedded hardware?
Specifically, I am thinking of automating a battery of unit tests for hardware-layer modules. We need to have greater confidence in our hardware-layer code. A lot of our projects use interrupt-driven timers, ADCs, serial I/O, serial SPI devices (flash memory), etc.
Is this even worth the effort?
We typically target:
Processor: 8 or 16 bit microcontrollers (some DSP stuff)
Language: C (sometimes C++).
Sure. In the automotive industry we use $100,000 custom-built testers for each new product to verify that the hardware and software are operating correctly.
The developers, however, also build a cheaper (sub-$1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc., and either use scripting on the workstation or purpose-built HIL/SIL test software such as MxVDev.
Hardware in the Loop (HIL) testing is probably what you mean, and it simply involves some USB hardware I/O connected to the I/O of your device, with software on the computer running tests against it.
Whether it's worth it depends.
In the high reliability industry (airplane, automotive, etc) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.
In the consumer industry, with non-complex projects, it's usually not worth it.
With any project where there's more than a few programmers involved, though, it's really nice to have a nightly regression test run on the hardware - it's hard to correctly simulate the hardware to the degree needed to satisfy yourself that the software testing is enough.
The testing then shows immediately when a problem has entered the build.
Generally you perform both black box and white box testing - you have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or might be code you wrote that reacts to messages on a bus, for instance). This would be white box testing where you can see what's happening internally (and even cause some things to happen, such as critical memory errors which can't be tested without introducing the error yourself).
We also run a bunch of 'black box' tests where the diagnostic path is ignored and only the I/O is stimulated/read.
For a much cheaper setup, you can get $100 microcontroller boards with USB and/or ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.
It's especially useful for product maintenance - when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it with some knowledge (after testing) that the major functionality was not affected by your changes.
-Adam
Yes. I have had success, but it is not a straightforward problem to solve. In a nutshell, here is what my team did:
Defined a variety of unit tests using a home-built C unit-testing framework. Basically, just a lot of macros, most of which were named TEST_EQUAL, TEST_BITSET, TEST_BITCLR, etc.
Wrote a boot code generator that took these compiled tests and orchestrated them into an execution environment. It's just a small driver that executes our normal startup routine, but instead of going into the control loop, it executes a test suite. When done, it stores the last suite run in flash memory, then resets the CPU. It then runs the next suite. This is to provide isolation in case a suite dies. (However, you may want to disable this to make sure your modules cooperate. But that's an integration test, not a unit test.)
Individual tests would log their output using the serial port. This was OK for our design because the serial port was free. You will have to find a way to store your results if all your IO is consumed.
It worked! And it was great to have. Using our custom datalogger, you could hit the "Test" button, and a couple minutes later, you would have all the results. I highly recommend it.
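A minimal sketch in the spirit of that home-built framework (the macro names follow the answer, but the bodies, the serial output routine, and the example suite are invented; on the target, serial_printf() would push characters out of the serial port instead of forwarding to vprintf):

    #include <cstdarg>
    #include <cstdio>

    // Stand-in for the serial-port logger described above.
    static void serial_printf(const char* fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        std::vprintf(fmt, args);     // retarget at the UART on real hardware
        va_end(args);
    }

    static int g_failures = 0;

    #define TEST_EQUAL(actual, expected)                                        \
        do {                                                                    \
            long a_ = (long)(actual), e_ = (long)(expected);                    \
            if (a_ != e_) {                                                     \
                serial_printf("FAIL %s:%d " #actual " == " #expected            \
                              " (got %ld, want %ld)\r\n", __FILE__, __LINE__,   \
                              a_, e_);                                          \
                ++g_failures;                                                   \
            }                                                                   \
        } while (0)

    #define TEST_BITSET(value, mask)  TEST_EQUAL(((value) & (mask)), (mask))

    // Example suite: run by the test driver instead of the normal control loop.
    static void suite_gpio(void)
    {
        TEST_EQUAL(2 + 2, 4);            // placeholder for real driver checks
        TEST_BITSET(0xF0, 0x30);
    }

    int run_test_suites(void)
    {
        suite_gpio();
        serial_printf("%d failure(s)\r\n", g_failures);
        return g_failures;               // the boot code could record progress in flash
                                         // and reset the CPU between suites, as described
    }

    int main() { return run_test_suites(); }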
Updated to clarify how the test driver works.
Yes.
The difficulty depends on the type of hardware that you're trying to test. As others have said earlier the issue is going to be the complexity of the external stimulus that you need to apply. External stimulus is probably best achieved with some external test rig (as Adam Davis has described).
One thing to consider, though, is exactly what it is that you're trying to verify.
It's tempting to assume that to verify the interaction of the hardware and the firmware then you've really no option but to directly apply external stimulus (ie. applying DACs to all of your ADC inputs, etc.). In these cases, though, the corner cases that you really want to test are often going to be subject to issues of timing (eg. interrupts arriving when you're executing function foo()) which are going to be incredibly difficult to test in a meaningful way - and even harder to get meaningful results from. (ie. The first 100K times we ran this test it was fine. The last time we ran it it failed. Why?!?)
But the verification of the hardware should be done separately. Once this is done, unless it's changing regularly (through downloadable fpga images or the like) then you should be able to assume that the hardware works and purely test your firmware.
So in this case you can concentrate on verifying the algorithms that are used for processing your external stimuli, for example by calling your ADC conversion routines with a fixed value as if it had come from the ADC directly. These tests are repeatable and therefore of benefit. They will require special test builds, though.
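A minimal sketch of that kind of test (adc_counts_to_millivolts() is a hypothetical conversion routine; the 12-bit resolution and 3.3 V reference are assumptions):

    #include <cassert>

    static int adc_counts_to_millivolts(int counts)
    {
        const int kFullScaleCounts = 4095;     // 12-bit ADC (assumed)
        const int kReferenceMv     = 3300;     // 3.3 V reference (assumed)
        return (counts * kReferenceMv) / kFullScaleCounts;
    }

    int main()
    {
        // Values injected exactly as if they had been read from the ADC register.
        assert(adc_counts_to_millivolts(0)    == 0);
        assert(adc_counts_to_millivolts(4095) == 3300);
        assert(adc_counts_to_millivolts(2048) == 1650);
        return 0;
    }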
Testing the communications paths of your device is going to be relatively straightforward and shouldn't require special code builds.
We have had good results with automated testing on our embedded systems. We have tests written in high-level (easy to program and debug) languages that run on dedicated test machines. These tests generally do sanity checking or generate random inputs for the devices, then check for correct behavior. There is a lot of work involved in generating and maintaining these tests. We designed a framework and then let interns work on the tests themselves.
It's not a perfect solution, and the tests are certainly prone to errors, but the most important part is to improve on your existing coverage holes. Find the biggest hole and design something to cover it in an automated fashion, even if it isn't perfect or won't cover the entire feature. Later when all of your stuff is covered somewhat, you can come back and address the worst coverage or the most critical features.
Some things to consider:
What is the penalty of a firmware bug? How easy is it to update firmware in the field?
What kind of coverage do my tests provide? Is it a simple sanity check? Is it configurable enough that it can test many different scenarios?
Once a test has failed, how will you reproduce the failure in order to debug it? Did you log all the device and test settings so you can eliminate as many variables as possible? Device configuration, firmware version, test software version, all external inputs, all observed behavior?
What are you testing against? Is the spec clear enough about the expected behavior of the device you are testing, or are you validating against what you think the code should do?
If your goal is to test your low-level driver code you will likely need to create some sort of test fixture, using loopback cables or multiple interconnected units to allow you to exercise each driver. Pairing a board with known-good software with a board running a development build will allow you to test for regressions in communication protocols, etc.
Specific test strategies depend on the hardware you wish to test. For example, ADCs can be tested by presenting a known waveform and converting a series of samples, then checking for the proper range, frequency, average value, etc.
I have found this type of testing to be very valuable in the past, allowing me to confidently modify and improve driver code without fear of breaking existing applications.
Yes, I do this, although I've always had a serial port available for test I/O.
It is frequently difficult to leave the unit totally unmodified. Some tests require a line commented out or a call added e.g. to deal with a watchdog.
IMHO, this is better than no unit testing at all. And of course you need to be doing complete integration/system testing, too.
Unit testing embedded projects is quite difficult, as it usually requires external stimulus and external measurement.
We have been successful in developing an external serial protocol (either RS-232, UDP, or TCP/IP messages) with basic commands for exercising the hardware, with debug logging in the low-level drivers looking for erroneous or even slightly abnormal conditions (especially for limit checking).
Once developed, we can then run the tests after every build if required. It will definitely allow you to deliver a better-quality product.
If your goal is manufacturing test (ensuring that the modules are properly assembled, no inadvertent shorts/opens/etc), you should focus first on testing cables and connectors, followed by socketed and soldered connections, then the PCB itself. These items can all be tested for shorts & opens by finding access patterns that drive each individual line high while its neighbors are low and vice-versa, then reading back the lines' values.
Without knowing more details of your hardware it's difficult to be more specific, but most embedded processors can set I/O pins to a GPIO mode that simplifies this sort of testing.
If you are not performing bed-of-nails testing on your PCAs, this testing should be considered a mandatory first step for newly manufactured boards.
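A minimal sketch of that walking-pattern check, assuming hypothetical memory-mapped GPIO registers and eight connector lines looped back to an input port:

    typedef volatile unsigned int reg_t;
    static reg_t* const GPIO_OUT_DIR = (reg_t*)0x40010000;   // hypothetical addresses
    static reg_t* const GPIO_OUT     = (reg_t*)0x40010004;
    static reg_t* const GPIO_IN      = (reg_t*)0x40010008;

    static void settle(void) { for (volatile int i = 0; i < 1000; ++i) {} }

    // Returns the number of lines whose read-back did not match the driven pattern.
    int test_connector_lines(void)
    {
        int failures = 0;
        *GPIO_OUT_DIR = 0xFF;                             // all 8 lines as outputs
        for (int bit = 0; bit < 8; ++bit) {
            unsigned pattern = 1u << bit;                 // one line high, neighbours low
            *GPIO_OUT = pattern;
            settle();
            if ((*GPIO_IN & 0xFF) != pattern) ++failures; // short or open detected

            *GPIO_OUT = ~pattern & 0xFF;                  // and the inverse: one low, rest high
            settle();
            if ((*GPIO_IN & 0xFF) != (~pattern & 0xFF)) ++failures;
        }
        return failures;
    }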
I know this is old now, but maybe it will help. Yes, you can do it, but it depends on how much you want to invest in the solution. For more than two years I worked on test and validation for the MCAL layer of AUTOSAR. This is about as low as you can get when it comes to software testing. It was a sort of component-level testing. Some may call it unit level, but it was slightly higher than that, because we were testing the APIs of the MCAL components: things like ADC, SPI, ICU, DIO, and so on.
The solution used involved:
- a test framework that was running on the target micro
- a dSPACE box to provide and read signals to and from the target when required
- XCP access through Vector CANape to trigger the test execution and results collection
- a python framework to perform the test control and validation of the results
The test cases were written in C, and they were flashed onto the target along with the software under test. It was a black-box test, because we didn't alter the implementation of the MCAL in any way, and I think not even the startup sequence was touched. An idle task was used to continuously check the state of a flag that signalled the start of a test; a 10 ms task was used to actually run it. A test case was in fact a switch-case, and every case in this switch was a test step. Python triggered the test execution at the test-step level. A good thing about this approach was the reuse of steps with different parameters.
This test control (what to execute and how) was done by Python through a test-control data structure acting as an API between the test implementation and the test triggering and evaluation mechanism. This is what CANape was used for: to select the test to be executed and to read back its results. Every value obtained by a test step was stored in an array within the data structure. The test step itself wasn't involved in any validation, because the target was considered a non-trusted component of the test environment. The validation was done by Python, based on the test specifications. Python parsed these specifications and was able to automatically create test-triggering scripts, including the validation criteria for every test step.
The specification of every test case was a series of test-step descriptions together with their validation criteria. Some of these steps were dSPACE-related. As an example, one step initialized something and asked for edges to be captured on an already configured channel, and the next step applied the signal on that channel by commanding the dSPACE equipment.
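A minimal sketch of that task/flag/switch-case structure (all names and the shape of the control structure are invented; in the real setup the PC writes and reads this structure over XCP via CANape):

    enum { MAX_STEPS = 16 };

    struct TestControl {                 // written by the PC tooling, read back for results
        volatile int start;              // set by the tooling to trigger execution
        volatile int testCaseId;
        volatile int currentStep;
        volatile int results[MAX_STEPS]; // one raw value per step; validation happens on the PC
    };

    static TestControl g_testCtrl;

    static void run_step(int testCaseId, int step)
    {
        switch (step) {                  // every case is one test step; parameters could
        case 0:                          // come from the control structure for reuse
            g_testCtrl.results[0] = 1;   // e.g. return value of an MCAL init call
            break;
        case 1:
            g_testCtrl.results[1] = 42;  // e.g. a captured edge count or ADC reading
            break;
        default:
            g_testCtrl.start = 0;        // no more steps: signal completion
            break;
        }
        (void)testCaseId;
    }

    void idle_task(void)                 // background task: watches for the start flag
    {
        if (g_testCtrl.start && g_testCtrl.currentStep == 0) {
            // arm the 10 ms task
        }
    }

    void task_10ms(void)                 // periodic task: executes one step per tick
    {
        if (g_testCtrl.start)
            run_step(g_testCtrl.testCaseId, g_testCtrl.currentStep++);
    }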
A cheaper solution would involve using an in-house board instead of the dSPACE equipment. To some extent, even a programmable signal generator can be used, but that would not help if you need to validate signals output by the target.