C# How to Test Hardware Components - unit-testing

We've been given the challenge of testing our hardware service, and it's turning out to be rather difficult. Our hardware is mostly RFID readers that support Mifare DESFire, Mifare Classic and Legic. We want to start easy with DESFire. The first attempt was to write a complete mock reader that understands Mifare DESFire commands, but that's too much overhead. So I'm asking the community: how do you test your hardware components?
I don't know whether an integration test is the right way, or just unit tests that partially mock the reader for exactly the one test I'm working on. But then I never get all the way down through the code to the point where I can't mock any more.
Is there a common strategy?
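To make the "partially mock the reader" idea concrete, here is a minimal sketch (shown in C++ to match the other snippets below; in C# the same shape falls out of an interface plus a mocking framework such as Moq; IReader, selectApplication and the canned payload are all invented names):

#include <cstdint>
#include <vector>

// Invented abstraction over the RFID reader: only the calls the code
// under test actually makes are modelled, not the full DESFire command set.
struct IReader {
    virtual ~IReader() = default;
    virtual bool selectApplication(std::uint32_t aid) = 0;
    virtual std::vector<std::uint8_t> readData(std::uint8_t fileNo) = 0;
};

// Per-test fake: implements exactly the behaviour this one test needs.
struct FakeDesfireReader : IReader {
    bool selectApplication(std::uint32_t) override { return true; }
    std::vector<std::uint8_t> readData(std::uint8_t) override {
        return {0x01, 0x02};   // canned payload for the path under test
    }
};

The hardware service takes an IReader, each unit test hands it a fake covering just the command path it exercises, and only the thin adapter that implements IReader against the real reader is left to integration tests on the physical device.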

Related

UE4 Custom Dedicated Server (collision, hitboxes)

So I am currently developing a custom dedicated server in UE4 from scratch. I use RakNet as the networking engine and want to achieve the typical dedicated server that can manage adding players to the world, hitboxes, collision and verifying packets in general. What I mean by verifying here is, e.g., that if the server receives a movement packet it can decide whether the packet's sender can move there or not. I see the biggest trouble in verifying the movement packets and hitboxes, because the server has to have access to the world and collision methods of Unreal, which I don't know how to do from scratch. Is it possible to get the base server from UE4 and equip it with your own networking engine and packet handling? Or is there a better way to achieve my goal?
AFAIK there is no out-of-the-box solution for that. You need something custom. I can think of three solutions:
Just ignore collisions. Replicate what you get from clients, and that's it. Of course this is a cheap, low-effort solution prone to hacking, but maybe it's sufficient for your case?
Write custom collision handling on your server. This is a lot of work: the server would need to know all the colliders' positions, dimensions, etc. You could then use a physics library (e.g. Bullet Physics) to check collisions, or write custom collision checks.
Compile UE4 as a dedicated server. You can find more info here:
https://wiki.unrealengine.com/Dedicated_Server_Guide_(Windows_%26_Linux)
Then just use RakNet instead of the native UE4 networking.
PS. UE4's source code is over 2 million lines of sophisticated C++, so telling someone to "just download the source code and change anything you like" is not viable advice.
PPS. I'm not sure RakNet is a good choice at the moment; the project has not been updated in 5 years.

Testing client/server in gTest

I'm trying to write tests for my chat program, which is based on client/server interaction. After reading some sources I came up with the idea of using Google Mock, but the whole concept seems a little bit abstract to me. Let's say I want to test the client. In order to do that, should I write a server mock that imitates the work of an actual server, or have I got something wrong?
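For illustration, a minimal Google Mock sketch of the usual approach: mock the connection the client talks through, rather than a whole server (Connection, ChatClient and the message format are all invented names here):

#include <gmock/gmock.h>
#include <string>

// Hypothetical transport interface the chat client depends on.
class Connection {
public:
    virtual ~Connection() = default;
    virtual bool send(const std::string& msg) = 0;
    virtual std::string receive() = 0;
};

// Hypothetical client under test, written against the interface.
class ChatClient {
public:
    explicit ChatClient(Connection& c) : conn_(c) {}
    void login(const std::string& user) { conn_.send("LOGIN " + user); }
private:
    Connection& conn_;
};

class MockConnection : public Connection {
public:
    MOCK_METHOD(bool, send, (const std::string&), (override));
    MOCK_METHOD(std::string, receive, (), (override));
};

TEST(ChatClientTest, LoginSendsLoginCommand) {
    MockConnection conn;
    EXPECT_CALL(conn, send(testing::HasSubstr("LOGIN alice")))
        .WillOnce(testing::Return(true));
    ChatClient client(conn);
    client.login("alice");   // verified against the mock, no real server
}

The mock only has to imitate the pieces of server behaviour the test cares about, not the whole server.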

Where can I start with programmable Hardware?

I've had a desire to learn at least a tiny bit about programming hardware for quite some time now, and thought I'd ask here to get some starting points. I am a reasonably accomplished programmer with Delphi and Objective-C experience, but have never even listened to a device port / interrupt (I don't even know the terminology), let alone programmed a piece of hardware.
To start with, what I would like to be able to do is:
Buy a simple bit of kit with 2, 3 or 10 buttons
Plug the device into my pc via USB
Listen to the device and write some code to do something once the button is pressed.
I reckon this is a good place to start; anyone got pointers on hardware to buy, or on how I could get started?
I like the Arduino: easy to use, open source, and a great community!
Good to get started with, and it uses a subset of C/C++.
It also has a lot of add-on hardware available, like GPS, Bluetooth, WiFi, etc.
My experiences with Arduino have been nothing but good, from the point you get it out of its box (and install the free compiler on Windows / Mac / Linux), to building your first 'sketch' (a project or application for the Arduino).
Making an application is easy: you have a setup method, which is called on startup, and then a loop method, which runs repeatedly while the Arduino is running.
Then all you have to do is hook inputs or outputs up to the pins on the Arduino board, tell the code what they are, and hopefully you'll get the desired output.
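For a flavour of what that looks like, a minimal sketch (the pin numbers are just an assumption: a push button wired from pin 2 to ground, and the built-in LED on pin 13):

const int buttonPin = 2;    // push button to ground, using the internal pull-up
const int ledPin = 13;      // built-in LED on most Arduino boards

void setup() {              // runs once at startup
    pinMode(buttonPin, INPUT_PULLUP);
    pinMode(ledPin, OUTPUT);
}

void loop() {               // runs over and over while powered
    // The pull-up means the pin reads LOW while the button is pressed.
    digitalWrite(ledPin, digitalRead(buttonPin) == LOW ? HIGH : LOW);
}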
One other really good thing about the Arduino (and others, I'm sure) is that you now have a use for those old broken printers, the CD-ROM drives that no one wants, and every other little bit of outdated technology. It's amazing what you can find in a server room!
Now, I have only worked on small projects, like plugging in an LCD, reading the room temperature, and various projects like that. But based on what I have done, I am happy with the Arduino; it gives a good base for embedded programming, and if it's not enough, you can always go bigger!
My 2 cents!
There's also the hot-off-the-press Netduino, which uses the .NET Micro Framework and Microsoft Visual C# Express. I don't know that it's better than the Arduino, but it's another option.
Why don't you start with AVR programming for microcontrollers? It might be a bit too low-level, but I know many people who have started with it for hardware programming. You can find a compiler here: http://winavr.sourceforge.net/ and a good tutorial here: http://www.ladyada.net/learn/avr/
The previous poster mentioned the Arduino, but you should also consider a Teensy. It's basically the same thing, but the price is a little better. You also have the option of using it in "Arduino" mode or in raw AVR mode; I don't know if Arduinos give you both options.
There is a comparison page where you can see that the Teensy has somewhat better hardware. The built-in USB gives it much better performance.
I would definitely suggest trying out various microcontrollers. Arduino controllers are nice and have a number of tutorials.
However, it's not your only option. In school, I worked with Microchip PICs, which are also quite nice for the hobbyist scene. The nice thing about the PIC was that our microcontrollers textbook supported it, so we got to see the application as we were learning the theory.
If I understand your question right, you are not interested in embedded programming. You want to buy something that works from the beginning and control it from Windows.
When it comes to buttons, there is not much to do in Windows. These are HID controls and Windows handles all the interfacing for you. Nothing too exciting there.
In that case you can grab any joystick and use DirectInput (a part of the DirectX technology) to interact with it. With force feedback you can do some cool stuff.
A more fun project would be to buy a Wii controller and write some fun applications.
Look at this site to get some ideas of what I mean:
http://johnnylee.net/projects/wii/
Since Windows has no support for a Wii controller, you really get to do some work here :)
I see that you like Delphi, so you can take a look at AvrCo Multitasking Pascal for AVR. You can try it at http://www.e-lab.de; the MEGA8/88 version is free. There are tons of drivers, a simulator, a JTAG online debugger, and a programmer with visualization of all standard devices (to get started, you can make a simple LPT programmer with just a few resistors). It can also make programs for all the Arduino devices out there, since an AVR is at their heart. Atmel's STK500 is a good beginner board, with LEDs, switches, and a few other peripherals. If you prefer open source, then WinAVR with GCC could be your path.
As already mentioned, Arduino is a good choice. The community is large and helpful. The nice thing is that you can transition right to a "real" language by using the GCC port for AVR micros, if you want. On my latest project I did this - prototype most of it with Arduino, then re-write it in C.
Starting with buttons and LEDs is a great idea. Build some confidence in working with basic hardware first, before modding the Wii!
Some links:
Windows GCC cross compiler (1 step install) for AVR: WinAVR
An Arduino clone kit
Adafruit is another good source of starter hardware and tech advice
The embedded StackOverflow
Any program you write already interacts with hardware: there's the monitor, keyboard, mouse, speakers, etc. Getting a simple setup where your program can deal with buttons on a USB device will not teach you that much about working with hardware. It's partly a question of how low you want to go in the software stack, and how much you want to learn about what happens at the point where the software ends.
Get yourself a copy of "The Art of Electronics". It's a relatively easy read and covers everything between Ohm's law and the microprocessor; it will give you a good idea of what the complete system does.
Read it.
Check out Digikey. You can buy anything hardware-related (resistors, capacitors, ICs, low-cost boards) easily online and at reasonable prices.
Other replies mentioned the Microchip PIC and Atmel AVR, which are small and simple microcontrollers. Both companies have a wealth of application notes; check out their web sites and read through some app notes. You can get low-cost evaluation boards for the above, or something like the Arduino mentioned in other replies. Consider designing and building your own board to force yourself to learn the basics. Find a friend who is an EE or a serious hobbyist who wouldn't mind helping you with some tips.
If you want to learn more about PC hardware, you can take a look at some simple device drivers (e.g. printer or serial port) under Windows (download the WinDDK), Linux, or even DOS. Programming under something like DOS on a PC allows for relatively easy interaction with the PC hardware; you can use a printer port to read push buttons, etc.
Links (I'm a new user so I can't link directly):
www.amazon.com/Art-Electronics-Paul-Horowitz/dp/0521370957
www.digikey.com/
www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=2879
www.atmel.com/products/avr/

Unit testing device drivers

I have a situation where I need to write some unit tests for some device drivers for embedded hardware. The code is quite old and big and unfortunately doesn't have many tests. Right now, the only kind of testing that's possible is to completely compile the OS, load it onto the device, use it in real life scenarios and say that 'it works'. There's no way to test individual components.
I came across a nice thread here which discusses unit testing for embedded devices, from which I got a lot of information. I'd like to be a little more specific and ask if anyone has any 'best practices' for testing device drivers in such a scenario. I don't expect to be able to simulate any of the devices which the board in question is talking to, and so will probably have to test on the actual hardware itself.
By doing this, I hope to be able to get unit test coverage data for the drivers and coax the developers to write tests to increase the coverage of their drivers.
One thing that occurs to me is to write embedded applications that run on the OS and exercise the driver code, then communicate the results back to the test harness. The device has a couple of interfaces which I can probably use to drive the application from my test PC, so that I can exercise the code.
Any other suggestions or insights would be very much appreciated.
Update: While it may not be exact terminology, when I say unit testing I mean being able to test/exercise code without having to compile the entire OS+drivers and load it onto the device. If I had to do that, I'd call it integration/system testing.
The problem is that the pieces of hardware we have are limited in number, and they're often in use by developers fixing bugs, etc. Keeping one dedicated and connected to the machine where the CI server and automated testing run might be a no-no at this stage. That's why I'm looking for ways to test the driver without having to actually build the whole thing and upload it onto the device.
Summary
Based on the excellent answers below, I think a reasonable way to approach the problem would be to expose driver functionality using IOCTLs and then write tests in the application space of the embedded device to actually exercise the driver code.
It would also make sense to have a small program residing in the application space on the device that exposes an API to exercise the driver via serial or USB, so that the meat of the unit test can be written on a PC which communicates with the hardware and runs the test.
If the project was just being started, I think we'd have more control over the way in which the components are isolated so that testing can be done mostly at the PC level. Given the fact that the coding is already done and we're trying to retrofit the test harness and cases onto the system, I think the above approach is more practical.
Thanks everyone for your answers.
In the old days, that was how we tested and debugged device drivers. The very best way to debug such a system was for engineers to use the embedded system as a development system and, once adequate system maturity was reached, take away the original cross-development system!
For your situation, several approaches come to mind:
Add ioctl handlers: each command code exercises a particular unit test (a sketch follows this list)
With conditional compilation, add a main() to the driver which conducts functional unit tests in the driver and outputs results to stdout.
For initial ease in debugging, maybe this could be made multi-platform operable so you don't have to debug on the target hardware.
Perhaps conditional code can also emulate a loopback-style device.
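For example, the user-space side of such an ioctl test hook might look like this (the device node, the command codes and the 0-on-pass convention are all invented; they must match whatever the driver's handler implements):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Invented command codes, one per unit test in the driver. */
#define MYDRV_TEST_RESET    0x7401
#define MYDRV_TEST_LOOPBACK 0x7402

int main(void)
{
    int fd = open("/dev/mydrv", O_RDWR);   /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }
    int failures = 0;
    failures += (ioctl(fd, MYDRV_TEST_RESET) != 0);     /* 0 means pass */
    failures += (ioctl(fd, MYDRV_TEST_LOOPBACK) != 0);
    printf("%d driver test(s) failed\n", failures);
    close(fd);
    return failures != 0;
}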
The code that really is dependent on the hardware (the lowest level of the driver stack in a layered architecture) can't really be tested anywhere except on the hardware, or a high-quality simulation of the hardware.
If your driver has some component of higher-level functionality that doesn't depend directly on the hardware (e.g., a protocol handler for sending messages to the hardware in a particular format), and if that part is nicely self-contained in the code, then you could unit-test it separately in a PC-based unit-test framework.
Going back to the lowest level—if it's dependent on the hardware, then the test jig needs to include the hardware. You can make a test jig that includes the hardware, the driver, and some test software. The main thing, I think, is to get the normal product's application code out of the test, and put in some test code instead. The test code can systematically test all the driver's features and corner-cases (which the application code may not), and can also really hammer the driver intensively in a short amount of time (which the application probably doesn't). Thus it's more efficient use of your limited hardware than just running the application, and gives you better results.
If you can get a PC into the loop, then the PC might help with the testing. E.g. if you're writing a serial port driver for an embedded device, then you could:
Write test code for the embedded device that sends various known data streams.
Connect it to a PC's serial port, running test code that verifies the transmitted data streams.
The same in the other direction: the PC sends data; the embedded device receives it, verifies it, and notifies the PC of any errors.
The tests can stream data at full speed, and play with a range of different byte timings (I once found a microcontroller UART silicon bug that only appeared if bytes were sent with a ~5 ms delay between bytes).
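As a sketch of the PC side of such a serial test (POSIX calls; the port name is an assumption, the termios setup for baud rate and raw mode is omitted, and the device is assumed to send an incrementing 0..255 byte pattern):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);  /* assumed port */
    if (fd < 0) { perror("open"); return 1; }
    unsigned char expected = 0, byte;
    long mismatches = 0, count = 0;
    while (count < 100000 && read(fd, &byte, 1) == 1) {   /* fixed-length run */
        if (byte != expected)
            ++mismatches;             /* device sends 0,1,2,...,255,0,... */
        expected = (unsigned char)(byte + 1);  /* resync on what arrived */
        ++count;
    }
    printf("%ld bytes checked, %ld mismatches\n", count, mismatches);
    close(fd);
    return mismatches != 0;
}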
You could do a similar thing with an Ethernet driver or a Wi-Fi driver.
If you're testing a storage device driver, such as for an EEPROM or flash chip, the PC can't get involved in the same way. In that case, your test harness can exercise all sorts of write conditions (single-byte, block...) and verify data integrity under all sorts of read conditions.
I had a similar problem two or three years ago: I ported a device driver from VxWorks to Integrity. We changed only the operating-system-dependent parts of the driver, but it was a safety-critical project, so all the unit tests and integration tests were redone. We used an automated testing tool called LDRA Testbed for our unit tests, and 99% of them were run on Windows machines with Microsoft compilers. Now I'll explain how to do this.
Well, first of all, when you are doing unit testing you are testing software. When you include the real device in your tests, you are also testing the device, and sometimes there are issues with the hardware or its documentation. When you design the software, if you have described the behaviour of each function clearly, unit testing is very easy. For example, think about this function:
// This function calculates the message location; if the location is
// valid, it reads the time information.
int readMessageTime(int messageNo, int* time)
{
    int* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        read(address + TIME_OFFSET, time);
        return SUCCESS;
    } else {
        return FAILURE;
    }
}
Well, here you are just testing whether readMessageTime does what it is supposed to do. You do not have to test whether calculateMessageAddr calculates the right result, or whether read reads the right address; that is the responsibility of other unit tests. So what you have to do is write stubs for calculateMessageAddr and read (the OS function) and check that the function under test calls them with the correct parameters. This is the case if you are not accessing the memory directly from your driver. You can test any kind of driver code without any OS or device with this mentality.
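A sketch of such a stub (the recording variables are invented for illustration):

/* Test stub replacing the real calculateMessageAddr. It records its
   argument so the test can assert on it, and returns whatever the
   current test case dictates (NULL drives the failure path). */
static int stub_lastMessageNo;
static int* stub_addressToReturn;

int* calculateMessageAddr(int messageNo)
{
    stub_lastMessageNo = messageNo;
    return stub_addressToReturn;
}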
If you have mapped the device memory directly into your memory space, and the device driver reads and writes device memory as if it were its own, it gets a little more complicated. Using automated testing tools, you now have to watch the values of pointers and define pass/fail criteria according to those values. If you are reading a value from memory, you have to define the expected value, which may be hard in some cases.
There is also one more issue: developers always get confused when unit testing driver code like this:
// This function calculates the message location; if the location is
// valid, it does some work to make the device ready to read, then
// it reads the time information.
int readMessageTime(int messageNo, int* time)
{
    int* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        do_something();       // get the device ready to read
        do_something_else();  // so the result can be read in 3 us
        int status = NOT_READY;
        while (status == NOT_READY)   // mustn't take longer than 3 us
            status = read(address + TIME_OFFSET, time);
        return SUCCESS;
    } else {
        return FAILURE;
    }
}
Here, do_something and do_something_else do some work on the device to make it ready to read. Developers always ask themselves, "What if the device never becomes ready and my code deadlocks here?", and they tend to test this kind of thing on the device.
Well, you have to trust the device manufacturer and the technical author. If they say the device will be ready in 1-2 us, you do not need to worry about this. If your code fails here, you have to report it to the device manufacturer; it is not your job to find a workaround for this problem. Do you see my point?
I hope this helps.
I had this exact task just two months ago.
Let me guess:
You probably have "snippets" of code that speak low level details to the device. You know that these snippets work, but you can't get coverage on them because they have a dependency to the device drivers.
Likewise, it does not make sense to test every single line of it individually. They are never run in isolation, and your unit test would end up looking like a mirror reflection of the production code.
For example, if you wish to start the device, you need to create a connection, pass it a specific low level reset command, then an initialize parameter struct etc etc.
And if you need to add a piece of configuration, this may require you to take it off line, add the configuration and then take it online.
Stuff like that.
You do NOT want to test the low-level stuff. Your unit tests would then only reflect how you assume the device works, without confirming anything.
The key here is to create three things: a controller, an abstraction, and an adapter implementation of that abstraction. In C++, Java or C# you would create either a base class or an interface to represent the abstraction; I will assume that you created an interface.
You break the snippets up into atomic operations. For example, you create methods called "start" and "add(parameter)" in the interface, and you put your snippets in the device adapter.
The controller acts on the adapter through the interface.
Identify the pieces of logic within the snippets that you have placed in the adapter. Then you need to decide whether each piece is low-level (protocol handling details etc.) or logic that should belong in the controller.
You can then test in two stages:
* Have a simple test panel application that acts on the concrete adapter. This is used to confirm that the adapter actually works: that it starts when you press "start"; that, for example, if you press "go offline", "transmit(192)" and "go online" in sequence, the device responds as expected. This is your integration test. You do not unit test the details in the adapter; you test it manually, because the only success criterion is how the device responds.
* The controller, however, is completely unit tested. It only has a dependency on the abstraction, which is mocked out in your test code. Thus, your code has no dependency on your device driver, because the concrete adapter is not involved. You then write unit tests to confirm that, for instance, the method "Add(1)" actually invokes "Go offline", then "Transmit(1)" and then "Go online" on the mocked-out abstraction.
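A skeletal version of that arrangement might look as follows (all names are invented; a hand-rolled mock is shown, though a framework like Google Mock works equally well):

#include <string>
#include <vector>

// The abstraction the controller depends on.
struct IDevice {
    virtual ~IDevice() = default;
    virtual void goOffline() = 0;
    virtual void transmit(int value) = 0;
    virtual void goOnline() = 0;
};

// Controller: holds the logic, has no dependency on the device driver.
class Controller {
public:
    explicit Controller(IDevice& d) : dev_(d) {}
    void add(int parameter) {     // "add a piece of configuration"
        dev_.goOffline();
        dev_.transmit(parameter);
        dev_.goOnline();
    }
private:
    IDevice& dev_;
};

// Hand-rolled mock that records the call sequence for assertions.
struct MockDevice : IDevice {
    std::vector<std::string> calls;
    void goOffline() override { calls.push_back("offline"); }
    void transmit(int v) override { calls.push_back("transmit " + std::to_string(v)); }
    void goOnline() override { calls.push_back("online"); }
};

A unit test then calls Controller(mock).add(1) and asserts that calls came out as {"offline", "transmit 1", "online"}; no driver and no hardware involved.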
The challenge here is to draw the line between the adapter and the controller: what goes where? What worked for me was to create the aforementioned test panel first and then manipulate the device through it.
The adapter should hide the details you would only have to change if the device changes.
If the test panel is cumbersome to operate, with lots of sequences that need to be repeated again and again, or if very device-specific knowledge is required to operate the panel, then your operations are too fine-grained and you should bundle some of them together. The test panel should make sense.
If changing end-user requirements has an impact on the adapter code, then your operations are probably too coarse-grained, and you should split them up so that the requirements change can be accommodated with test-driven development in the controller class.
I'd recommend application-based testing. Even if the scaffolding can be hard and costly to build, there is a lot to gain:
a crash takes down only a process, not the whole system
the ability to use a standard tool set (debugger, memory checker, ...)
overcoming the hardware-availability limitation
faster feedback: no installation on the device, just compile and test
...
As far as naming is concerned, this can be called component testing.
The application can either initialize the device driver the same way the target OS does, or use the internals of the driver directly. The former is more expensive but gives more coverage. Then the linker will tell you which functions are missing; stub them, possibly using exploding stubs.
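An exploding stub can be as small as this (SPI_Transfer is an invented name standing in for whichever function the linker reported missing):

#include <stdio.h>
#include <stdlib.h>

/* Exploding stub: satisfies the linker, and fails loudly if the code
   under test unexpectedly reaches the hardware-dependent path. */
int SPI_Transfer(unsigned char byte)
{
    fprintf(stderr, "unexpected call: SPI_Transfer(0x%02X)\n", byte);
    abort();
}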
Vocabulary
I don't expect to be able to simulate any of the devices which the board in question is talking to and so will probably have to test them on actual hardware itself.
Then, you are stepping out of unit testing. Maybe you could use one of these expressions instead?
Automated testing: testing happens without user input (the contrary of manual testing).
Integration testing: testing several components together (the contrary of unit testing).
On a bigger scale, if you test a whole system and not just a few components together, it is called System testing.
ADDED after comments and updates in the question :
Component testing : like integration testing or System testing, but on an even smaller scale.
Note: component, integration and system testing all share the same set of problems, at different scales. Unit testing, on the contrary, does not (see below).
Advantages of "real" Unit Testing
With integration (or system, or component) testing, it is certainly interesting to get some feedback, like test coverage, and it is certainly useful to do.
But it is very hard (read: very costly) to make progress beyond a certain point, so I suggest you use complementary approaches, like adding some real unit tests. Why?
It is very hard to simulate edge or error conditions (examples: the computer clock crosses a day or year during a transaction; the network cable is unplugged; power goes down and then up on some component, or the whole system; the disk is full). With unit testing, because you simulate these conditions rather than trying to reproduce them, it is much easier. Unit testing is your only chance to get really good code coverage.
Integration testing takes time (because of access to external resources). You could execute thousands of unit tests during the execution of one integration test, so testing many combinations is only possible with unit tests...
Because it requires access to specific resources (hardware, licenses, etc.), integration testing is often limited in time or scale. If the resources are shared with other projects, each project might use them only during a few hours per day. Even with exclusive access, maybe only one machine can use them, so you can't run tests in parallel. Or your company may buy a resource (license or hardware) for production, but not have it (or not early enough) for development...

Test Automation with Embedded Hardware

Has anyone had success automating testing directly on embedded hardware?
Specifically, I am thinking of automating a battery of unit tests for hardware-layer modules. We need to have greater confidence in our hardware-layer code. A lot of our projects use interrupt-driven timers, ADCs, serial I/O, serial SPI devices (flash memory), etc.
Is this even worth the effort?
We typically target:
Processor: 8 or 16 bit microcontrollers (some DSP stuff)
Language: C (sometimes C++).
Sure. In the automotive industry we use $100,000 custom built testers for each new product to verify the hardware and software are operating correctly.
The developers, however, also build a cheaper (sub-$1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc., and either use scripting on the workstation or purpose-built HIL/SIL test software such as MxVDev.
Hardware-in-the-Loop (HIL) testing is probably what you mean; it simply involves some USB hardware I/O connected to the I/O of your device, with software on the computer running tests against it.
Whether it's worth it depends.
In the high reliability industry (airplane, automotive, etc) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.
In the consumer industry, with non-complex projects, it's usually not worth it.
With any project where more than a few programmers are involved, though, it's really nice to have a nightly regression run on the hardware; it's hard to simulate the hardware to the degree needed to satisfy yourself that software-only testing is enough.
The testing then shows immediately when a problem has entered the build.
Generally you perform both black-box and white-box testing. You have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or code you wrote that reacts to messages on a bus, for instance). This is white-box testing, where you can see what's happening internally (and even cause some things to happen, such as critical memory errors which can't be tested without introducing the error yourself).
We also run a bunch of black-box tests where the diagnostic path is ignored and only the I/O is stimulated/read.
For a much cheaper setup, you can get $100 microcontroller boards with USB and/or ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.
It's especially useful for product maintenance: when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it, knowing (after testing) that the major functionality was not affected by your changes.
-Adam
Yes, I have had success, but it is not a straightforward problem to solve. In a nutshell, here is what my team did:
Defined a variety of unit tests using a home-built C unit-testing framework: basically just a lot of macros, most of which were named TEST_EQUAL, TEST_BITSET, TEST_BITVLR, etc. (a sketch of the pattern follows this list).
Wrote a boot-code generator that took these compiled tests and orchestrated them into an execution environment. It's just a small driver that executes our normal startup routine, but instead of going into the control loop it executes a test suite. When done, it stores the last suite run in flash memory, then resets the CPU and runs the next suite. This provides isolation in case a suite dies. (However, you may want to disable this to make sure your modules cooperate; but that's an integration test, not a unit test.)
Individual tests would log their output using the serial port. This was OK for our design because the serial port was free. You will have to find another way to store your results if all your I/O is consumed.
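As an illustration, the common shape of such macros (this is a guess at the pattern, not the team's actual code):

#include <stdio.h>

/* Report through whatever channel is free; printf stands in here
   for the serial-port logger, and a counter keeps the tally. */
static int tests_failed;

#define TEST_EQUAL(actual, expected)                              \
    do {                                                          \
        if ((actual) != (expected)) {                             \
            printf("FAIL %s:%d: %s != %s\n",                      \
                   __FILE__, __LINE__, #actual, #expected);       \
            ++tests_failed;                                       \
        }                                                         \
    } while (0)

/* TEST_BITSET: assert that all bits of 'mask' are set in 'reg'. */
#define TEST_BITSET(reg, mask) TEST_EQUAL(((reg) & (mask)), (mask))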
It worked! And it was great to have. Using our custom datalogger, you could hit the "Test" button, and a couple of minutes later you would have all the results. I highly recommend it.
Updated to clarify how the test driver works.
Yes.
The difficulty depends on the type of hardware you're trying to test. As others have said, the issue is going to be the complexity of the external stimulus that you need to apply. External stimulus is probably best provided by an external test rig (as Adam Davis has described).
One thing to consider, though, is exactly what it is that you're trying to verify.
It's tempting to assume that to verify the interaction of the hardware and the firmware you've really no option but to directly apply external stimulus (i.e. applying DACs to all of your ADC inputs, etc.). In those cases, though, the corner cases that you really want to test are often subject to timing issues (e.g. interrupts arriving while you're executing function foo()), which are incredibly difficult to test in a meaningful way, and even harder to get meaningful results from. ("The first 100K times we ran this test it was fine. The last time we ran it, it failed. Why?!?")
But the verification of the hardware should be done separately. Once that is done, unless the hardware changes regularly (through downloadable FPGA images or the like), you should be able to assume that it works and purely test your firmware.
So in this case you can concentrate on verifying the algorithms that process your external stimuli; for example, calling your ADC conversion routines with a fixed value as if it came from the ADC directly. These tests are repeatable and therefore of benefit. They will require special test builds, though.
Testing the communications paths of your device is going to be relatively straightforward and shouldn't require special code builds.
We have had good results with automated testing on our embedded systems. We have tests written in high-level (easy to program and debug) languages that run on dedicated test machines. These tests generally do sanity checking or generate random inputs for the devices, then check for correct behavior. It is a lot of work to generate and maintain these tests; we designed a framework and then let interns work on the tests themselves.
It's not a perfect solution, and the tests are certainly prone to errors, but the most important part is to improve on your existing coverage holes. Find the biggest hole and design something to cover it in an automated fashion, even if it isn't perfect or won't cover the entire feature. Later when all of your stuff is covered somewhat, you can come back and address the worst coverage or the most critical features.
Some things to consider:
What is the penalty of a firmware bug? How easy is it to update firmware in the field?
What kind of coverage do my tests provide? Is it a simple sanity check? Is it configurable enough that it can test many different scenarios?
Once a test has failed, how will you reproduce the failure in order to debug it? Did you log all the device and test settings so you can eliminate as many variables as possible? Device configuration, firmware version, test software version, all external inputs, all observed behavior?
What are you testing against? Is the spec clear enough about the expected behavior of the device you are testing, or are you validating against what you think the code should do?
If your goal is to test your low-level driver code you will likely need to create some sort of test fixture, using loopback cables or multiple interconnected units to allow you to exercise each driver. Pairing a board with known-good software with a board running a development build will allow you to test for regressions in communication protocols, etc.
Specific test strategies depend on the hardware you wish to test. For example, ADCs can be tested by presenting a known waveform, converting a series of samples, and then checking for the proper range, frequency, average value, etc.
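For instance, a check over a block of samples captured while a known mid-scale DC reference is applied might look like this (a 12-bit, 3.3 V ADC with a 1.65 V input is assumed, and the tolerances are made up):

// Every sample must be near mid-scale (2048 on a 12-bit converter),
// and the mean must sit in a tighter band around it.
bool check_adc_samples(const unsigned short* s, int n)
{
    long sum = 0;
    for (int i = 0; i < n; ++i) {
        if (s[i] < 1948 || s[i] > 2148)   // roughly +/-0.08 V of noise allowed
            return false;
        sum += s[i];
    }
    long avg = sum / n;
    return avg > 2028 && avg < 2068;      // tighter bound on the average
}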
I have found this type of testing to be very valuable in the past, allowing me to confidently modify and improve driver code without fear of breaking existing applications.
Yes, I do this, although I've always had a serial port available for test I/O.
It is frequently difficult to leave the unit totally unmodified. Some tests require a line commented out or a call added, e.g. to deal with a watchdog.
IMHO, this is better than no unit testing at all. And of course you need to be doing complete integration/system testing, too.
Unit testing embedded projects is quite difficult, as it usually requires external stimulus and external measurement.
We have been successful in developing an external serial protocol (either RS-232, UDP or TCP/IP messages) with basic commands for exercising the hardware, with debug logging in the low-level drivers looking for erroneous or even slightly abnormal conditions (especially for limit checking).
Once developed, we can then run the tests after every build if required. It will definitely allow you to deliver a better-quality product.
If your goal is manufacturing test (ensuring that the modules are properly assembled, with no inadvertent shorts/opens/etc.), you should focus first on testing cables and connectors, followed by socketed and soldered connections, then the PCB itself. These items can all be tested for shorts and opens by finding access patterns that drive each individual line high while its neighbors are low, and vice versa, then reading back the lines' values.
Without knowing more details of your hardware it's difficult to be more specific, but most embedded processors can set I/O pins to a GPIO mode that simplifies this sort of testing.
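A sketch of that walking-one pattern over a 16-line GPIO bank (gpio_write_bank and gpio_read_bank are assumed HAL calls, and the lines are assumed readable while driven, e.g. through a loopback header or the pins' input stages):

extern void gpio_write_bank(unsigned value);   /* assumed HAL calls */
extern unsigned gpio_read_bank(void);

/* Drive each line high while its neighbours are low, then invert,
   reading back after each step. Returns 0 on pass. */
int test_gpio_bank(void)
{
    const unsigned mask = 0xFFFFu;            /* 16 lines */
    for (int i = 0; i < 16; ++i) {
        unsigned one = 1u << i;
        gpio_write_bank(one);                 /* this line high, rest low */
        if (gpio_read_bank() != one)
            return -1;                        /* short or open detected */
        gpio_write_bank(one ^ mask);          /* this line low, rest high */
        if (gpio_read_bank() != (one ^ mask))
            return -1;
    }
    return 0;
}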
If you are not performing bed-of-nails testing on your PCAs, this testing should be considered a mandatory first step for newly manufactured boards.
I know this is old now, but maybe it will help. Yes, you can do it, but it depends on how much you want to invest in the solution. For more than two years I worked on test and validation for the MCAL layer of AUTOSAR. This is about the lowest you can get when it comes to software testing. It was a sort of component-level testing. Some may call it unit-level, but it was slightly higher than that, because we were testing the APIs of the MCAL components: things like ADC, SPI, ICU, DIO and so on.
The solution used involved:
- a test framework that was running on the target micro
- a dSPACE box to provide and read signals to and from the target when required
- XCP access through Vector CANape to trigger the test execution and results collection
- a python framework to perform the test control and validation of the results
The test cases were written in C and flashed onto the target along with the software under test. It was a black-box test, because we didn't alter the implementation of the MCAL in any way; I think not even the startup sequence was touched. An idle task was used to continuously check the state of a flag that signalled the start of a test execution, and a 10 ms task was used to actually run the test.
A test case was in fact a switch-case; every case in the switch was a test step, and Python triggered the test execution at the test-step level. A good thing about this approach was the reuse of steps with different parameters. This test control (what to execute and how) was done by Python through a test-control data structure acting as an API between the test implementation and the test triggering and evaluation mechanism. This is what CANape was used for: to select the test to be executed and to read back its results. Every value obtained by a test step was stored in an array that was part of the data structure. The test step itself wasn't involved in any validation, because the target was considered a non-trusted component of the test environment.
The validation was done by Python, based on the test specifications. Python parsed these specifications and was able to automatically create test-triggering scripts, including the validation criteria for every test step. The specification of every test case was a series of test-step descriptions together with their validation criteria. Some of these steps were dSPACE-related: for example, one step initialized something and asked for some edges to be captured on an already configured channel, and the next step applied the signal on that channel by commanding the dSPACE equipment.
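Reduced to a sketch, that test-case-as-switch pattern might look like this (all names are invented; the real setup split the work between an idle task and the 10 ms task):

/* The PC writes test_step and test_flag over XCP (via CANape), then
   polls test_flag and reads test_result back for validation in Python. */
extern int Adc_Init_UnderTest(void);          /* assumed wrappers around  */
extern int Adc_ReadChannel_UnderTest(int ch); /* the MCAL APIs under test */

volatile int test_flag;        /* 0 = idle, 1 = execution requested */
volatile int test_step;        /* selects the case to run           */
volatile int test_result[32];  /* one slot per step                 */

void Task_10ms(void)           /* the 10 ms task that runs the test */
{
    if (!test_flag)
        return;
    switch (test_step) {
    case 0:
        test_result[0] = Adc_Init_UnderTest();
        break;
    case 1:
        test_result[1] = Adc_ReadChannel_UnderTest(0);
        break;
    }
    test_flag = 0;             /* completion signal back to the PC  */
}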
A cheaper solution would involve using an in-house board instead of the dSPACE equipment. To some extent, even a programmable signal generator can be used, but that would not help if you need to validate signals output by the target.