Has anyone used a package like CppUnit to cross-compile C++ unit tests to run on an embedded platform?
I'm using G++ on a Linux box to compile executables that must run on a LynxOS board. I can't seem to get any of the common unit test packages to configure and build in a way that produces test executables for the target.
I see a lot of unit test packages, CppUnit, UnitTest++, GTest, CppUTest, etc., but very little about using these packages in a cross-compiler scenario. The ones with a "configure" script imply that this is possible, but I can't seem to get them to configure and build.
My practice when unit testing code that is cross compiled is to compile the unit tests themselves using the native toolchain -- usually some flavor of x86 compiler. These unit tests execute on the build machine rather than on the embedded target. If you're writing strict unit tests (as opposed to integration tests) with stubs and mocks you shouldn't have dependencies on embedded hardware. If not... it's never too late to start.
One added benefit of this approach is that for non-x86 embedded targets, this type of unit testing helps flush out endianness issues, uninitialized variables and other interesting bugs.
./configure --prefix=/sandBox --build=`config.guess` --host=sh4-linux
sh4-linux is the platform where you want to run the program.
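If the package also needs the cross tools spelled out, the usual autotools idiom is to pass them in the environment (assuming an sh4-linux- prefixed toolchain on your PATH):

CC=sh4-linux-gcc CXX=sh4-linux-g++ ./configure --prefix=/sandBox --build=`config.guess` --host=sh4-linux
make && make install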
You might want to look at CxxTest. I have not used it for cross compilation, but it is based entirely on headers and a Python script - no compiled library. It might be easier to adapt than others.
I'm not providing an answer here, but I wouldn't take the advice of NOT running your unit tests on different targets: you still need to, preferably with both system and unit tests.
Otherwise simple things like alignment errors on ARM and other embedded CPUs will not get caught.
To cross-compile CppUTest (v3.3), I had to override the LD, CXX and CC make variables.
To get both the CppUTest and CppUTestExt (for CppUMock) libraries and their tests built I used the following commands from the CPPUTEST_HOME directory:
To build libCppUTest.a:
make all LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
To build libCppUTestExt.a (for CppUMock):
make extensions LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
You can then copy the CppUTest_tests and CppUTestExt_tests executables that are produced in your CPPUTEST_HOME to your target device and execute them there.
Assuming CppUTest passes its own tests on your target, you're then ready to develop your tests with CppUTest. Just link your test code with the cross-compiled CppUTest libraries and copy the resulting executable to your target, then run it to get unit test results from the target platform itself.
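A minimal sketch of that link step (the test source name is illustrative, and the lib/ and include/ layout assumes a default CppUTest build):

sh4-linux-g++ -I$CPPUTEST_HOME/include my_tests.cpp \
    -L$CPPUTEST_HOME/lib -lCppUTestExt -lCppUTest -o my_tests

Here my_tests.cpp provides a main() that calls CppUTest's CommandLineTestRunner::RunAllTests(argc, argv).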
It sounds like you need the unit test library compiled for your target OS and architecture as well as for your dev/build machine(s). I prefer the Boost++ unit test framework for this. You can sometimes download something prebuilt for your architecture, but you will usually have to compile it yourself. I found a few solutions by googling for how to cross-compile Boost (e.g. http://goodliffe.blogspot.com/2008/05/cross-compiling-boost.html). CppUnit might be easier to cross-compile; I haven't tried. The general principle is the same: you compile the same library version for your development architecture and for your target machine.
My setup for new targets is to compile the necessary Boost++ libraries for my target OS/arch and then write tests to link against both Boost++ libraries and the code to be tested.
The benefit is that you can link against your x86 Linux Boost++ libs or against your target Boost++ libs, thus you can run the tests on both your target and your dev/build machine(s).
My general setup looks like this:
libs/boost/<arch>/<boost libs>
src/foo.{cpp,h}
tests/test_foo.cpp
build/foo
build/test_foo.<arch>
I put the compiled Boost++ libs for the different architectures I need in the libs/ dir for all my projects and reference those libs in my Makefiles. The source and the tests get built with an arch variable passed to the make command; that way I can run test_foo.x86 on my dev machine and test_foo.{arm,mips,ppc,etc.} on my targets.
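A sketch of how the Makefile side of this can look (toolchain names and the Boost include path are illustrative; the recipe line must be tab-indented):

ARCH ?= x86
CXX_x86 := g++
CXX_arm := arm-linux-gnueabi-g++
CXX := $(CXX_$(ARCH))

build/test_foo.$(ARCH): tests/test_foo.cpp src/foo.cpp
	$(CXX) -Isrc -Ilibs/boost/$(ARCH)/include $^ -Llibs/boost/$(ARCH) -lboost_unit_test_framework -o $@

Then a plain make builds the x86 test to run locally, while make ARCH=arm build/test_foo.arm cross-builds it for the target.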
I'm setting up a C++ project using CMake and Catch/gmock for unit testing (the exact frameworks aren't very important; unit testing frameworks work similarly). I'm targeting the Windows (MSVC compiler) and Linux platforms. I'd like to put all the tests in a single executable as described in Catch's tutorial. For unit testing purposes I will probably write some fake implementations/mocks. I'm afraid that when I build everything into one executable (test sources, project sources, fake implementations) I will get multiple-definition linker errors, since there will be more than one definition of some functions: the true ones and the fake ones (illustrated after the list below). Possible solutions I see now:
Build multiple executables, each with the right implementations - I see this as the worst solution, because I will end up with a bunch of executables that I have to run to test the program (probably via some script, but that's not a convenient option).
Compile each test with its dependencies into a shared library, and link them all into the main test-runner executable. I think this will be quite hard to achieve, especially since I'm not familiar with Linux shared libraries. For Windows it should be doable with some dllexports, but it's not really easy.
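For illustration, the collision I'm worried about (hypothetical function name):

// src/hardware.cpp - the real implementation
int read_sensor() { return 0; /* would talk to the device */ }

// tests/fake_hardware.cpp - the fake used by the tests
int read_sensor() { return 42; }

// Linking both files into one test executable produces a
// "multiple definition of `read_sensor()'" error from the linker.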
How should I solve this problem? What are real-world solutions for this? Maybe I'm missing something really simple and looking for a nonexistent problem? I'd like a reasonably easy, multi-platform solution.
I used to work in Java, using Ant/Maven to make the build process easier.
I used to write the unit tests in the JUnit framework, and when I said "build", all the unit tests were also run.
So there, "build" meant compiling the source code and running the unit tests against that compiled code.
Now I have started working with C++ on a new project in Visual C++. Here, when I say "build", only the source is compiled and linked with libraries; the unit tests are not run.
So now I am confused about the actual definition of the build process.
Does the build process include running the tests too? Or is it just compiling and linking the source code?
Ant and Maven "know" about unit tests. If an executable is marked as test, it will be run during the build process.
Visual Studio, AFAIK, does not have this concept. Even if an executable is written using some *Unit framework, Visual Studio will build it but not run it (by default). It is possible to run tests during the build process, but you need to add a post-build step to the test program's project to run the executable.
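For example, in the test project's Post-Build Event, a command line of

"$(TargetPath)"

runs the freshly built test executable, and a nonzero exit code (i.e., failing tests) fails the build.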
Disclaimer: The last version I used was VS 2003.NET; things might have improved (or gotten worse) since then.
Nowadays, I use the build tool with the greatest flexibility: make. In the project I'm currently working on, there's a build target named 'all' which builds every executable and test program, a target named 'check' which runs every test program (and builds them if needed), a target named 'coverage' which computes code coverage, and a target named 'run' which does all of the above. This way I get to choose what to run.
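A sketch of that kind of Makefile (target names and layout are illustrative, not the actual project's; recipe lines must be tab-indented):

all: app test_app

app: src/main.cpp
	$(CXX) $(CXXFLAGS) -o $@ $^

test_app: tests/test_main.cpp
	$(CXX) $(CXXFLAGS) -o $@ $^

check: test_app
	./test_app

# rebuild from clean with instrumentation, run the tests, summarize coverage
coverage: CXXFLAGS += --coverage
coverage: clean check
	gcov tests/test_main.cpp

run: all check coverage

clean:
	$(RM) app test_app *.gcno *.gcda *.gcov

.PHONY: all check coverage run clean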
I hope that you can see from this that "what does the build process include" depends on how the build system is set up. There isn't a universally defined concept of "build process".
I am looking to integrate a non-trivial cross-platform build system for a project predominantly written in C++. I've evaluated CMake and SCons so far, and while they both represent an improvement over (GNU) make, neither approach seemed either elegant or transparent in the context I was trying to use these tools in. This brought me to Boost.Build (bjam), and I am encouraged that, given my project depends on Boost, bjam should already be available for any viable target platform.
I've run into difficulty trying to neatly integrate code-coverage for unit tests of a library... with a view to eventual integration into a build server such as Jenkins. While I'm willing to be guided by Bjam best/standard practice, I think I need three distinct "variants":
release - to build optimised static library only
debug - to build non-optimised static library and unit tests
coverage - to build coverage-enabled library and link with non-coverage enabled unit tests.
Essentially, in addition to the standard debug and release builds, I'd like a special purpose debug build that also collects coverage data.
I need to build with (at least) g++ and msvc... and use gcov switches only with g++. This means my library target needs different "compilerflags" to the unit-test executable target... and only for one of my compiler suites... and for only one variant.
I am unclear how best to achieve this with Bjam - though, I suspect, it should be a fairly common use case. Does Bjam have explicit support for gcov coverage analysis (possibly presenting results using lcov)? If not, can anyone recommend a strategy which would support the above (simplified) scenario?
I'm pretty confident that the answer to your first question--whether bjam has explicit support for gcov--is a definite no, because like debug and release build configurations, bjam would consider that to be a feature variant for the user to define.
For bjam, it looks like there are a couple ways to do what you want:
Define your own feature variant and then update the CONFIG_COMMAND for any custom flags.
Define/redefine a toolset.
For CMake, consider following the pattern that ITK does:
http://cmake.org/Wiki/ITK/Policy_and_Procedure_for_Adding_Dashboards#Configuring_GCOV_Coverage
I have the same need and I basically added the lines below to define my own coverage variant in my Jamroot file.
variant coverage : debug : <cxxflags>--profile-arcs <cxxflags>--test-coverage <cxxflags>--coverage <link>shared ;
lib gcov : : <name>gcov : ;
unit-test mytest : tests/mytest.cpp libboost_unit_test : <variant>coverage:<library>gcov ;
The coverage data is created when the test is run, and I process it afterwards outside of bjam using gcov.
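To collect and read the data (paths are illustrative):

bjam variant=coverage    # builds and runs the unit-test target, leaving .gcno/.gcda files behind
gcov tests/mytest.cpp    # summarizes line coverage; use gcov's -o option if the objects live elsewhere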
I have a small but non-trivial project which, for architectural reasons, is built as three separate projects. They are inter-dependent, so unless I'm particularly focused, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as such:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all of that works as expected).
The libmyotherproject is some platform specific support code, not directly connected to libmyproject, it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build and run cmake, the individual components are built correctly (using add_subdirectory()) from CMake, but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful in what you should do:
USAGE
ctest [options]
DESCRIPTION
The "ctest" executable is the CMake test driver program. CMake-generated build trees created for
projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will
run the tests and report results.
I would have expected that, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, they would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.
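A minimal sketch of the resulting top-level CMakeLists.txt (project name is illustrative):

cmake_minimum_required(VERSION 3.5)
project(myproject CXX)

include(CTest)   # calls enable_testing() and sets up the dashboard targets

add_subdirectory(src/libmyproject)
add_subdirectory(src/libmyotherproject)
add_subdirectory(src/command-line-application)

With include(CTest) before the add_subdirectory calls, the add_test() registrations in the subdirectories land in the top-level test list, so ctest run from /build finds them.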
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are simply not an option, as they cost our customers thousands to distribute. Because of this we have very strict code quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make, that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
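To give a flavor of that idea in plain GNU make rather than our Perl tool (macro and file names are hypothetical; recipe lines must be tab-indented):

# expand a one-line declaration into a build rule plus a 'check' hook
define UNIT_TEST
$(1): $(2)
	$$(CXX) $$(CXXFLAGS) -o $$@ $$^
check:: $(1)
	./$(1)
endef

$(eval $(call UNIT_TEST,test_foo,tests/test_foo.cpp))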
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
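For a flavor of what those tests look like, a shunit2 test script is roughly (script and function names are hypothetical):

#!/bin/sh
# test_report.sh - checks a helper that formats test results
testFailureCountFormatting() {
  output=$(./format_results.sh --failures 3)
  assertEquals "3 failures" "$output"
}

. ./shunit2    # source the framework, which discovers and runs the test* functions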
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective), and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
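A minimal sketch of that check (file names are illustrative; comparing the list of produced artifacts is less fragile than byte-for-byte comparison, which embedded timestamps can break):

# build the known-good source tree with the new build files
./configure && make all
# compare the produced artifacts against a recorded manifest
find build -type f | sort > artifacts.txt
diff expected_artifacts.txt artifacts.txt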
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only some variables (which I moved into an easy-to-recognize section). That's why, for me, unit-testing the build files is unnecessary. That may be different in other projects.