I am looking to integrate a non-trivial cross-platform build system for a project predominantly written in C++. I've evaluated CMake and SCons so far, and while they both represent an improvement over (GNU) make, neither approach seemed elegant or transparent in the context I was trying to use these tools. This brought me to Boost Build (Bjam), and I am encouraged that, given my project depends on Boost, bjam should already be available for any viable target platform.
I've run into difficulty trying to neatly integrate code-coverage for unit tests of a library... with a view to eventual integration into a build server such as Jenkins. While I'm willing to be guided by Bjam best/standard practice, I think I need three distinct "variants":
release - to build optimised static library only
debug - to build non-optimised static library and unit tests
coverage - to build coverage-enabled library and link with non-coverage enabled unit tests.
Essentially, in addition to the standard debug and release builds, I'd like a special purpose debug build that also collects coverage data.
I need to build with (at least) g++ and msvc... and use gcov switches only with g++. This means my library target needs different compiler flags from the unit-test executable target... and only for one of my compiler suites... and for only one variant.
I am unclear how best to achieve this with Bjam - though, I suspect, it should be a fairly common use case. Does Bjam have explicit support for gcov coverage analysis (possibly presenting results using lcov)? If not, can anyone recommend a strategy which would support the above (simplified) scenario?
I'm pretty confident that the answer to your first question--whether bjam has explicit support for gcov--is a definite no, because like debug and release build configurations, bjam would consider that to be a feature variant for the user to define.
For bjam, it looks like there are a couple ways to do what you want:
Define your own feature variant and then update the CONFIG_COMMAND for any custom flags.
Define/redefine a toolset.
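For the first option, a minimal sketch might look like this (the feature name and the target are hypothetical, and it assumes Boost.Build's feature module; the toolset condition keeps the gcov flags away from msvc):

import feature ;

# define an optional "coverage" feature that expands to gcov flags
feature.feature coverage : off on : composite optional propagated ;
feature.compose <coverage>on : <cxxflags>--coverage <linkflags>--coverage ;

# request it per target, but only for gcc, where the flags make sense
lib mylib : src/mylib.cpp : <toolset>gcc:<coverage>on ;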
For CMake, consider following the pattern that ITK does:
http://cmake.org/Wiki/ITK/Policy_and_Procedure_for_Adding_Dashboards#Configuring_GCOV_Coverage
I have the same need and I basically added the lines below to define my own coverage variant in my Jamroot file.
variant coverage : debug : <cxxflags>-fprofile-arcs <cxxflags>-ftest-coverage <cxxflags>--coverage <link>shared ;
lib gcov : : <name>gcov : ;
unit-test mytest : tests/mytest.cpp libboost_unit_test : <variant>coverage:<library>gcov ;
The coverage data is created when the test is run, and I process it afterwards outside of bjam using gcov.
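If you want the lcov presentation mentioned in the question, something along these lines works after the test run (a sketch; the bin directory stands for wherever bjam put the coverage-variant objects):

# collect the .gcda/.gcno data and render an HTML report
lcov --capture --directory bin --output-file coverage.info
genhtml coverage.info --output-directory coverage-report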
I'm setting up a project in C++ using CMake and Catch/gmock for unit testing (the exact framework is not very important; unit testing frameworks work similarly). I'm targeting Windows (MSVC compiler) and Linux platforms. I'd like to put all the tests in a single executable as described in Catch's tutorial. For unit testing purposes I will probably make some fake implementations/mocks. I'm afraid that when I build everything into one executable (files with test sources, project sources, fake implementations) I will get linker multiple-definition errors, since there will be multiple definitions of some functions: the true and the fake ones. Possible solutions I see now:
Build multiple executables, each with the right implementations - I see this as the worst solution, because I will end up with a bunch of files which I will have to execute to test the program (probably via some script, but it's not a convenient option).
Compile each test with its dependencies into a shared library, and link all of them into the main test-runner executable. I think this will be quite hard to achieve, especially as I'm not familiar with Linux shared libraries. For Windows it should be doable with some dllexports, but it is not really easy.
How should I solve this problem? What are the real-world solutions for this? Maybe I'm missing something really simple and looking for a nonexistent problem? I'd like a reasonably easy, multi-platform solution.
I found a similar topic: What are the differences between Autotools, Cmake and Scons? , but my question is a little different and I think the answers could be different too.
I found a lot of articles saying that waf is unstable (API changes), is not yet ready for production, etc. (but all of these articles are 2 or 3 years old).
Which of these build tools should be used if I want to:
create a big C++(11) project - let's say a complex compiler
use it with LLVM
be sure it will be flexible and simple to use
be sure it will be fast enough
compile under all standard platforms (the base platform is Linux, but I want to compile under Windows and MacOSX as well)
Reading a lot of articles, I found CMake and waf to be the "best" tools available, but I have no experience with them and it is really hard to find any comparison that is not very biased (like the comparison by the SCons author) and not very old.
waf covers nearly all your requirements ...
API change: not a problem, as waf is meant to be included in the source tarball (<100 KB)
big project: you can split your configuration across subdirectories (the contexts can be inherited). I've worked on projects with more than 10k files in C/C++/Fortran77/Fortran90/Python/Cython, including documentation in doxygen/sphinx.
flexibility and ease of use: you can add extra modules in Python (http://code.google.com/p/waf/wiki/AddingNewToolsToWaf)
fast: tasks can be run in parallel: http://www.retropaganda.info/~bohan/work/psycle/branches/bohan/wonderbuild/benchmarks/time.xml
multi-platform: you can run waf wherever Python is available, and that includes Windows and MacOS. waf is compatible with msvc, gcc, icc, and other compilers. You can produce Visual Studio/Eclipse project files.
... but waf seems to have an issue with llvm: http://code.google.com/p/waf/issues/detail?id=1252
EDIT: as noted by Wojciech Danilo, the LLVM issue has since been fixed.
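For a feel of the tool, a minimal wscript looks something like this sketch (target and source names are hypothetical):

# wscript - waf's build script is plain Python
def options(ctx):
    ctx.load('compiler_cxx')

def configure(ctx):
    ctx.load('compiler_cxx')                          # detects g++/msvc/icc...
    ctx.env.append_value('CXXFLAGS', ['-std=c++11'])

def build(ctx):
    ctx.program(source='src/main.cpp', target='mycompiler')

You then run ./waf configure build from the project root, with -jN for parallel builds.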
I'm currently using CMake for my own language implementation via C++11 and LLVM.
I like CMake for its easy-to-use syntax. LLVM can be pulled in with an easy find_package command. After that you can use all the headers and libraries you need. CMake lets child scripts inherit variables from parent scripts, so you do not need to set variables and load packages in every subdirectory.
C++11 support depends on the compiler you want to use. All in all, CMake is just a layer that generates your 'real' build script.
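A minimal sketch of that setup might look like this (project and target names are hypothetical; it assumes an LLVM installation that ships its CMake package files):

cmake_minimum_required(VERSION 2.8)
project(mylang CXX)

# locate an installed LLVM and pick up its headers and definitions
find_package(LLVM REQUIRED CONFIG)
include_directories(${LLVM_INCLUDE_DIRS})
add_definitions(${LLVM_DEFINITIONS})

add_executable(mylang src/main.cpp)

# map the LLVM components used onto library names and link them
llvm_map_components_to_libnames(llvm_libs core support)
target_link_libraries(mylang ${llvm_libs})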
When you're using make you can use make's --jobs=N option to speed up compilation on multicore platforms. On Windows you could generate Visual Studio 2012 project files and use Microsoft's build system, with its parallel build jobs, to speed up the compilation process.
You should always create a subfolder for build-files (myproject/build or something). This way you keep your source tree clean (cd build; cmake ..; cd ..).
I can't speak for all the other tools out there.
I have the source code of a VC++ project. Now I am using Linux.
I know how to compile a single .cpp file, but not a whole project. So how do I compile a VC++ project using g++?
A slight advantage of Makefiles would be possible integration with autotools (cough - it might prove handy as a starting point for feature macros).[2]
There is a tool shipped as part of Wine, winemaker, that is EXCEEDINGLY helpful for fixing up a source tree that assumed case-insensitive names so it works on a case-sensitive filesystem (it was intended mainly for building against winelib, but that is not required).
If you want to keep using Windows APIs for some parts of the code, you can consider compiling with winelib (and use winegcc, producing Win32 executables; I'm not sure whether this is what you want).
[2]: SCons is a very nice tool though
The first step would be to generate a Makefile from the .vcproj file.
There are (obviously) some tools for that:
http://www.codeproject.com/KB/cross-platform/sln2mak.aspx
There is no easy way to do it. As others have suggested, you can figure out how the build process works for this project (maybe by reading the build output in VS) and recreate that using your favorite Linux build tool (scons, cmake, autotools, etc.). The alternative is to use a converter tool. Aside from the sln2mak tool mentioned above, there is also winemaker. The docs for winemaker contain a lot of old info, like most Linux tool docs, but it can convert a .sln to a makefile. I am not sure about newer VS .sln files.
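For reference, a typical winemaker invocation is a one-liner along these lines (a sketch; check winemaker's man page for the options your version supports):

# lowercase mismatched file names and generate a Makefile for the tree
winemaker --lower-uppercase .
make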
Has anyone used a package like CppUnit to cross-compile C++ unit tests to run on an embedded platform?
I'm using G++ on a Linux box to compile executables that must be run on a LynxOS board. I can't seem to get any of the common unit test packages to configure and build something that will create unit tests.
I see a lot of unit test packages, CppUnit, UnitTest++, GTest, CppUTest, etc., but very little about using these packages in a cross-compiler scenario. The ones with a "configure" script imply that this is possible, but I can't seem to get them to configure and build.
My practice when unit testing code that is cross compiled is to compile the unit tests themselves using the native toolchain -- usually some flavor of x86 compiler. These unit tests execute on the build machine rather than on the embedded target. If you're writing strict unit tests (as opposed to integration tests) with stubs and mocks you shouldn't have dependencies on embedded hardware. If not... it's never too late to start.
One added benefit of this approach is that for non-x86 embedded targets, this type of unit testing helps flush out endianness issues, uninitialized variables and other interesting bugs.
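As an illustration, consider this sketch (the routine names are hypothetical): both asserts pass when the test runs on a little-endian x86 build machine, but the naive read fails as soon as the same test is cross-compiled for and run on a big-endian target.

#include <cassert>
#include <cstdint>
#include <cstring>

// portable little-endian read: result is host-independent
uint32_t read_u32_le(const unsigned char* buf) {
    return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (uint32_t(buf[3]) << 24);
}

// non-portable read: memcpy reinterprets the bytes in host byte order
uint32_t read_u32_naive(const unsigned char* buf) {
    uint32_t v;
    std::memcpy(&v, buf, sizeof v);
    return v;
}

int main() {
    const unsigned char buf[4] = {0x78, 0x56, 0x34, 0x12};
    assert(read_u32_le(buf) == 0x12345678);    // passes everywhere
    assert(read_u32_naive(buf) == 0x12345678); // fails on big-endian hosts
    return 0;
}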
./configure --prefix=/sandBox --build=`config.guess` --host=sh4-linux
Here sh4-linux is the platform where you want to run the program (the --host), while config.guess detects the platform you are building on (the --build).
You might want to look at CxxTest. I have not used it for cross compilation, but it is based entirely on headers and a Python script - no compiled library. It might be easier to adapt than others.
I'm not providing an answer here, but I wouldn't take the advice of NOT running your unit tests on your different targets: you still need to, preferably for both system and unit tests.
Otherwise simple things like alignment errors on ARM/other embedded CPUs will not get caught.
To cross-compile CppUTest (v3.3), I had to override the LD, CXX and CC make variables.
To get both the CppUTest and CppUTestExt (for CppUMock) libraries and their tests built I used the following commands from the CPPUTEST_HOME directory:
To build libCppUTest.a:
make all LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
To build libCppUTestExt.a (for CppUMock):
make extensions LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
You can then copy the CppUTest_tests and CppUTestExt_tests executables that are produced in your CPPUTEST_HOME to your target device and execute them there.
Assuming CppUTest passes its own tests on your target, you're then ready to develop your tests with CppUTest. Just link your test code with the cross-compiled CppUTest libraries and copy the resulting executable to your target. Then run it to get unit test results from the target platform itself.
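That final link step might look like this sketch (file names are hypothetical; point -L at wherever the cross-built .a files actually landed):

sh4-linux-g++ -I$CPPUTEST_HOME/include mytests.cpp \
    -L$CPPUTEST_HOME/lib -lCppUTestExt -lCppUTest -o mytests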
It sounds like you need the unit test library compiled for your target OS and architecture as well as for your dev/build machine(s). I prefer the Boost unit test framework for this. You can sometimes download something prebuilt for your architecture, but you will usually have to compile it yourself. I found a few solutions by googling for how to cross-compile Boost (e.g. http://goodliffe.blogspot.com/2008/05/cross-compiling-boost.html). CppUnit might be easier to cross-compile; I haven't tried. The general principle is the same: you compile the same library version for your development architecture and for your target machine.
My setup for new targets is to compile the necessary Boost libraries for my target OS/arch and then write tests that link against both the Boost libraries and the code to be tested.
The benefit is that you can link against your x86 Linux Boost libs or against your target's Boost libs, so you can run the tests on both your target and your dev/build machine(s).
My general setup looks like this:
libs/boost/<arch>/<boost libs>
src/foo.{cpp,h}
tests/test_foo.cpp
build/foo
build/test_foo.<arch>
I put the Boost libs compiled for the different architectures I need in the libs/ dir for all my projects and reference those libs in my Makefiles. The sources and the tests get built with an arch variable passed to the make command (see the sketch below); that way I can run test_foo.x86 on my dev machine and test_foo.{arm,mips,ppc,etc.} on my targets.
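In Makefile terms the idea boils down to something like this sketch (compiler names, library names and paths are hypothetical; note the recipe line must be tab-indented):

# pick the toolchain and Boost libs by ARCH; 'make ARCH=arm' cross-builds
ARCH ?= x86
CXX_x86 := g++
CXX_arm := arm-linux-gnueabi-g++
CXX := $(CXX_$(ARCH))

build/test_foo.$(ARCH): tests/test_foo.cpp src/foo.cpp
	$(CXX) $^ -Isrc -Ilibs/boost/$(ARCH)/include \
	    -Llibs/boost/$(ARCH) -lboost_unit_test_framework -o $@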
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make, that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework to verify that the tool generates the correct Makefile given our higher-level file (see the sketch after this list). If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
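The Makefile-generator checks mentioned above amount to something like this sketch (the generator module, its API and the tag file are all hypothetical):

use strict;
use warnings;
use Test::More tests => 1;
use MakefileGenerator;   # our in-house generator (hypothetical)

# feed it a higher-level file tagged "shared library" and check the output
my $makefile = MakefileGenerator::generate('testdata/shared_lib.tags');
like($makefile, qr/^libfoo\.so:/m,
     'shared-library tag produces a shared-object target');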
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
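In script form that could be as simple as this sketch (the tag, paths and targets are hypothetical; a byte-for-byte compare needs deterministic archives, e.g. ar's D flag):

# rebuild a pinned, known-good revision with the new build files
git checkout v1.2.3
make clean all
# compare against the artifact produced by the validated tool chain
cmp build/libfoo.a golden/libfoo.a && echo "build files OK"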
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, it is unnecessary to unit-test the build files. That can be different in other projects.