Building a project with unit tests in one executable - C++

I'm setting up a project in C++ using CMake and Catch/gmock for unit testing (the exact framework isn't very important; most unit testing frameworks work similarly). I'm targeting Windows (MSVC compiler) and Linux platforms. I'd like to put all the tests in a single executable, as described in Catch's tutorial. For unit testing purposes I will probably write some fake implementations/mocks. I'm afraid that when I build everything into one executable (test sources, project sources, fake implementations) I will get linker multiple-definition errors, since there will be multiple definitions of the same functions: the real ones and the fake ones. Possible solutions I see now:
Build multiple executables, each with the right implementations - I see this as the worst solution, because I would end up with a bunch of files which I would have to execute to test the program (probably via some script, but that's not a convenient option).
Compile each test together with its dependencies into a shared library, all of which would then be linked with a main test runner executable. I think this would be quite hard to achieve, especially since I'm not familiar with Linux shared libraries. On Windows it should be doable with some dllexports, but it is not really easy.
How should I solve this problem? What are the real-world solutions for this? Maybe I'm missing something really simple and am looking for a nonexistent problem? I'd like a fairly easy, multi-platform solution.
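For what it's worth, one common CMake layout avoids the multiple-definition problem entirely: keep the code under test in a static library, keep the real and fake implementations in separate translation units, and let each target choose which one it links. A minimal sketch, with all file and target names made up for illustration:

# Code under test, excluding the pieces that will be faked
add_library(core STATIC core.cpp parser.cpp)

# Only the application links the real implementation
add_executable(app main.cpp network_real.cpp)
target_link_libraries(app core)

# Only the single test runner links the fakes, so the real and
# fake definitions never appear in the same link
add_executable(tests tests_main.cpp core_tests.cpp network_fake.cpp)
target_link_libraries(tests core)

Since each symbol is defined only once per link, the linker never sees a real and a fake definition of the same function together.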

Related

Unit Testing legacy C++ Code with CPPUnit

I am tasked with managing a large code base written in VC++ 6.0, and I need to start building unit tests for portions of the code. I have set up CPPUnit and it works with my project's DLLs. The problem I am facing is as follows: the legacy application is made up of 10 static libraries and one huge MFC executable that contains 99% of the code. My unit test framework runs in another project within the same workspace and tests the 10 libraries with no problem - all includes and references are fine - but when I try to do the same for the large MFC application I get a linker error, as I do not have a DLL for the application. Is there any way to unit test the application without putting the test code directly inside the application?
You should carry on as you are:
You have one test application that references libraries.
You have one main application that also references those libraries.
Either move code from the main application into the existing libraries, or, preferably, move code into new libraries. Then your test application can access more code without ever referring to the application.
You know you are done when the source for the application consists of one module which defines main(), and everything else is in libraries which are tested by the test application.
My experience with unit testing is usually the opposite. Create a project for your test then import code from your other projects.
You probably can't link to the MFC application because your functions aren't exported. They exist, but unlike with DLLs they have no means of being reached from other applications.
I know of no way to link against an executable file. Refactoring the code by moving the business logic to a DLL and leaving the application as a "front end" would be the most obvious solution. However, since it is legacy code it is probably more appropriate to simply duplicate the code for the purposes of unit testing. This is not ideal, and since it is an MFC application it may not be trivially easy.
To test your main application you can set up a test project which includes the source files you want to test. I'm not sure how easy this is to achieve with VC6 (I don't have it at hand), but in VS2005 and later it is quite straightforward.
So in your solution you end up with a structure like this:
MyLegacySystem.sln
  MyApplication.proj
    Main.cpp
    BusinessRules.cpp
  MyApplicationUnitTests.proj
    UnitTestsMain.cpp
    BusinessRules.cpp
    BusinessRulesTests.cpp
If for whatever reason you cannot include your source files in two projects, you can pull the sources into your test project with some preprocessor magic:
BusinessRulesStub.cpp:
#include "..\src\BusinessRules.cpp"
However, this is essentially a temporary fix. As already suggested, in the end most of the code should be extracted into separate libraries.
If you can't refactor your project to move the business logic into a new static library, try linking your test project against your project's intermediate object files, which you can probably find in BigProject\debug or BigProject\debug\obj. You can't link to the .EXE, as you've discovered.
This achieves the same results as the copy process that Chad suggested while avoiding the actual duplication of source code, which would be a really bad thing.

How do you run your unit tests? Compiler flags? Static libraries?

I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++.
My general approach so far has been to do one of three things:
Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but clutters up the code a bit.
Embed the testing code directly into the application itself and, depending on certain command-line switches, either run the application itself or run the tests embedded in it.
None of these solutions are particularly elegant...
How do you do it?
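For approach 2 above, the flag handling can at least be centralized in the build system rather than scattered through the code. A rough CMake sketch, assuming a hypothetical BUILD_SELF_TEST option and a UNIT_TESTS macro guarding the embedded test code:

option(BUILD_SELF_TEST "Compile the embedded tests into the application" OFF)

add_executable(app main.cpp app.cpp)

if(BUILD_SELF_TEST)
  # Add the test sources and define the macro the code checks for
  target_sources(app PRIVATE embedded_tests.cpp)
  target_compile_definitions(app PRIVATE UNIT_TESTS)
endif()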
Your approach no. 1 is the way I've always done it in C/C++ and Java. Most of the application code is in the static library and I try to keep the amount of extra code needed for the application to a minimum.
The way I approach TDD in Python and other dynamic languages is slightly different in that I leave the source code for the application and tests lying around and a test runner finds the tests and runs them.
I tend to favour static libs over dlls so most of my C++ code ends up in static libs anyway and, as you've found, they're as easy to test as dlls.
For code that builds into an exe, I either have a separate test project which simply includes the source files under test (the ones usually built into the exe), or I build a new static lib containing most of the exe and test it in the same way that I test all of my other static libs. I find that I usually take the 'most code in a library' approach with new projects, and the 'pull the source files from the exe project into the test project' approach when I'm retrofitting tests to existing applications.
I don't like your options 2 and 3 at all. Managing the build configurations for 2 is probably harder than having a separate test project that simply pulls in the sources it needs, and including all of the tests in the exe as you suggest in 3 is just wrong ;)
I use two approaches. For DLLs I just link my unit tests with the DLL - easy. For executables I include the source files that are being tested in both the executable project and the unit test project. This adds slightly to the build time but means I don't need to separate the executable into a static lib and a main function.
I use Boost.Test for unit testing and CMake to generate my project files, and I find this the easiest approach. Also, I am slowly introducing unit testing to a large legacy code base, so I am trying to make as few changes as possible, lest I inconvenience other developers and discourage them from unit testing. I would worry that using a static library just for unit testing might be seen as an excuse not to adopt it.
Having said this, I think the static library approach is a nice one especially if you are starting from scratch.
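The "same source files in both projects" idea maps directly onto CMake: keep the list of files under test in one variable and use it in both targets. A sketch with invented names:

set(TESTED_SOURCES engine.cpp rules.cpp)

# The shipping executable: the tested sources plus its own main()
add_executable(app main.cpp ${TESTED_SOURCES})

# The test runner compiles the very same files a second time
add_executable(unit_tests tests_main.cpp engine_tests.cpp ${TESTED_SOURCES})

Each file is compiled twice, which is the small build-time cost mentioned above.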
For C/C++ apps I try to have as much code as possible in one or more DLLs, with the main application being the bare minimum needed to start up and hand off to the DLLs. DLLs are much easier to test because they can export as many entry points as I like for a test application to use.
I use a separate test application that links to the DLL(s). I'm strongly in favour of keeping test code and "product" code in separate modules.
I go with #1; some reasons are:
It allows you to check that each lib links correctly
You don't want extra code in the product
It's easier to debug individual small test programs
You may need multiple executables for some tests (like communication tests)
For C++ build and test, I like to use CMake, which can run a selection of the target executables as tests and print a summary of the results.
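Concretely, that just means registering each test executable with CTest. A minimal sketch with invented target names:

enable_testing()

add_executable(math_tests math_tests.cpp)
add_test(NAME math_tests COMMAND math_tests)

add_executable(io_tests io_tests.cpp)
add_test(NAME io_tests COMMAND io_tests)

Running ctest in the build directory (or the generated test / RUN_TESTS target) then executes the registered binaries and prints the pass/fail summary.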
Personally, I use another approach that is somewhat similar to yours:
I keep the project under test intact. If it's an executable, it stays an executable. I simply create a post-build action that aggregates all the .obj files into a static library.
Then you can create your test project, linking in the test framework and the previously generated static library.
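With a reasonably recent CMake, an OBJECT library achieves the same thing without a custom post-build step: the executable's object files (minus main) are compiled once and reused by both targets. A sketch, file names invented:

# Compile everything except main() once, as bare object files
add_library(app_objects OBJECT engine.cpp rules.cpp)

# The real executable adds its own main()
add_executable(app main.cpp $<TARGET_OBJECTS:app_objects>)

# The test runner reuses the identical objects, so nothing is compiled twice
add_executable(tests tests_main.cpp $<TARGET_OBJECTS:app_objects>)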
Here are some topics corresponding to your question:
Visual Studio C++: Unit test exe project with google test?
Linker error - linking two "application" type projects in order to use Google Test
I'm using a third-party test runner with its framework, and testing is included in the build script. The tests live outside the production code (in an external DLL).

Cross compiling unit tests with CppUnit or similar

Has anyone used a package like CppUnit to cross-compile C++ unit tests to run on an embedded platform?
I'm using G++ on a Linux box to compile executables that must be run on a LynxOS board. I can't seem to get any of the common unit test packages to configure and build something that will create unit tests.
I see a lot of unit test packages, CppUnit, UnitTest++, GTest, CppUTest, etc., but very little about using these packages in a cross-compiler scenario. The ones with a "configure" script imply that this is possible, but I can't seem to get them to configure and build.
My practice when unit testing code that is cross-compiled is to compile the unit tests themselves using the native toolchain -- usually some flavor of x86 compiler. These unit tests execute on the build machine rather than on the embedded target. If you're writing strict unit tests (as opposed to integration tests) with stubs and mocks, you shouldn't have dependencies on embedded hardware. If not... it's never too late to start.
One added benefit of this approach is that for non-x86 embedded targets, this type of unit testing helps flush out endianness issues, uninitialized variables and other interesting bugs.
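If the project builds with CMake, this split usually comes down to two build directories: a plain host build whose tests run locally, and a cross build configured with a toolchain file. A sketch of such a file, reusing the sh4-linux toolchain named below (adapt the compiler names to your toolchain):

# toolchain-sh4.cmake - describes the cross toolchain to CMake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR sh4)
set(CMAKE_C_COMPILER sh4-linux-gcc)
set(CMAKE_CXX_COMPILER sh4-linux-g++)

# Host build, where the unit tests actually run:
#   cmake -S . -B build-host && cmake --build build-host && (cd build-host && ctest)
# Target build, producing binaries for the board:
#   cmake -S . -B build-sh4 -DCMAKE_TOOLCHAIN_FILE=toolchain-sh4.cmake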
./configure --prefix=/sandBox --build=`config.guess` --host=sh4-linux
Here, sh4-linux is the platform on which you want to run the program.
You might want to look at CxxTest. I have not used it for cross compilation, but it is based entirely on headers and a Python script - no compiled library. It might be easier to adapt than others.
I'm not providing an answer here, but I wouldn't take the advice of NOT running your unit tests on the different targets: you still need to, preferably with both system and unit tests.
Otherwise, simple things like alignment errors on ARM and other embedded CPUs will not get caught.
To cross-compile CppUTest (v3.3), I had to override the LD, CXX and CC make variables.
To get both the CppUTest and CppUTestExt (for CppUMock) libraries and their tests built I used the following commands from the CPPUTEST_HOME directory:
To build libCppUTest.a:
make all LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
To build libCppUTestExt.a (for CppUMock):
make extensions LD=sh4-linux-g++ CXX=sh4-linux-g++ CC=sh4-linux-gcc
You can then copy the CppUTest_tests and CppUTestExt_tests executables that are produced in your CPPUTEST_HOME to your target device and execute them there.
Assuming CppUTest passes its own tests on your target, you're then ready to develop your tests with CppUTest. Just link your test code with the cross-compiled CppUTest libraries and copy the resulting executable to your target. Then run it to get unit test results from the target platform itself.
It sounds like you need the unit test library compiled for your target OS and architecture as well as for your dev/build machine(s). I prefer the Boost.Test framework for this. You can sometimes download prebuilt libraries for your architecture, but you will usually have to compile them yourself. I found a few solutions by googling for how to cross-compile Boost (e.g. http://goodliffe.blogspot.com/2008/05/cross-compiling-boost.html). CppUnit might be easier to cross-compile; I haven't tried. The general principle is the same: you compile the same library version for your development architecture and for your target machine.
My setup for new targets is to compile the necessary Boost libraries for my target OS/arch and then write tests that link against both the Boost libraries and the code under test.
The benefit is that you can link against your x86 Linux Boost libs or against your target's Boost libs, so you can run the tests both on your target and on your dev/build machine(s).
My general setup looks like this:
libs/boost/<arch>/<boost libs>
src/foo.{cpp,h}
tests/test_foo.cpp
build/foo
build/test_foo.<arch>
I put the compiled Boost libs for each architecture I need in the libs/ dir for all my projects and reference those libs in my Makefiles. The sources and the tests get built with an arch variable passed to the make command; that way I can run test_foo.x86 on my dev machine and test_foo.{arm,mips,ppc,etc.} on my targets.

Unit testing build files

What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are simply not an option, as they cost our customers thousands to distribute. Because of this we have very strict code quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that, for example, a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that let you abstract away platform-specific details, making targets very simple (see the sketch after this list).
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
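Our generator is in-house Perl, but as a rough analogy, the "one line per target" idea can be expressed directly in CMake with a small helper function (all names here are invented):

# One line per unit test; the boilerplate lives in one place,
# so there are fewer opportunities for per-target mistakes
function(add_unit_test name)
  add_executable(${name} ${ARGN})
  target_link_libraries(${name} PRIVATE core test_framework)
  add_test(NAME ${name} COMMAND ${name})
endfunction()

add_unit_test(parser_tests parser_tests.cpp)
add_unit_test(io_tests io_tests.cpp io_fakes.cpp)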
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
In my projects, build files don't change very often. Better still, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, there is no need to unit-test the build files. That may be different in other projects.

Adding unit tests to an existing project

My question is quite similar to something asked before, but I need some practical advice.
I have "Working Effectively with Legacy Code" in my hands, and I'm applying advice from the book as I read it to the project I'm working on. The project is a C++ application that consists of a few libraries, but the major portion of the code is compiled into a single executable. I'm using googletest for adding unit tests to existing code whenever I have to touch something.
My problem is how to set up my build process so I can build my unit tests, since the two different executables need to share code and I am not able to extract that code from my "under test" application into a library. Right now the build process for the application that holds the unit tests links against the object files generated by the main application's build, but I really dislike it. Are there any suggestions?
Working Effectively With Legacy Code is the best resource for how to start testing old code. There are really no short term solutions that won't result in things getting worse.
I'll sketch out a makefile structure you can use:
all: tests executables
run-tests: tests
	<commands to run the test suite>
executables: <file list>
	<commands to build the files>
tests: unit-test1 unit-test2 etc
unit-test1: <files that are required for your unit-test1>
	<commands to build unit-test1>
That is roughly what I do, as the sole developer on my project.
If your test app only links the object files it needs to test, then you are effectively already treating them as a library; it should be possible to group those object files into a separate library shared by the main app and the test app. If you can't, then I don't see what you are doing as too bad an alternative.
If you are having to link other object files that are not under test, that is a sign of dependencies that need to be broken - for which you have the perfect book.
We have similar problems and use a system like the one suggested by Vlion.
I personally would continue doing what you are doing, or consider a build script that makes the target application and the unit tests at the same time (two resulting binaries off the same codebase). Yes, it smells fishy, but it is very practical.
Kudos to you and good luck with your testing.
I prefer one test executable per test. This enables link-time seams and also helps with TDD, since you can work on one unit without worrying about the rest of your code.
I make the libraries depend on all of the tests. Hopefully this means your tests are only run when the code actually changes.
If you do get a failure, the tests will interrupt the build process at the right place.
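A sketch of that layout in CMake terms, with one executable per test and the link seam expressed by giving each test runner its own stub translation unit (all names invented):

add_library(engine STATIC engine.cpp)

# Each test is its own link, so each can substitute a different
# stub for the same dependency - a link-time seam
add_executable(engine_test_fast engine_test_fast.cpp clock_stub.cpp)
target_link_libraries(engine_test_fast engine)

add_executable(engine_test_io engine_test_io.cpp filesystem_stub.cpp)
target_link_libraries(engine_test_io engine)

Each executable can then be registered with CTest as usual, so a failing test stops the build at the offending unit.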