Is there a way to run C++ unit tests in parallel?

I've been using Boost.Test for a long time now, and my tests have ended up running too slowly. Since the tests are independent of one another, I want to run them concurrently across all my cores.
Is there a way to do that using the Boost Test library? I haven't found any solution. I tried to look at how to write a custom test runner, but I didn't find much documentation on that point :(
If there is no way, does someone know a good C++ test framework that could achieve that goal? I was thinking Google Test would do the job, but apparently it cannot run tests in parallel either. Even if the framework has fewer features than better-known frameworks, that is not a problem; I just need simple assertions and multi-threaded execution.
Thanks

You could use CTest for this.
CTest is the test driver which accompanies CMake (the build system generator), so you'd need to use CMake to create the build system using your existing files and tests, and in doing so you would then be able to use CTest to run the test executables.
I haven't personally used Boost.Test with CMake (we use GoogleTest), but this question goes into a little more detail on the process.
Once you have the tests added in your CMakeLists file, you can make use of CTest's -j argument to specify how many jobs to run in parallel.
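For example, a minimal sketch (target and file names are placeholders, not from the question):

# CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(my_project CXX)
enable_testing()
add_executable(my_tests test_main.cpp)
# each add_test entry is a unit that CTest can schedule in parallel
add_test(NAME my_tests COMMAND my_tests)

# shell: configure, build, then run up to 8 tests at once
mkdir build && cd build
cmake .. && make
ctest -j8 --output-on-failure

Note that CTest parallelizes across add_test entries, so to get parallelism within a single Boost.Test binary you could register several add_test entries, each invoking the binary with a different --run_test filter.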

What Google is hinting at in the gtest documentation is test sharding - letting multiple machines run the tests by just using command line parameters and environment variables. You could also run them all on one machine in separate processes, setting the GTEST_SHARD_INDEX and GTEST_TOTAL_SHARDS environment variables appropriately.
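A minimal single-machine sketch (the binary name is a placeholder):

# split the test cases across 4 concurrent processes
export GTEST_TOTAL_SHARDS=4
for i in 0 1 2 3; do
  GTEST_SHARD_INDEX=$i ./my_gtest_binary &
done
wait   # wait for all shards to finish

Each process then runs only its own slice of the test cases.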
In principle, nothing prevents you from starting multiple processes of the test executable, each with a different filtering parameter (Boost.Test, gtest).
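For example (executable and suite names are placeholders):

# Boost.Test: one suite per process
./boost_tests --run_test=SuiteA &
./boost_tests --run_test=SuiteB &
# Google Test: one filter per process
./gtest_tests --gtest_filter='FooTest.*' &
./gtest_tests --gtest_filter='BarTest.*' &
wait   # let all four processes finish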
Update 2014: https://github.com/google/gtest-parallel

Split the suite into multiple smaller sets, each launched by an individual binary, and add a .PHONY target test to your build system that depends on all of them; a sketch follows. Run it as (assuming you are using make) make -jN test
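A minimal Makefile sketch of that idea (test binary names are placeholders; recipe lines must be indented with a tab):

.PHONY: test core-tests io-tests net-tests
# 'make -j3 test' runs the three sets concurrently
test: core-tests io-tests net-tests
core-tests:
	./run_core_tests
io-tests:
	./run_io_tests
net-tests:
	./run_net_tests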

Given that the third bullet point on the open issues list is currently thread safety, I don't believe there's a way to tell Boost test to run tests in multiple threads.
Instead, you will need to find an external test runner that supports running tests in parallel (I would expect this to work by forking off new processes).

Related

Using googletest to run unit-tests for multiple tested modules at once

I'm working on a large C++ project which contains more than 50 libraries and executables. I am starting to add googletest tests for each of these modules. I read that Google recommends putting the tests in executables rather than in libraries to make life easier. By creating a separate executable for each separate component I would get more than 50 test executables, and in order to run them all at once I would need to create an external script which would also need to combine their output into a single one.
Is that the recommended thing to do?
Or should I create a library of tests for each separate module and link all these libs into a single test executable? But then running the tests for a single module becomes less convenient: I would need to build all the tests and tell the main test executable, through the gtest_filter flag, which tests should be executed this time.
It would really help me to hear how other people do this and what is the best practice here.
Thank you
[...] and in order to run them all at once I would need to create an external script which would also need to combine their output to a single one.
Maybe it's not actually necessary to combine the output into a single file. For example with Jenkins you can specify a wildcard pattern for the Google Test output files.
So if you actually just want to see the Google Test results in Jenkins (or Hudson or whatever CI tool you use), this might be a possible solution:
You would run all of your test executables from a simple script (or even from a Make rule) with the parameter --gtest_output=xml: followed by a directory name (i.e. ending with a slash). Every test executable will then write its own XML file into that directory, and you can then configure your CI tool to read all files from that directory.
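A minimal sketch of such a script (the paths are placeholders):

#!/bin/sh
# each binary writes an XML report named after it into test-results/
mkdir -p test-results
status=0
for t in bin/tests/*; do
  "$t" --gtest_output=xml:test-results/ || status=1
done
exit $status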

Build test code as a library or as an executable?

In unit testing and test-driven development, why is it better to build test code as a library rather than as an executable for testing a C++ program? I have heard arguments for both.
You can build a separate executable for your test code and run it as a post-build event of your main application. This way, if the tests fail, the build fails. Most C++ IDEs (e.g. Visual Studio, Eclipse, QtCreator) support this.
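With CMake, for instance, that hook could look like this (the target names are placeholders):

# run the test binary after every build of the main app;
# a non-zero exit code from the tests fails the build
add_custom_command(TARGET my_app POST_BUILD
                   COMMAND my_tests
                   COMMENT "Running unit tests")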
The arguments for library vs executable depend on how you want your developers to use the tests.
If you want to have the tests integrated into the build process, you probably want a command-line executable. If you want to have the tests runnable from some kind of standalone GUI app, you might want a window based executable. If you want the tests to be run by a metrics gathering server, they may need to be hosted in a service.
If you want more than one of these methods, you could choose to compile the tests into a library, then link them into each of the executable frameworks. But if you only ever have need of command-line execution, then there'd be no need for the GUI or service options, and no advantage gained from building a separate static library.
Neither approach is "better". Choose the approach you need based on your team's particular situation, and your team's standards. It's also probably not that important right now. If you start with just an executable test harness, you can always split the tests into a static library later.
It's far more important that you start writing and running automated testing now than to pause and quibble about the test implementation details.

Running Nested Tests with CTest

I have a small but non-trivial project which, for architectural reasons, is built as three separate projects. They are inter-dependent, so unless I'm particularly focused, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as such:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all of that works as expected).
The libmyotherproject is some platform specific support code, not directly connected to libmyproject, it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build and run cmake, the individual components are built correctly (via add_subdirectory()) from CMake, but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful in what you should do:
USAGE
  ctest [options]
DESCRIPTION
  The "ctest" executable is the CMake test driver program. CMake-generated build trees created for projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will run the tests and report results.
I would have expected that, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, they would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.
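A minimal sketch of the top-level file under that advice (directory names taken from the question's layout):

# /CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(myproject CXX)
include(CTest)   # calls enable_testing() and sets up dashboard targets
add_subdirectory(src/libmyproject)
add_subdirectory(src/libmyotherproject)
add_subdirectory(src/command-line-application)

With that in place, running ctest from /build will pick up the add_test calls made in the nested CMakeLists.txt files.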

run tests automatically after sources change

I use Cxxtest for unit-testing my C++ code. I would like tests to be run (simply, by make tests) each time I change and save my code. I know that for Python there is Nosy, which enables that. Is there some generic program that would allow this for Cxxtest or any other unit-testing framework?
Simply put, I just need to run a single command on file change. It wouldn't be difficult to write a script like that, but maybe there is already some tool : )
On Linux you can use a filesystem monitoring daemon like incron to run a command (e.g. make tests) every time a file is changed in a directory (a so-called IN_CLOSE_WRITE event).
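A minimal incrontab entry of that kind (the paths are placeholders):

# incrontab -e
# <watched path>        <event mask>    <command>
/home/me/project/src    IN_CLOSE_WRITE  make -C /home/me/project tests

incron will then invoke make tests whenever a file in the watched directory is written and closed.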
You could use TeamCity for this. It can monitor your code repository and run automated builds plus unit tests when changes are detected. It includes a decent web-style interface and emailing capability to notify you of build/test failures.
It can also be configured for both Windows and Linux builds.
http://www.jetbrains.com/teamcity/
If that's a bit heavyweight for you, then you should be able to configure your build process to run the tests for you (e.g. edit your makefile on Linux), but obviously this would still mean manually kicking off a build when you make changes (which I guess you'd probably do anyway).
Annoyed that there was no ready and simple solution, I just created a simple tool, changerun, to run commands when files change. I hope someone will find it useful : )

Adding unit tests to an existing project

My question is quite similar to something asked before, but I need some practical advice.
I have "Working Effectively with Legacy Code" in my hands and I'm applying advice from the book, as I read it, to the project I'm working on. The project is a C++ application that consists of a few libraries, but the major portion of the code is compiled to a single executable. I'm using googletest to add unit tests to existing code when I have to touch something.
My problem is how to set up my build process so I can build my unit tests, since two different executables need to share code while I am not able to extract that code from my "under test" application into a library. Right now the build process for the application that holds the unit tests links against the object files generated by the build of the main application, but I really dislike that. Are there any suggestions?
Working Effectively With Legacy Code is the best resource for how to start testing old code. There are really no short term solutions that won't result in things getting worse.
I'll sketch out a makefile structure you can use:

all: tests executables

run-tests: tests
	<commands to run the test suite>

executables: <file list>
	<commands to build the files>

tests: unit-test1 unit-test2 etc

unit-test1: <files that are required for unit-test1>
	<commands to build unit-test1>
That is roughly what I do, as the sole developer on my project.
If your test app is only linking the object files it needs to test, then you are effectively already treating them as a library; it should be possible to group those object files into a separate library shared by the main app and the test app. If you can't, then I don't see what you are doing as too bad an alternative.
If you are having to link other object files not under test then that is a sign of dependencies that need to be broken, for which you have the perfect book.
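A minimal Makefile sketch of that grouping (file names are placeholders; recipe lines must be tab-indented):

# build the shared code once as a static library...
CORE_OBJS = foo.o bar.o baz.o
libcore.a: $(CORE_OBJS)
	ar rcs $@ $^

# ...then link it into both the main app and the test app
app: main.o libcore.a
	$(CXX) -o $@ main.o libcore.a
tests: test_main.o libcore.a
	$(CXX) -o $@ test_main.o libcore.a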
We have similar problems and use a system like the one suggested by Vlion
I personally would continue doing what you are doing, or consider having a build script that makes the target application and the unit tests at the same time (two resulting binaries off the same codebase). Yes, it smells fishy, but it is very practical.
Kudos to you and good luck with your testing.
I prefer one test executable per test. This enables link-time seams and also helps allow TDD as you can work on one unit and not worry about the rest of your code.
I make the libraries depend on all of the tests. Hopefully this means your tests are only run when the code actually changes.
If you do get a failure the tests will interrupt the build process at the right place.
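One way to express that wiring in make (names are placeholders): downstream targets depend on a stamp file that is only touched when the unit test passes, so tests re-run only when the code changes, and a failure stops the build at the right place.

unit-test1: test1.o libfoo.a
	$(CXX) -o $@ test1.o libfoo.a

# touch the stamp only if the test passes
test1.passed: unit-test1
	./unit-test1 && touch $@

# the final app won't link until the test has passed
app: main.o libfoo.a test1.passed
	$(CXX) -o $@ main.o libfoo.a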