I have a small but non-trivial project which, for architectural reasons, is built as three separate, inter-dependent projects. Unless I'm particularly focused on one component, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as follows:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all of that works as expected).
The libmyotherproject is some platform-specific support code, not directly connected to libmyproject; it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build and run cmake, the individual components are built correctly (via add_subdirectory() calls), but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful about what you should do:
USAGE
ctest [options]
DESCRIPTION
The "ctest" executable is the CMake test driver program. CMake-generated build trees created for
projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will
run the tests and report results.
I would have expected that, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, those tests would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.
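For reference, a minimal top-level CMakeLists.txt for a layout like the one in the question might look like this (the project name and minimum CMake version are placeholders; the subdirectory names are taken from the question):

cmake_minimum_required(VERSION 3.5)   # adjust to whatever you actually require
project(myproject)

# include(CTest) calls enable_testing() for you; it has to come before the
# add_subdirectory() calls so that the add_test() calls inside the
# subprojects are registered with the top-level test driver.
include(CTest)

add_subdirectory(src/libmyproject)
add_subdirectory(src/libmyotherproject)
add_subdirectory(src/command-line-application)
add_subdirectory(src/vlc-plugin)

After a fresh cmake run in /build, ctest run from /build should then pick up the tests from the subprojects.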
Related
We have written a number of additional functions for CMake. They have grown into quite a collection, and we need to (unit) test them.
There are simple ones that are only variable-based, like:
function(join_list LIST GLUE)
These can be tested with a custom CMake script that checks the results.
For this we also wrote a set of assert macros.
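For the variable-based case, such a test can be a plain script run with cmake -P. A minimal sketch, assuming join_list returns its result in a variable named after the list (the module name, the assert macro, and that output convention are invented here; adapt them to your own code):

# test_join_list.cmake -- run with: cmake -P test_join_list.cmake
include(${CMAKE_CURRENT_LIST_DIR}/JoinList.cmake)   # hypothetical module under test

macro(assert_streq ACTUAL EXPECTED)
  if(NOT "${ACTUAL}" STREQUAL "${EXPECTED}")
    message(FATAL_ERROR "Assertion failed: '${ACTUAL}' != '${EXPECTED}'")
  endif()
endmacro()

set(my_list a b c)
join_list(my_list ", ")                 # assumed to set my_list_JOINED
assert_streq("${my_list_JOINED}" "a, b, c")

Such a script can then be registered with add_test(NAME join_list COMMAND ${CMAKE_COMMAND} -P <path>/test_join_list.cmake) so it runs under CTest.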
This becomes way harder when the functions are target-based:
function(target_my_custom_property_set TARGET VALUE)
We need multiple CMakeLists.txt files that have to be configured. Configuration must succeed or fail with specified messages, and the result files must also be checked.
I wonder, is there an easier way? Is there an existing framework? How does Kitware test the shipped modules?
ctest is the framework for running all sorts of tests. There are many tests for cmake that get run as part of the CMake Testing Process. These tests live in the Tests folder of the source tree and are wired into its CMakeLists.txt.
The specific tests you want to look at are located in the RunCMake folder. These tests utilize RunCMake.cmake. A good example is the set of tests for message. What these tests do is use execute_process to capture the output from cmake and compare the output of the cmake configure step to the contents of a file with the expected output. The return value from cmake is also captured and can be tested.
You don't specify what the "result files" are. There are more complicated examples that perform a configure and build and then scan some files to verify their contents.
It may be easier if you separate out checking messages from a failed configure vs. a failed build vs. a passing configure and build with a specific output message.
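A stripped-down sketch of that execute_process pattern, outside CMake's own RunCMake machinery (the directory names and the expected message are made up for the example):

# check_case1.cmake -- run in script mode: cmake -P check_case1.cmake
set(case_dir  ${CMAKE_CURRENT_LIST_DIR}/case1)        # a tiny test project
set(build_dir ${CMAKE_CURRENT_LIST_DIR}/case1-build)
file(MAKE_DIRECTORY ${build_dir})

execute_process(
  COMMAND ${CMAKE_COMMAND} ${case_dir}
  WORKING_DIRECTORY ${build_dir}
  RESULT_VARIABLE result
  OUTPUT_VARIABLE output
  ERROR_VARIABLE output          # same variable: stdout and stderr are merged
)

# This case is expected to fail at configure time with a known message.
if(result EQUAL 0)
  message(FATAL_ERROR "case1 configured successfully, but a failure was expected")
endif()
string(FIND "${output}" "my expected error message" found)
if(found EQUAL -1)
  message(FATAL_ERROR "Expected message not found in output:\n${output}")
endif()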
I am building an application with CMake which produces libraries and executables, in text mode and GUI mode (Qt5), and of course unit tests.
My experience is that if I modify anything but the test sources and then want to run the tests, CMake first builds the new executable(s). If I modify any of the test sources, however, the old test executable is run immediately, so I need to compile the new tester explicitly before running it. The tests are in a separate subdirectory, the structure is similar to that of the other components, and the sources are defined by a
set(MY_SRCS list of individual sources)
Any idea what could cause that difference? (Although it is a nuance.)
The make test target generated by CMake (for CTest) only executes the tests you added using add_test(); it does not build them. And since it does not build them, it also does not check for changes in the source files.
You can solve this issue by adding a custom target (e.g. make check) that first builds your tests and then executes them: CMake & CTest : make test doesn't build tests.
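A minimal sketch of that check target (my_tests stands in for your test executable target):

# Builds the test executable(s) first, then runs CTest.
add_custom_target(check
  COMMAND ${CMAKE_CTEST_COMMAND} --output-on-failure
  WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
)
add_dependencies(check my_tests)   # ensure the tester is rebuilt before running

With this, make check recompiles an out-of-date tester before CTest runs it, which avoids the stale-executable problem described in the question.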
Not sure if this answers the question, since you do not specify how you create and execute your unit tests.
As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking about Autotest à la Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests any time a .c/.cpp/.h/.hpp file in your source tree changed, you could do something like this:
watch('src/.*\.(h|hpp|c|cpp)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there, so theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because there are a lot of artifacts generated in the source tree at build time and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree; see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you are writing test classes in managed C++/CLI which is calling your C++ code, which presumably runs as unmanaged. So for this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem with this is attaching a debugger to NUnit (of course with both managed/native enabled) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
Just saw @Patrick's answer about CppUnit, I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .NET code (NUnit).
Boost has a test library (Boost.Test) you can have a look at, among many others around.
Last time when I did some work in Qt, I've used Qt's QTestLib for unit tests. It did work well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that, for example, a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple. (A CMake-flavoured sketch of this one-line-per-target idea appears after these points.)
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
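To illustrate the one-line-per-target idea from the Makefile-generation point above in CMake terms (this is not the Perl-based tool described; the helper name add_unit_test and the test_framework link target are invented for the sketch):

# A tiny wrapper: one high-level call expands into the lower-level boilerplate.
function(add_unit_test NAME)            # ARGN: the test's source files
  add_executable(${NAME} ${ARGN})
  target_link_libraries(${NAME} PRIVATE test_framework)   # placeholder library
  add_test(NAME ${NAME} COMMAND ${NAME})
endfunction()

# A project's build file then declares a test target in one line:
add_unit_test(parser_test parser_test.cpp)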
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, it is unnecessary to unit-test the build files. That may be different in other projects.
My question is quite similar to a question asked before, but I need some practical advice.
I have "Working Effectively with Legacy Code" in my hands, and I'm applying advice from the book as I read it in the project I'm working on. The project is a C++ application that consists of a few libraries, but the major portion of the code is compiled to a single executable. I'm using googletest for adding unit tests to existing code when I have to touch something.
My problem is how to set up my build process so I can build my unit tests, since there are two different executables that need to share code, while I am not able to extract the code from my "under test" application into a library. Right now the build process for the application that holds the unit tests links against the object files generated by the main application's build, but I really dislike it. Are there any suggestions?
Working Effectively With Legacy Code is the best resource for how to start testing old code. There are really no short term solutions that won't result in things getting worse.
I'll sketch out a makefile structure you can use:
all: tests executables

run-tests: tests
	<commands to run the test suite>

executables: <file list>
	<commands to build the files>

tests: unit-test1 unit-test2 etc

unit-test1: <files that are required for your unit-test1>
	<commands to build unit-test1>

That is roughly what I do, as a sole developer on my project.
If your test app is only linking the object files it needs to test, then you are effectively already treating them as a library; it should be possible to group those object files into a separate library used by both the main app and the test app. If you can't, then I don't see that what you are doing is too bad an alternative.
If you are having to link other object files not under test then that is a sign of dependencies that need to be broken, for which you have the perfect book.
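If the build were expressed in CMake (the question doesn't say which build system is in use), that grouping might look like the sketch below, with app_core standing in for the shared code, the source lists left as placeholders, and the googletest libraries assumed to be available under the link names gtest and gtest_main:

# The code shared by the real application and the tests becomes one library ...
add_library(app_core STATIC ${APP_CORE_SOURCES})

# ... linked by the production executable ...
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE app_core)

# ... and by the googletest-based test executable.
add_executable(unit_tests ${TEST_SOURCES})
target_link_libraries(unit_tests PRIVATE app_core gtest gtest_main)
add_test(NAME unit_tests COMMAND unit_tests)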
We have similar problems and use a system like the one suggested by Vlion.
I personally would continue doing as you are doing, or consider having a build script that makes the target application and the unit tests at the same time (two resulting binaries off the same codebase). Yes, it smells fishy, but it is very practical.
Kudos to you and good luck with your testing.
I prefer one test executable per test. This enables link-time seams and also facilitates TDD, as you can work on one unit without worrying about the rest of your code.
I make the libraries depend on all of the tests. Hopefully this means your tests are only run when the code actually changes.
If you do get a failure the tests will interrupt the build process at the right place.
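One way to express this in a CMake-based build (the answer doesn't name a build system; the target names here are placeholders) is to run each test executable as a post-build step, so it re-runs only when its code changed and a failure stops the build at that point:

add_executable(foo_test foo_test.cpp)
target_link_libraries(foo_test PRIVATE foo)
add_test(NAME foo_test COMMAND foo_test)

# Run the test right after it is (re)built; if it fails, the build fails here.
add_custom_command(TARGET foo_test POST_BUILD
  COMMAND foo_test
  COMMENT "Running foo_test"
)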