Run tests automatically after sources change - C++

I use CxxTest for unit-testing my C++ code. I would like tests to be run each time I change and save my code (simply, by make tests). I know that for Python there is Nosy, which does this. Is there some generic program that would allow this for CxxTest or any other unit testing framework?
Put simply, I just need to run a single command on file changes. It wouldn't be difficult to write a script like that, but maybe there is some tool already : )

On Linux you can use a filesystem monitoring daemon like incron to run a command (e.g. make tests) every time a file is changed in a directory (a so-called IN_CLOSE_WRITE event).
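A minimal sketch of an incrontab entry (the paths are hypothetical placeholders for your own project; note that incron watches only the named directory, not subdirectories recursively):
/home/me/myproject/src IN_CLOSE_WRITE make -C /home/me/myproject tests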

You could use TeamCity for this. It can monitor your code repository and run automated builds plus unit tests when changes are detected. It includes a decent web interface and emailing capability to notify you of build/test failures.
It can also be configured for both Windows and Linux builds.
http://www.jetbrains.com/teamcity/
If that's a bit heavyweight for you, then you should be able to configure your build process to run the tests for you (e.g. edit your makefile on Linux), but obviously this would still mean you manually kick off a build when you make changes (which I guess you'd probably do anyway).

Annoyed that there was no ready and simple solution, I just created a simple tool, changerun, to run commands when files change. I hope someone will find it useful : )

Related

Is there a way to run C++ unit tests in parallel?

I've been using Boost Test for a long time now, and I've ended up with my tests running too slowly. As each test is highly parallelizable, I want them to run concurrently on all my cores.
Is there a way to do that using the Boost Test library? I didn't find any solution. I tried to look at how to write a custom test runner, but I didn't find much documentation on that point :(
If there is no way, does someone know a good C++ test framework to achieve that goal? I was thinking Google Test would do the job, but apparently it cannot run tests in parallel either. Even if the framework has fewer features than other better-known frameworks, that is not a problem; I just need simple assertions and multi-threaded execution.
Thanks
You could use CTest for this.
CTest is the test driver which accompanies CMake (the build system generator), so you'd need to use CMake to create the build system using your existing files and tests, and in doing so you would then be able to use CTest to run the test executables.
I haven't personally used Boost.Test with CMake (we use GoogleTest), but this question goes into a little more detail on the process.
Once you have the tests added in your CMakeLists file, you can make use of CTest's -j argument to specify how many jobs to run in parallel.
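A rough sketch of what that could look like (target and test names are made up for illustration): register each test binary in your CMakeLists.txt,
enable_testing()
add_executable(test_foo tests/test_foo.cpp)
add_executable(test_bar tests/test_bar.cpp)
add_test(NAME foo COMMAND test_foo)
add_test(NAME bar COMMAND test_bar)
and then run them concurrently from the build directory:
ctest -j 8 --output-on-failure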
What Google is hinting at in the gtest documentation is test sharding - letting multiple machines run the tests by just using command line parameters and environment variables. You could run them all on one machine in separate processes, where you set the GTEST_SHARD_INDEX and GTEST_TOTAL_SHARDS environment variables appropriately.
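A minimal sketch from a shell, assuming the test binary is called ./my_tests (the name is just an example):
# run two shards of the same gtest binary in parallel, then wait for both
GTEST_TOTAL_SHARDS=2 GTEST_SHARD_INDEX=0 ./my_tests &
GTEST_TOTAL_SHARDS=2 GTEST_SHARD_INDEX=1 ./my_tests &
wait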
In principle, nothing is preventing you from starting multiple processes of the test executable with a different filtering parameter (Boost.Test, gtest).
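For instance (the binary and suite names are illustrative only):
# each process runs a different subset of the same Boost.Test binary
./my_boost_tests --run_test=suite_one &
./my_boost_tests --run_test=suite_two &
wait
The gtest equivalent would use --gtest_filter instead of --run_test.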
Update 2014: https://github.com/google/gtest-parallel
Split the suite into multiple smaller sets, each launched by an individual binary, and add a .PHONY target test to your build system that depends on all of them. Run it as (assuming you are using make) make -jN test.
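A sketch of such a makefile (the suite binary names are placeholders):
# GNU make: "make -jN test" runs the suites in parallel
TEST_SUITES := suite1 suite2 suite3

.PHONY: test $(TEST_SUITES)
test: $(TEST_SUITES)

$(TEST_SUITES):
	./$@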
Given that the third bullet point on the open issues list is currently thread safety, I don't believe there's a way to tell Boost test to run tests in multiple threads.
Instead, you will need to find an external test runner that supports running tests in parallel (I would expect this to work by forking off new processes).

Building on another machine - Workflow

I'm used to building my source in an IDE and having good feedback in the environment. Currently, however, I'm writing source code in Notepad++, FTPing it to another machine with specific environment settings, and then building it there and reading the make output to see that it all checks out. After that, I scp the built executable to the actual device to test it.
I'm curious whether there are environments that can simplify this. I suppose I could write a script that FTPs changed files and then runs a command through ssh to build them. But I'd like an environment that will parse the makefile output and give me a build report like most IDEs do. I'm not sure how specific this problem is, or if a lot of embedded systems have similar setups.
Ideally I suppose I would have a machine with the correct build environment, but that isn't the case :/
I tend to put the file transfer, remote make invocation and whatever else is necessary into some script (having a one-click build is important anyway) and then set that as the build command in my editor. I happen to use Sublime Text 2, which works fine with the error messages I get from building C++ code via make; personally, I don't find editors not supporting this kind of workflow worth using. There are lots of editors which do.
Oh, and I'd try replacing the ftp with rsync over ssh. It's probably faster, definitely easier to automate, and safer.
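A minimal sketch of such a one-click script (host names and paths are placeholders):
#!/bin/sh
# push sources to the build machine, build there, pull the binary back
rsync -az --delete ./src/ builduser@buildhost:project/src/
ssh builduser@buildhost 'cd project && make' || exit 1
scp builduser@buildhost:project/bin/myapp ./myapp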

Running Nested Tests with CTest

I have a small but non-trivial project which, for architectural reasons, is built as three separate projects. They are inter-dependent, so unless I'm particularly focused, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as such:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all of that works as expected).
The libmyotherproject is some platform specific support code, not directly connected to libmyproject, it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build, and run cmake, the individual components are built correctly (using add_subdirectory()) from CMake, but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful in what you should do:
USAGE
ctest [options]
DESCRIPTION
The "ctest" executable is the CMake test driver program. CMake-generated build trees created for
projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will
run the tests and report results.
I would have expected that, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, they would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.
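Concretely, the top of the root CMakeLists.txt would look roughly like this (directory names taken from the layout above):
cmake_minimum_required(VERSION 2.8)
project(myproject)

include(CTest)   # calls enable_testing() and adds the dashboard targets

add_subdirectory(src/libmyproject)
add_subdirectory(src/libmyotherproject)
add_subdirectory(src/command-line-application)
With that in place, running ctest from /build should pick up the tests registered in the subdirectories.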

How to unit test WIX merge modules?

I am building merge modules with WiX. The batch files which call the WiX tools to generate the merge modules from *.wxs files are run by my daily build.
I am trying to figure out how I could automate the testing of these merge modules. Things I would like to test are whether the merge module installs the required files, whether the versions of the files are correct, etc.
One idea I have is to write a script (may be VB Script) to install the merge module at a temporary location and check if it has installed everything correctly. However, I am not sure if this is a correct way to do it.
Are there any standard ways of writing unit tests for merge modules? Any ideas around how to go about this are welcome.
When you test an installer, the primary goals are to verify that:
1. When installing the msi file, msiexec reports success (i.e. return code 0).
2. After running the installer, your application can be started and works as expected.
The first point should be easy enough to do, though if you want to keep the test automated you can only test the non-interactive install. Here is an example how to do that from a batch file or on the command line:
msiexec /i myinstaller.msi /passive || echo ERROR: non-zero return code!
The second point is a bit more complicated. I think the best way to achieve this is to build some integration tests into your application, and invoke those tests after the install:
"c:\program files\mystuff\app.exe" /selftest || echo ERROR: non-zero return code!
In your case you're testing a merge module rather than an entire installer. The same approach can be used. You just will have to do the additional work of building a "self test" application and its installer which consumes your merge module.
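A rough sketch of the fragment such a test installer's .wxs might contain to pull in the merge module (the IDs, names, and .msm file are placeholders, not complete authoring):
<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder">
    <Directory Id="INSTALLDIR" Name="MergeModuleTest">
      <Merge Id="MyModule" SourceFile="MyModule.msm" Language="1033" DiskId="1" />
    </Directory>
  </Directory>
</Directory>
<Feature Id="MainFeature" Level="1">
  <MergeRef Id="MyModule" />
</Feature>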
I've often thought about this but haven't come up with anything that I like. First, I would call this integration testing, not strictly unit testing. Second, the problem of "right files" and "right versions" is difficult to define.
I'm often tempted to say WiX/MSI is just data that defines what the installer is to do. It's declarative in nature and therefore, by definition, correct. It's tempting to want to create yet another set of data that cross-checks the implementation of the installer, but what exactly does that accomplish that the first data set didn't already represent? It's hard enough sometimes to maintain what files go into an application, let alone to maintain a second list of files.
I continue to think about this and wonder if there's an approach that would make sense but at this point I just do my normal MSI validation.
You could try to use scripts or another small console program that will do the job, just as you suggested.
With your build process you could also build a basic setup that just uses the merge module. Your script could just install this, run the other script or console app that will check if all the files are in place, that they have the correct version, that all the registry keys are installed, etc. After all the output is gathered your main script would just uninstall everything. You could also run the check program after uninstalling to be sure that everything is gone and that the uninstall works correctly. I would recommend this if, for example, you have custom actions set for install and uninstall.
Ideally this whole install / uninstall process should be done on a separate machine, or a virtual one, in order to avoid messing up the build server.
You'll have some work to do with all this scripts but once you have it, you'll be able to use it with little changes for any other future merge module projects or just plain setup projects.
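A bare-bones sketch of such a driver script (the .msi name and the verification program are placeholders for your own):
rem install the test setup that consumes the merge module, verify, then uninstall and verify cleanup
msiexec /i MergeModuleTest.msi /passive /l*v install.log || echo ERROR: install failed
CheckInstall.exe || echo ERROR: file/version/registry check failed
msiexec /x MergeModuleTest.msi /passive || echo ERROR: uninstall failed
CheckInstall.exe /expect-removed || echo ERROR: leftovers found after uninstall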
Hope this helps,
Adrian.

Have you done C++ autotest, and how do you do it?

As far as autotest is concerned, how do you do autotest for C++ programs? Are there any autotest frameworks that can be used for unit tests and integration tests?
Are you talking about Autotest à la Ruby Autotest? If so, maybe Watchr would work for you. Yes, you would need to install the Ruby runtime on your development machine, but it looks like it can trigger pretty much anything that can be done on the command line when the file system changes. For example, if you wanted Watchr to build and run your C++ tests anytime a .c/.cpp/.h/.hpp file in your source tree changed, you could do something like this:
watch('src/(.*)\.(h|cpp|hpp|c)$') {system "build/buildAndRunTests.bat"}
This particular command obviously makes some assumptions about how your build process is set up (and obviously that you're on Windows), but that should be the gist of it. Our team configures our unit test projects with a post-build event that automatically runs the built unit test binary, so we can just trigger that part of our build process within the buildAndRunTests.bat script and have it print the results to the command-line. It might take some tweaking but it looks like Watchr may be a good choice. I'll update this response when I give it a shot (hopefully early next week).
UPDATE: I just tried this with one of my C# projects and got it working there. So theoretically it should work with C++ projects as well.
autotest.watchr:
watch('./.*/.*\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}
Note the $ at the end of the regular expression. This is important because there are a lot of artifacts generated in the source tree at build time and if any of them match the string .cs it will trigger another run, effectively causing an infinite loop. Conceivably the same thing will happen if you generate/modify any source files at build time so you may have to find a way to compensate.
buildAndRunTests.bat:
pushd ..\
rem Build test project
"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
popd
rem Navigate to the directory containing the built files
pushd ..\Tests.Unit\bin\Release
rem Run the tests through nunit-console
..\..\..\Dependencies\NUnit-2.5.5-bin\net-2.0\nunit-console.exe Tests.Unit.dll /run=Tests.Unit
popd
Then, in a separate console window, just navigate to your project directory and run the following command (this assumes autotest.watchr is at the top of your project tree, see below):
watchr autotest.watchr
Now, when any .cs files change in the source tree it will run the buildAndRunTests.bat script automatically. This is just an example from my local machine so it likely won't work verbatim on yours, but you should be able to tweak it to your needs.
This is the directory structure for reference:
/Project
/build
buildAndRunTests.bat
/Tests.Unit
/Dependencies
/NUnit-2.5.5-bin
/net-2.0
nunit-console.exe
autotest.watchr
I hope this helps.
You can use NUnit to achieve this, but there may be better ways. With NUnit you are writing test classes in managed C++/CLI which is calling your C++ code, which presumably runs as unmanaged. So for this option, some of your C++ code now runs as managed just for the sake of using NUnit. One may debate the "purity" of this approach. Another problem with this is attaching a debugger to NUnit (of course with both managed/native enabled) and trying to step through the managed C++/CLI bits in a sensible manner. Despite this, our office has been using NUnit for C++ unit and integration testing for a while now.
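A minimal sketch of what such a managed test class can look like (the header and native function are hypothetical placeholders):
// C++/CLI test fixture calling into native (unmanaged) code
#using <nunit.framework.dll>
#include "mynativelib.h"   // hypothetical native header under test

using namespace NUnit::Framework;

[TestFixture]
public ref class NativeMathTests
{
public:
    [Test]
    void AddsTwoNumbers()
    {
        // native_add is assumed to be a plain unmanaged function
        Assert::AreEqual(4, native_add(2, 2));
    }
};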
Just saw @Patrick's answer about CppUnit; I will have to look at that.
The xUnit family can be used for unit tests. It exists for plain C++ code (CppUnit) and for .NET code (NUnit).
Boost has a test library you can have a look at, among many others around.
The last time I did some work in Qt, I used Qt's QTestLib for unit tests. It worked well for my lo-fi needs. http://doc.qt.nokia.com/4.6/qtestlib-manual.html
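For reference, a tiny QTestLib test looks roughly like this (the class and file names are just examples):
#include <QtTest/QtTest>

// each private slot is executed as one test function
class TestQString : public QObject
{
    Q_OBJECT
private slots:
    void toUpper()
    {
        QString str = "Hello";
        QCOMPARE(str.toUpper(), QString("HELLO"));
    }
};

QTEST_MAIN(TestQString)
#include "testqstring.moc"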