Autotools: How to run a single test using make check? - unit-testing

I'm currently working on a project that uses autotools to generate a makefile. As usual, this makefile supports make check, to run the entire test suite. However, I'm working on a small part of the program, where only a few unit tests apply. Is it possible, using make check, to run a single unit test or a selection of unit tests, instead of the entire suite?
Edit:
Evan VanderZee suggests that it depends on the directory structure, so let me explain a bit more about the directory structure. The directory structure for this project is pretty flat. Most source files are in the src/ directory, with some grouped in a subdirectory under src/. The unit tests are in src/tests. None of the source directories contain a Makefile. There is only a makefile (generated by configure) at the top level.

If you are using Automake's TESTS variable to list the tests that make check runs, then there is an easy shortcut: make check TESTS='name-of-test1 name-of-test2'. The TESTS variable passed on the command line overrides the one in the Makefile.
Alternatively, run export TESTS='names-of-tests'; make -e check to take the value of TESTS from the environment.
If you are not using TESTS but instead check-local or some other target, then this won't work.
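For reference, a minimal sketch of what such an Automake setup might look like; the test names and paths here are hypothetical, not taken from the asker's project:

```makefile
# Hypothetical fragment of a top-level Makefile.am; test names are made up.
check_PROGRAMS = test_foo test_bar
test_foo_SOURCES = src/tests/test_foo.c
test_bar_SOURCES = src/tests/test_bar.c

# "make check" runs everything listed in TESTS; overriding TESTS on the
# command line (make check TESTS='test_foo') runs only that subset.
TESTS = $(check_PROGRAMS)
```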

The answer to that depends on the specific makefiles you are using. If the tests have a directory structure, you can often navigate to a subdirectory and run make check in the subdirectory.
Since you explained later that your directory structure is flat, it is hard to say without actually seeing some of the makefile. If you know the name of the executable that is created for the specific test you want to run, you might be able to run make name_of_test to build that test. This will not run the test, only build it. The built test binary may reside in the test directory; after that you can go into the test directory and run it the way you would run any executable. If the test depends on shared libraries, though, you may need to tell it where to find them, perhaps by adding the library directories to LD_LIBRARY_PATH.
If this is something you would want to do often, the makefile can probably be modified to support running the specific tests that you want to run. Typically this would involve editing a Makefile.am or Makefile.in and then reconfiguring, but I don't have enough information yet to advise what edits you would need to make.
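As a sketch of the kind of edit that might help, assuming a flat layout like the one described; the target name, library path, and test location below are invented for illustration:

```makefile
# Hypothetical rule added to the top-level Makefile.am: build and run a
# single test by name, e.g.  make check-one TEST=test_foo
check-one:
	$(MAKE) $(TEST)
	LD_LIBRARY_PATH=$(abs_top_builddir)/src/.libs ./src/tests/$(TEST)
```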

Related

Testing our CMake Modules

We created multiple additional functions for CMake. They became quite a lot, and we need to (unit) test them.
There are simple ones that only operate on variables, like:
function(join_list LIST GLUE)
These can be tested with a custom CMake script that checks the results.
For this we also wrote a set of assert macros.
It becomes much harder when the functions are target-based:
function(target_my_custom_property_set TARGET VALUE)
We need multiple CMakeLists.txt files that have to be configured. Configuration must succeed or fail with specified messages, and the resulting files must also be checked.
I wonder, is there an easier way? Is there an existing framework? How does Kitware test the shipped modules?
ctest is the framework for running all sorts of tests. There are many tests for CMake itself that get run as part of the CMake Testing Process. These tests are part of the source code in the Tests folder and are wired up in CMakeLists.txt.
The specific tests you want to look at are located in the RunCMake folder. These tests utilize RunCMake.cmake. A good example is the set of tests for message. What these tests do is use execute_process to run cmake, capture the output of the configure step, and compare it to the contents of a file with the expected output. The return code from cmake is also captured and can be tested.
You don't specify what your "result files" are. There are more complicated examples that perform a configure and build and then scan some files to verify their contents.
It may be easier if you separate the cases: checking messages from a failed configure, vs. a failed build, vs. a passing configure and build with a specific output message.
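As a rough sketch of the RunCMake-style approach, a small driver script (run with cmake -P) can configure a test case and check both the exit code and the error output. The variable names and the expected message below are placeholders:

```cmake
# Hypothetical driver script: configure one test case in a scratch
# directory and verify how (and why) it fails.
execute_process(
  COMMAND ${CMAKE_COMMAND} ${case_source_dir}
  WORKING_DIRECTORY ${case_binary_dir}
  RESULT_VARIABLE exit_code
  ERROR_VARIABLE stderr_output
)
if(exit_code EQUAL 0)
  message(FATAL_ERROR "configure unexpectedly succeeded")
elseif(NOT stderr_output MATCHES "${expected_message}")
  message(FATAL_ERROR "configure failed for the wrong reason: ${stderr_output}")
endif()
```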

Are test sources handled differently by CMake?

I am building an application with CMake, which produces libraries and executables, in text mode and GUI mode (Qt5), and of course unit tests.
My experience is that if I modify anything but the test sources and then want to run, CMake first builds the new executable(s). If I modify any of the test sources,
CMake runs the old test executable immediately, so I need to compile the new tester explicitly before running it. The tests are in a separate subdirectory, the structure is similar to that of the other components, and the sources are defined by a
set(MY_SRCS list of individual sources)
Any idea what could cause that difference? (Although it is only a nuance.)
The make test target generated by CTest only executes the tests you added using add_test(); it does not build them. Since it does not build them, it also does not check for changes in the source files.
You can solve this issue by adding a custom target (e.g. make check) that first builds your tests and then executes them: CMake & CTest : make test doesn't build tests.
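The linked approach boils down to something like the following sketch; the target and file names here are examples, not from the asker's project:

```cmake
enable_testing()

# Build the test binary only when something that needs it is built.
add_executable(my_tests EXCLUDE_FROM_ALL test_main.cpp)
add_test(NAME my_tests COMMAND my_tests)

# "make check" rebuilds the tests and then runs ctest,
# unlike "make test", which only runs ctest.
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} --output-on-failure)
add_dependencies(check my_tests)
```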
Not sure if this answers the question, since you do not specify how you create and execute your unit tests.

Using googletest to run unit-tests for multiple tested modules at once

I'm working on a large C++ project which contains more than 50 libraries and executables, and I am starting to add googletest tests for each of these modules. I read that Google recommends putting the tests in executables, not in libraries, to make life easier. Creating a separate test executable for each component, I would end up with more than 50 test executables, and in order to run them all at once I would need to create an external script, which would also need to combine their output into a single one.
Is that the recommended thing to do?
Or should I create a library with the tests of each separate module and link all these libraries into a single test executable? But then running the tests for a single module becomes less convenient: I would need to build all the tests and use the --gtest_filter flag to tell the main test executable which tests to execute.
It would really help me to hear how other people do this and what is the best practice here.
Thank you
[...] and in order to run them all at once I would need to create an external script which would also need to combine their output to a single one.
Maybe it's not actually necessary to combine the output into a single file. For example with Jenkins you can specify a wildcard pattern for the Google Test output files.
So if you actually just want to see the Google Test results in Jenkins (or Hudson or whatever CI tool you use), this might be a possible solution:
You would run all of your test executables from a simple script (or even from a make rule) with the parameter --gtest_output=xml: followed by a directory name (i.e. ending with a slash). Every test executable will then write its own XML file into that directory, and you can configure your CI tool to read all the files from that directory.
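A minimal sketch of such a make rule, assuming the test binaries all end in _test and live under a build/tests/ directory (both assumptions, adjust to your layout):

```makefile
# Hypothetical rule: run every googletest binary, each writing its own
# XML report into test-results/ for the CI tool to pick up.
TEST_BINARIES := $(wildcard build/tests/*_test)

.PHONY: run-all-tests
run-all-tests:
	mkdir -p test-results
	set -e; for t in $(TEST_BINARIES); do \
		$$t --gtest_output=xml:test-results/; \
	done
```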

Running Nested Tests with CTest

I have a small but non-trivial project which, for architectural reasons, is built as three separate projects. They are inter-dependent, so unless I'm particularly focused, or improving test coverage having spotted a hole, it makes sense for me to work from the project root.
The layout is as such:
/CMakeLists.txt
/build/
/src/command-line-application/
/src/command-line-application/CMakeLists.txt
/src/command-line-application/build/
/src/command-line-application/src/
/src/command-line-application/tests/
/src/command-line-application/include/
/src/vlc-plugin/
/src/vlc-plugin/src/
/src/libmyproject/
/src/libmyproject/CMakeLists.txt
/src/libmyproject/build/
/src/libmyproject/src/
/src/libmyproject/tests/
/src/libmyproject/include/
/src/libmyotherproject/
/src/libmyotherproject/CMakeLists.txt
/src/libmyotherproject/build/
/src/libmyotherproject/src/
/src/libmyotherproject/tests/
/src/libmyotherproject/include/
A word on the architecture: libmyproject is the real meat of my application. It's built this way because a CLI is a horrible way to ship code to end-users; as a library, it is also used from C# and Objective-C applications (and all that works as expected).
The libmyotherproject is some platform specific support code, not directly connected to libmyproject, it has a few unit tests.
The vlc-plugin isn't important here, except to show that not everything in /src/*/ has unit tests.
My workflow is typically to hack on the CLI app until something useful crops up, and then refactor it into the library, and make sure it's portable.
When I'm working in /src/*/build/, typically running cmake ../ && make && ctest --output-on-failure, everything works.
When I'm working in /build and run cmake, the individual components are built correctly (using add_subdirectory() from CMake), but CTest does not recursively find the tests.
The documentation for CTest is a little unhelpful in what you should do:
USAGE
ctest [options]
DESCRIPTION
The "ctest" executable is the CMake test driver program. CMake-generated build trees created for
projects that use the ENABLE_TESTING and ADD_TEST commands have testing support. This program will
run the tests and report results.
I would have expected, since the ADD_TEST() calls live in /src/libmyotherproject/tests/CMakeLists.txt, that they would be run? (They are at least compiled when I run cmake from /build/.)
I hope I have been able to provide enough information, thank you.
Put
include(CTest)
in your top level CMakeLists.txt file before you make any add_subdirectory calls.
That will call enable_testing for you, and also set things up if you ever want to run a ctest dashboard script on the project to send results to a CDash server.
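Concretely, the top-level /CMakeLists.txt would start roughly like this; the project name and minimum version are placeholders, while the subdirectory paths are taken from the layout above:

```cmake
cmake_minimum_required(VERSION 3.5)
project(myproject)

include(CTest)  # calls enable_testing() and adds the dashboard targets

add_subdirectory(src/libmyproject)
add_subdirectory(src/libmyotherproject)
add_subdirectory(src/command-line-application)
```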

Beginning Code::blocks and UnitTest++

I'm about to start a C++ project but I'm stuck at the basics.
I want to use the (linux) Code::Blocks IDE, and it's easy to create a normal project. However I want to do TDD using the UnitTest++ framework, and I don't know how to set everything up cleanly.
I've already asked a question about where to put the UnitTest::RunAllTests() command, and they told me the best place is the main() of a separate program.
How do I go about doing this in Code::Blocks? I think I need to create 2 projects:
The "real" project with its own main();
The unit testing project containing the tests and the main() with UnitTest::RunAllTests() inside.
Then somehow have the first project build and run the second during its build process. I don't know how to do that yet but I can find out on my own.
My questions are:
Is this the right method?
Do I also have to create a project for the UnitTest++ framework, in order to let other people build it on other platforms? Or is dropping the compiled library in the project's path enough?
How can I organize the directories of these projects together? It'd be nice to put the tests related to each package in the same directory as that package, but is it OK to have multiple projects in the same directory tree?
I'll partly answer my own questions, as I've managed to get everything working.
Following the instructions on the official documentation page, I've put the UnitTest++ folder with the compiled library and all the source files in my project's path.
Then I created a test project for all the unit testing, with a main function containing the famous UnitTest::RunAllTests(). I added $exe_output as a post-build step here, in order to have the tests executed automatically every time I build this project.
I created the "real" project where my code to be tested will go. In the build settings I specified the test project as a dependency of the real project, so that every time I build the real one, it also builds the test project first.
With these settings I can work on my tests and on the real code, and I only have to build the real one to have the updated tests executed. Any failing test will also make the build fail.
Now two questions remain: "is this the best approach?" and "right now each project lives in a different directory. Is it wiser to leave it this way or should I put each test in the same folder as the real code to be tested?"