Boost unit testing - where to locate makefiles - C++

I would like to use Boost for unit testing. Writing the unit tests is not a problem. However, I am a little unsure how to structure the makefiles relating to each unit test. I am not an expert on makefiles, so I would like to describe how my code base is laid out, explain what I would like to do, and then ask for the best solution.
I have a code base which is a mixture of Python, C++ utility functions in headers, and one C++ application/library that uses classes/functions located in other folders (which are not subfolders). So, unsurprisingly, I currently have only one makefile, for the application.
I would like to add unit tests in each of the various folders my application uses.
I would also like to be able to run all the unit tests (across the multiple folders) by running one executable.
Do I write a Boost unit test source file in each folder and add a corresponding makefile, one per folder I wish to unit test? How do I structure this so that I can run all the unit tests from one executable?
Also, do my unit test makefiles need to link against my application's object files, or is it enough to point gcc at the required headers with the -I flag?
(I am using regular GNU Make)
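For reference, the usual Boost.Test layout for a single test runner looks roughly like the sketch below (this assumes you link against the compiled Boost unit test framework library; the file, folder and function names here are placeholders, not part of the question):

    // test_main.cpp - the only file that defines the module name; when the
    // program is linked against the Boost.Test library, main() is supplied for you.
    #define BOOST_TEST_MODULE AllTests
    #include <boost/test/unit_test.hpp>

    // foo/test_foo.cpp - one such file per folder you want to cover.
    #include <boost/test/unit_test.hpp>
    #include "foo.h"   // hypothetical header from the folder under test

    BOOST_AUTO_TEST_CASE(foo_returns_expected_value)
    {
        BOOST_CHECK_EQUAL(foo::answer(), 42);   // hypothetical function
    }

The makefile then compiles test_main.cpp together with every folder's test sources into one executable and links it with -lboost_unit_test_framework (plus -DBOOST_TEST_DYN_LINK if the shared variant is used). -I flags are enough for header-only utilities, but anything implemented in the application's .cpp files has to be compiled or linked into the test runner as well.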

Related

Best way of including Unit Tests in a C++ open source project?

I'm planning to release a C++ project I've been working on as my first open source project. I use GTest as my unit test framework, and I don't know what the common procedure is for including this dependency in a public project.
Right now I have GTest as a submodule of my main project, but other projects don't usually seem to have any submodule dependencies, and it feels wrong to make people clone GTest as part of my project: if they're already using it for their own unit tests, they'll end up with duplicated code, etc.
What's the common procedure for cases like this?
Thank you very much!
Usually we put the tests behind a compilation flag (or a CMake variable, or an autotools configure flag, or ...) and disable compilation of the tests when the framework is not detected or the person compiling has not asked for them.
That way, the dependency is not required, but users who wish to run the unit tests can add it.

Autotools: How to run a single test using make check?

I'm currently working on a project that uses autotools to generate a makefile. As usual, this makefile supports make check, to run the entire test suite. However, I'm working on a small part of the program, where only a few unit tests apply. Is it possible, using make check, to run a single unit test or a selection of unit tests, instead of the entire suite?
Edit:
Evan VanderZee suggests that it depends on the directory structure, so let me explain a bit more about the directory structure. The directory structure for this project is pretty flat. Most source files are in the src/ directory, with some grouped in a subdirectory under src/. The unit tests are in src/tests. None of the source directories contain a Makefile. There is only a makefile (generated by configure) at the top level.
If you are using Automake's TESTS variable to list the tests that make check runs, then there is an easy shortcut: make check TESTS='name-of-test1 name-of-test2'. The TESTS variable passed on the command line overrides the one in the Makefile.
Alternatively, run export TESTS='names-of-tests'; make -e check to take the value of TESTS from the environment.
If you are not using TESTS but instead check-local or some other target, then this won't work.
The answer to that depends on the specific makefiles you are using. If the tests have a directory structure, you can often navigate to a subdirectory and run make check in the subdirectory.
Since you explained later that your directory structure is flat, it is hard to say without actually seeing some of the makefile. If you know the name of the executable that is created for the specific test you want to run, you might be able to run make name_of_test to build that test. This will not run the test; it will only build it. The test may reside in the test directory after it is built. After that you can go into the test directory and run the test the way you would run any executable, but if the test depends on libraries, you may need to tell it where to find them, perhaps by adding the library directories to LD_LIBRARY_PATH.
If this is something you would want to do often, the makefile can probably be modified to support running the specific tests that you want to run. Typically this would involve editing a Makefile.am or Makefile.in and then reconfiguring, but I don't have enough information yet to advise what edits you would need to make.

Building a project with unit tests in one executable

I'm setting up a project in C++ using CMake and Catch/gmock for unit testing (the exact choice is not very important; unit testing frameworks work similarly). I'm targeting Windows (MSVC compiler) and Linux platforms. I'd like to put all the tests in a single executable, as described in Catch's tutorial. For unit testing purposes I will probably write some fake implementations/mocks. I'm afraid that when I build everything into one executable (test sources, project sources, fake implementations) I will get linker multiple-definition errors, since there will be multiple definitions of some functions: the real ones and the fake ones. Possible solutions I see now:
Build multiple executables, each with the right implementations - I see this as the worst solution, because I will end up with a bunch of files, all of which I will have to execute to test the program (probably via some script, but that's not a convenient option).
Compile each test together with its dependencies into a shared library, and link them all against the main test-runner executable. I think this will be quite hard to achieve, especially as I'm not familiar with Linux shared libraries. On Windows it should be doable with some dllexports, but it is not really easy.
How should I solve this problem? What are real-world solutions for this? Maybe I'm missing something really simple and worrying about a nonexistent problem? I'd like a reasonably easy, multi-platform solution.

How do you run your unit tests? Compiler flags? Static libraries?

I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++.
My general approach so far has been to do one of three things:
Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but it clutters up the code a bit.
Embed the testing code directly into the application itself, and, depending on certain command-line switches, either run the application itself or run the tests embedded in it.
None of these solutions are particularly elegant...
How do you do it?
Your approach no. 1 is the way I've always done it in C/C++ and Java. Most of the application code is in the static library and I try to keep the amount of extra code needed for the application to a minimum.
The way I approach TDD in Python and other dynamic languages is slightly different in that I leave the source code for the application and tests lying around and a test runner finds the tests and runs them.
I tend to favour static libs over dlls so most of my C++ code ends up in static libs anyway and, as you've found, they're as easy to test as dlls.
For code that builds into an exe, I either have a separate test project which simply includes the source files that are under test (and that are usually built into the exe), or I build a new static lib that contains most of the exe and test that in the same way that I test all of my other static libs. I find that I usually take the 'most code in a library' approach with new projects, and the 'pull the source files from the exe project into the test project' approach when I'm retrofitting tests to existing applications.
I don't like your options 2 and 3 at all. Managing the build configurations for option 2 is probably harder than having a separate test project that simply pulls in the sources it needs, and including all of the tests in the exe as you suggest in option 3 is just wrong ;)
I use two approaches: for dlls I just link my unit tests with the dll, easy. For executables I include the source files that are being tested in both the executable project and the unit test project. This adds slightly to the build time but means I don't need to separate the executable into a static lib and a main function.
I use boost.test for unit testing and cmake to generate my project files, and I find this the easiest approach. Also, I am slowly introducing unit testing to a large legacy code base, so I am trying to make the fewest changes possible, lest I inconvenience other developers and discourage them from unit testing. I would worry that having to build a static library just for unit testing might be seen as an excuse not to adopt it.
Having said this, I think the static library approach is a nice one especially if you are starting from scratch.
For C/C++ apps I try to have as much code as possible in one or more dlls, with the main application being the bare minimum needed to start up and hand off to the dlls. Dlls are much easier to test because they can export as many entry points as I like for a test application to use.
I use a separate test application that links to the dll(s). I'm strongly in favour of keeping test code and "product" code in separate modules.
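As a rough illustration of that layout (all names here are made up, and the export macro is just the usual MSVC pattern rather than anything from a particular project):

    // widget_api.h - hypothetical interface header shared by the dll and the tests.
    #ifdef WIDGET_EXPORTS
    #  define WIDGET_API __declspec(dllexport)
    #else
    #  define WIDGET_API __declspec(dllimport)
    #endif

    WIDGET_API int widget_count();

    // test_widget.cpp - built into the separate test application, which is
    // linked against the dll's import library.
    #include "widget_api.h"
    #include <cassert>

    int main()
    {
        assert(widget_count() >= 0);   // trivial check against an exported entry point
        return 0;
    }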
I go with #1; some reasons are:
It allows you to check that each lib links correctly
You don't want extra code in the product
It's easier to debug individual small test programs
You may need multiple executables for some tests (like communication tests)
For C++ build and test, I like to use CMake which can run a selection of the target executables as tests and print a summary of the results.
Personally, I use another approach that builds a bit on yours:
I keep the project under test intact. If it's an executable, it stays an executable. You simply create a post-build action that aggregates all the obj files into a static library.
Then you can create your test project, linking against the test framework and the previously generated static library.
Here are some topics corresponding to your question:
Visual Studio C++: Unit test exe project with google test?
Linker error - linking two "application" type projects in order to use Google Test
I'm using a third-party test runner with its framework and including testing in the build script. The tests live outside the production code (in an external dll).

Organizing unit testing for existing code

I was recently given the task of maintaining and improving existing code written in C++ with MS Visual Studio. The code builds into an exe file (not a dll). I would like to add unit tests for the code, and the problem I encountered is how to organize my testing projects. Basically I want to have two projects: one would be the original project I received and the second the testing project.
I saw on the Internet that when the subject being tested is built into a dll it's usually quite easy: you statically link the lib built from the main project into your testing project and you have access to the functions being tested. But how can this be done when the subject under test is an exe file?
Surely you can arrange the solution into projects that share code, where one project outputs to exe and the other(s) to DLL?
Whatever the project deliverable is, unit testing tests the smallest units: the functions. A unit test typically follows the triple-A (AAA) pattern: Arrange (create the environment for the test), Act (invoke the method under test), Assert (verify the method behaved as expected).
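For illustration only (the Account class and the plain assert are stand-ins; any framework's assertion macros slot into the same shape):

    // A hypothetical AAA-style test for an Account class.
    #include <cassert>
    #include "account.h"   // hypothetical unit under test

    void deposit_increases_balance()
    {
        Account account(100);               // Arrange: set up the object under test
        account.deposit(25);                // Act: invoke the method under test
        assert(account.balance() == 125);   // Assert: verify the expected behaviour
    }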
There are several possible project structures. One is to modify the project so that it builds a DLL, a production executable and a unit test program; the executable's source should be as small as possible, ideally just a main() function that creates an Application object. It is also possible to have three projects: one for the DLL, one for the application and a third for the tests.
An alternative is to embed the unit tests inside the executable and to have a means of invoking them, e.g. with a special --unit-test parameter.
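A rough sketch of that alternative, with hypothetical names (Application and run_all_unit_tests() stand in for whatever the project actually provides):

    // main.cpp - production entry point, kept as thin as possible.
    #include <cstring>
    #include "application.h"     // hypothetical application class
    #include "test_registry.h"   // hypothetical: declares run_all_unit_tests()

    int main(int argc, char* argv[])
    {
        // The special switch runs the embedded test suite instead of the application.
        if (argc > 1 && std::strcmp(argv[1], "--unit-test") == 0)
            return run_all_unit_tests();

        Application app;
        return app.run(argc, argv);
    }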