Organizing unit testing for existing code - unit-testing

I was recently given the task of maintaining and improving existing code written in C++ with MS Visual Studio. The code builds into an exe file (not a dll). I would like to add unit tests for the code, and the problem I ran into is how to organize my testing projects. Basically I want to have two projects: one would be the original project I received, and the second the testing project.
I saw on the Internet that when the subject being tested is built into a dll it's usually quite easy: you statically link the lib built from the main project into your testing project and you have access to the functions being tested. But how can this be done when the subject under test is an exe file?

Surely you can arrange the solution into projects that share code, where one project outputs to exe and the other(s) to DLL?

Whatever the project deliverable is, unit testing means testing the smallest units: the functions. A unit test typically follows the triple-A (AAA) pattern: Arrange (create the environment for the test), Act (invoke the method under test), Assert (verify the method behaved as expected). A minimal example is sketched below.
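A minimal sketch of that structure, using nothing but the standard library and assert() (any unit test framework follows the same three steps):

#include <cassert>
#include <vector>

void push_back_appends_the_element()
{
    std::vector<int> values{1, 2};   // Arrange: create the environment for the test
    values.push_back(3);             // Act: invoke the method under test
    assert(values.size() == 3);      // Assert: verify the method behaved as expected
    assert(values.back() == 3);
}

int main() { push_back_appends_the_element(); }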
There are several possible project structures: modify the project so that it compiles into a DLL, plus a production executable and a unit test program. The executable's source should be as small as possible, ideally just a main() function that creates an Application object. It is also possible to have three projects: one for the DLL, one for the application, and a third one for the tests.
An alternative is to embed the unit tests inside the executable and to have a means of invoking them, e.g. a special --unit-test parameter, as sketched below.
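A minimal sketch of how that switch could be wired up; runAllTests() and runApplication() are placeholders for the embedded test runner and the existing application code:

#include <cstring>

int runAllTests();                           // entry point of the embedded tests
int runApplication(int argc, char** argv);   // the normal application start-up

int main(int argc, char** argv)
{
    if (argc > 1 && std::strcmp(argv[1], "--unit-test") == 0)
        return runAllTests();                // run the embedded unit tests
    return runApplication(argc, argv);       // otherwise behave as usual
}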

Related

Is it possible to perform unit testing on dll's methods without an executable during build process?

I have 63 DLLs with various C++ methods in each. I want to validate the output of some of the methods with fixed input values. I'm wondering if it is possible to do unit testing in the DLL itself during the compilation build process.
That way, building a DLL would show the results of the unit tests in the Output window of Visual Studio.
I know that I can validate this scenario by creating an executable file and calling the methods. But is it possible without an executable file?
As others have said - testing "during compilation" does not make sense, so I'm assuming you mean testing during the build process, which is different and of course possible using post build steps etc.
You don't specify which version of Visual Studio you use, but if you have VS2012, there is an MSDN article that describes exactly how to do what you describe; see the link for the full instructions.
Taking your question verbatim, the answer is "no", because you can't test a DLL when you haven't even finished compiling it. Also, you need some kind of executable to load that DLL, so either you load it with a scripting language (Python with ctypes comes to mind) or you create an executable.
Calling that from a post-compile step in Visual Studio, as suggested by shivakumar is probably the only way to get the results into the output window. I personally prefer running this from an external build script, but I'm also cross-compiling a lot and I can't run things from a post-compile step there. This also makes it easier to debug the unit tests when something fails.
You have to wait for compilation to complete so that there are no compilation errors in the code.
In the post-build event you can add batch files which will run your unit test modules and validate the binaries generated after compilation.
You are asking for a thing that does not make sense. When you say "compiling" that means a very specific thing: invoking the compiler, before invoking the linker. But C++ code (and C++ unit tests) do not work like that. The compiler must finish compiling both your production code and your tests, and the object files must then be linked into libraries, executables, or both. A test framework must then execute the test code which calls your production code in order to get results. None of these steps are optional in C++.
Instead, you probably intended to ask if you could run the unit tests as part of the build (not compile). And the answer to that is an emphatic "yes!"
I'm guessing that your solution is likely structured into 63 or more individual DLL projects. For each production DLL you are going to test, such as Foo.DLL, I recommend you add a new FooTest project, with the unit test code added to the FooTest project. In FooTest, create a project dependency upon the Foo project, which will force FooTest to build after building Foo. In the FooTest project you would have two kinds of code modules: classes containing your unit tests, and a FooTest.cpp that would house the main() entrypoint of the FooTest.EXE program, invoking the testing framework, and outputting the results to the console.
Create your FooTest.cpp so that it's a console program. If you format your test executable's output so that it matches the output of the Visual Studio compiler, as in "filename.cpp(lineNo) : error: description of failure", Visual Studio will automatically navigate to the file and line if you click on it. Unit test frameworks such as CppUnit may already have a "CompilerOutputter" class that will properly format the output to match your compiler's errors.
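As a sketch (assuming CppUnit and its TextUi::TestRunner), FooTest.cpp's entry point might look like this:

#include <iostream>
#include <cppunit/CompilerOutputter.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main()
{
    CppUnit::TextUi::TestRunner runner;
    // Pick up every suite registered with CPPUNIT_TEST_SUITE_REGISTRATION.
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    // Report failures as "file(line) : error : message" so Visual Studio can
    // navigate to them from the Output window.
    runner.setOutputter(new CppUnit::CompilerOutputter(&runner.result(), std::cerr));
    bool succeeded = runner.run();
    return succeeded ? 0 : 1;
}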
In your FooTest project, you also need to set the input to the FooTest linker so that it can link in the production code you are trying to test. In the properties of the FooTest project, go to the Linker/Input tab and add the path to your Foo project's OBJ files to the Additional Dependencies. The line I use looks like this: $(SolutionDir)Foo\Debug\obj\*.obj
In the Build Events properties of the FooTest project, invoke your new FooTest.EXE as a post-build step. Then, every time you click build, your code will be built and your unit tests will be executed. The project dependency will ensure that if you change your Foo code, you will compile, link, and execute the FooTest tests. And the console output ensures that your test results will appear as clickable output in your IDE.
You could create 63 separate unit test executables, or you could create one all-encompassing unit test executable. That's entirely your choice. If you are looking to make the builds and links happen quicker, you will probably want to have the separate executables; even though it's a bit more individual configuration work, you do it only once, and after that you retain the benefits of quick builds for small changes.
Now you're ready to do some serious coding.

What is an effective way to organize C++ projects that are going to be unit tested?

I am wondering what would be an effective way to organize C++ projects and classes that are going to be unit tested. I have read many SO posts related to unit testing but couldn't find practical examples.
Here are some ways I have collected:
Method A
Project A: Application (.exe) project that "include" the classes from Project C
Project B: Unit test (.exe) project that "include" the classes from Project C
Project C: Static library (.lib) project that keeps all classes that Project A uses
Method B
Project A: Application (.exe) project with all classes inside itself.
Project B: Unit test (.exe) project that "links" to classes in Project A
Method C (from Miguel)
only one project, with three configurations:
Debug: builds your Application .exe in debug mode.
Release: builds your Application .exe in release mode.
Test: builds the unit test framework, replaces your app's main() with the unit testing main()
Which is the more appropriate way? Do you have any other suggestions?
I have previously used the first method quite well. Have most of your code in a static library project, have the main executable project just contain the main function, and have your tests and the test main function in a third project. The two executable projects will link to the static library and reuse the code.
The main benefits in doing it this way are:
The code that is being tested is the exact same build as is used in your application.
You can test both the debug and release configurations to ensure that both work as expected. (You can extrapolate debug and release for any configurations that you might require.)
Build time is minimised since the same built library is used in both executable projects.
You can have the build system build both the test and main executables at the same time, and also run the test executable after building.
There's not that much difference actually, as you can always compile the exe as a static library and link against the unit tests. Conceptually, Method A is slightly cleaner, but there's nothing preventing you from using Method B. It basically boils down to which one is easier to do with your build system.
I don't think you'll gain much by moving the classes of your application to a static library. You should also consider that you may want to modify your classes when you compile them for testing, for example by adding additional convenience methods that are not necessary for the application, so in the end putting the classes in a library may not help at all since you will need a special version of these classes when running tests.
I would like to suggest the following as a better option than your methods A and B:
METHOD C
only one project, with three configurations:
Debug: builds your Application .exe in debug mode.
Release: builds your Application .exe in release mode.
Test: builds the unit test framework, replaces your app's main() with the unit testing main()
If you think you need to, you can split the Test target into Debug and Release as well.
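As a sketch, the single main.cpp could switch on a macro defined only by the Test configuration; the macro name UNIT_TESTS and both functions below are illustrative, not prescribed:

#ifdef UNIT_TESTS
int runAllUnitTests();                      // provided by the unit test framework
int main() { return runAllUnitTests(); }    // Test: the unit-testing main()
#else
int runApplication(int argc, char** argv);  // the application proper, defined elsewhere
int main(int argc, char** argv) { return runApplication(argc, argv); }  // Debug/Release
#endif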

Unit Testing legacy C++ Code with CPPUnit

I am tasked with managing a large code base written in VC++ 6.0, and I need to start building unit tests for portions of the code. I have set up CPPUnit and it works with my project's DLLs. The problem I am facing is as follows: the legacy application is made up of 10 static libraries and one huge executable MFC application that contains 99% of the code. My unit test framework runs in another project within the same workspace and tests the 10 libraries with no problem; all includes and references are fine. But when I try to do the same for the large MFC application I get a linker error, as I do not have a dll for the application. Is there any way to unit test the application without putting the test code directly inside the application?
You should carry on as you are:
You have one test application that references libraries.
You have one main application that also references those libraries.
Either move code from the main application into the existing libraries, or, preferably, move code into new libraries. Then your test application can access more code without ever referring to the application.
You know you are done when the source for the application consists of one module which defines main(), and everything else is in libraries which are tested by the test application.
My experience with unit testing is usually the opposite: create a project for your tests, then import code from your other projects.
You can't link to the MFC application, probably because your functions aren't exported. They exist, but have no means of communicating with other applications, unlike DLLs.
I know of no way to link against an executable file. Refactoring the code by moving the business logic to a DLL and leaving the application as a "front end" would be the most obvious solution. However, since it is legacy code it is likely more appropriate to simply duplicate the code for the purposes of unit testing. This is not ideal, and since it is an MFC application it may not be trivially easy.
To test your main application you can set up a test project which includes the source files you want to test - not sure how easy it is to achieve with VC6, do not have it at hand, but in VS2005 and later this is quite straightforward.
So in your solution you end up with a structure like this:
MyLegacySystem.sln
    MyApplication.proj
        Main.cpp
        BusinessRules.cpp
    MyApplicationUnitTests.proj
        UnitTestsMain.cpp
        BusinessRules.cpp
        BusinessRulesTests.cpp
If for whatever reason you cannot include your source files in 2 projects, you can pull the sources into your test project by invoking the preprocessor magic:
BusinessRulesStub.cpp:
#include "..\src\BusinessRules.cpp"
However, this is essentially a temporary fix. As already suggested, in the end most of the code should be extracted into separate libraries.
If you can't refactor your project to move the business logic into a new static library, try linking your test project against your project's intermediate object files, which you can probably find in BigProject\debug or BigProject\debug\obj . You can't link to the .EXE as you've discovered.
This achieves the same results as the copy process that Chad suggested while avoiding the actual duplication of source code, which would be a really bad thing.

How do you run your unit tests? Compiler flags? Static libraries?

I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++.
My general approach so far has been to do one of three things:
Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but clutters up the code a bit.
Embed the testing code directly into the application itself, and, given certain command-line switches either run the application itself or run the tests embedded in the application.
None of these solutions are particularly elegant...
How do you do it?
Your approach no. 1 is the way I've always done it in C/C++ and Java. Most of the application code is in the static library and I try to keep the amount of extra code needed for the application to a minimum.
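As a sketch of the test-runner executable in approach no. 1, using the google testing framework mentioned in the question (add() is a hypothetical function; in a real project it would come from the static library's header rather than the declaration below):

#include <gtest/gtest.h>

int add(int a, int b);   // implemented in the shared static library

TEST(StaticLibrary, AddComputesSum)
{
    EXPECT_EQ(5, add(2, 3));   // exercises code built once into the shared .lib
}

int main(int argc, char** argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}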
The way I approach TDD in Python and other dynamic languages is slightly different in that I leave the source code for the application and tests lying around and a test runner finds the tests and runs them.
I tend to favour static libs over dlls so most of my C++ code ends up in static libs anyway and, as you've found, they're as easy to test as dlls.
For code that builds into an exe, I either have a separate test project which simply includes the source files that are under test (the ones usually built into the exe), or I build a new static lib that contains most of the exe and test it in the same way that I test all of my other static libs. I find that I usually take the 'most code in a library' approach with new projects, and the 'pull the source files from the exe project into the test project' approach when I'm retrofitting tests to existing applications.
I don't like your options 2 and 3 at all. Managing the build configurations for 2 is probably harder than having a separate test project that simply pulls in the sources it needs and including all of the tests into the exe as you suggest in 3 is just wrong ;)
I use two approaches, for dlls I just link my unit tests with the dll, easy. For executables I include the source files that are being tested in both the executable project and the unit test project. This adds slightly to the build time but means I don't need to separate the executable in to a static lib and a main function.
I use boost.test for unit testing and CMake to generate my project files, and I find this the easiest approach. Also, I am slowly introducing unit testing to a large legacy code base, so I am trying to introduce the least amount of change so as not to inconvenience other developers and discourage them from unit testing. I would worry that using a static library just for unit testing might be seen as an excuse not to adopt it.
Having said this, I think the static library approach is a nice one especially if you are starting from scratch.
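For reference, a minimal Boost.Test module of the kind described above might look like this; multiply() is a stand-in for whatever production code you pull into the test project:

#define BOOST_TEST_MODULE LegacyTests
#include <boost/test/included/unit_test.hpp>   // header-only runner, supplies main()

int multiply(int a, int b) { return a * b; }    // stand-in for real production code

BOOST_AUTO_TEST_CASE(multiply_handles_zero)
{
    BOOST_CHECK_EQUAL(multiply(0, 7), 0);
}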
For C/C++ apps I try to have as much code as possible in one or more dlls, with the main application being the bare minimum needed to start up and hand off to the dll. Dlls are much easier to test because they can export as many entry points as I like for a test application to use.
I use a separate test application that links to the Dll(s). I'm strongly in favour of keeping test code and "product" code in separate modules.
I go with #1; some reasons are:
It lets you check that each lib links correctly
You don't want extra code in the product
It's easier to debug individual small test programs
You may need multiple executables for some tests (like communication tests)
For C++ build and test, I like to use CMake which can run a selection of the target executables as tests and print a summary of the results.
Personally, I use another approach that builds a bit on yours:
I keep the project to test intact. If it's an executable, it should stay an executable. You simply create a post-build action to aggregate all the obj files into a static library.
Then you can create your test project, linking in the test framework and your previously generated static library.
Here are some topics corresponding to your question:
Visual Studio C++: Unit test exe project with google test?
Linker error - linking two "application" type projects in order to use Google Test
I'm using a third-party test runner with its framework and including testing in the build script. Tests are outside of the production code (external dll).

Seeking suggestions for Unit Testing C++ app spread over several dlls

New to unit testing, and I have an app that is spread out over several dlls. What are some suggestions for unit testing the app? If I put the unit tests in their own project, I can only test the published interface for each dll. Is that what most people do? Or should I put the unit tests in with the code in the dlls and test there? What would be the best way to coordinate all the tests at that point?
Most of the examples I've seen for using the various frameworks don't really address this and I've search elsewhere but haven't found much information.
Thanks.
Update:
I guess to clarify a bit: I have class A and class B in a dll. Class B is used by class A, but only class A is exposed. I want to get unit tests in place before refactoring this existing code.
So the question is: should I put the unit tests in with the dll code to test class A and class B directly, and/or have the unit tests in a separate project and test class A and class B through the exposed class A?
My home-rolled approach to this is to build two different targets:
the DLL itself
a test executable
The code base is the same except for main.cpp which will contain a DLL main for the DLL and a standard C/C++ int main() for the test executable. All the unit tests are in-line with the actual code, controlled by a preprocessor #define:
#ifdef TESTING
tests here
#endif
The main() for the executable calls all the tests via a registration framework.
Most of the time I work with the executable version with TESTING defined. When I want to build the DLL, I switch targets and undefine TESTING.
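A minimal sketch of what such a home-rolled registration framework might look like; every name here (UNIT_TEST, TestRegistrar, runAllTests) is illustrative rather than taken from the description above:

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct TestCase { std::string name; std::function<void()> body; };

// Single registry of all tests, filled at static-initialisation time.
inline std::vector<TestCase>& allTests()
{
    static std::vector<TestCase> tests;
    return tests;
}

struct TestRegistrar {
    TestRegistrar(const char* name, std::function<void()> body)
    {
        allTests().push_back({name, std::move(body)});
    }
};

// Defines a test function and registers it with the framework.
#define UNIT_TEST(name) \
    static void name(); \
    static TestRegistrar name##_registrar(#name, &name); \
    static void name()

#ifdef TESTING
UNIT_TEST(addition_works)
{
    if (2 + 2 != 4) std::printf("addition_works FAILED\n");
}
#endif

// The test executable's main() simply runs every registered test.
int runAllTests()
{
    for (const auto& test : allTests())
        test.body();
    std::printf("%zu tests executed\n", allTests().size());
    return 0;
}

int main() { return runAllTests(); }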
Unit testing in C++ really means class testing; testing a whole DLL would rather be integration testing. Both are necessary, but it's better to test things at as low a level as possible. Take a look at the V-model: http://en.wikipedia.org/wiki/V-Model_(software_development).
If I put the unit tests in their own project, i can only test the published interface for each dll.
As opposed to features in the DLL that client code cannot access?
If code doesn't contribute to the functions the DLL exposes, then delete it from the code base. If it does, then you can write a test that exercises that code path.
In general, thoroughly testing the public interface of a DLL should be sufficient and, as Pete indicated, should indeed exercise all code paths.
Regarding Neils' answer, choosing a test framework that does not require you to build EXEs but rather directly operates on DLLs could further simplify matters. Being the author of cfix, I'd naturally recommend giving cfix a try: http://www.cfix-testing.org/
I use boost.test to test my dlls and executables. I create a separate unit test project/executable for each dll and exe to test. Because I test the internals of the dll, I do not link it with the test project but include the source I want to test directly in the project. Finally, the automated build runs all the unit test projects.
We use CMake to build our projects and this has a nice component CTest that bundles up all our tests for us and runs them as a group.