Possible to combine code coverage results (VC++) - c++

I am using VC++ 2005 and 2008 on a project. Now I want to see whether the unit tests cover all the code, and I've run into a problem. We use Boost.Test for unit testing, and each file is designed to test a particular function or method. Each file is compiled into a separate executable.
I am able to view the results per executable in Visual Studio. What I am really interested in is the overall code coverage achieved by all the tests combined. Is there a way to combine the code coverage results?
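For context, each per-function test file looks roughly like the sketch below (the module, function, and values are made up for illustration); every such file is built into its own executable, which is why the coverage data ends up split across runs.

    // Minimal Boost.Test file, one of many; each compiles to its own .exe.
    #define BOOST_TEST_MODULE ParseIntTest
    #include <boost/test/included/unit_test.hpp>
    #include <cstdlib>

    // Stand-in for the production function under test; in the real project
    // this comes from the library the test executable links against.
    int ParseInt(const char* s) { return std::atoi(s); }

    BOOST_AUTO_TEST_CASE(parses_positive_numbers)
    {
        BOOST_CHECK_EQUAL(ParseInt("42"), 42);
    }

    BOOST_AUTO_TEST_CASE(parses_negative_numbers)
    {
        BOOST_CHECK_EQUAL(ParseInt("-7"), -7);
    }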

I don't know about Visual Studio's test coverage tools.
Our SD C++ Test Coverage Tool will combine test coverage vectors from a single instrumented set of source code, no matter how many times you compile/link it (as long as you don't change the source of the code being tested). This tool can be obtained for the Visual Studio dialect(s) of C++. SD's test coverage tools for other languages have this same property.

C++ Coverage Validator can combine results from different code coverage sessions. You can combine sessions interactively using the GUI or from the command line (so you can automate things).
Alternatively, you can set up automatic merging to a central session, so that every code coverage session gets merged into the central session automatically.

Related

How to get coverage from Bazel by running the program instead of writing unit tests?

My project is built with Bazel. Besides the unit tests for individual functions, it sometimes needs scenario testing, such as simulating user input as black-box testing. To make sure a test scenario is complex enough to find vulnerabilities, I want to know the coverage that these scenarios achieve.
I would like to avoid writing unit tests for this, for example by building with the LLVM flags -fprofile-instr-generate -fcoverage-mapping, running the test program directly, interacting with it, and then collecting coverage.
I read in Bazel's documentation that the coverage command runs unit tests to collect coverage, but can I get coverage directly from a running program?

Integrating Visual Studio 2012 into Cruise Control.NET

I am trying to easily display unit test results and code coverage reports from Visual Studio 2012 in the CruiseControl.NET build reports. The pieces are the following:
MSBuild - to build the project
vstest.console.exe - to execute Visual Studio unit tests and code coverage tools via the command line
custom console application to convert coverage report to XML
My problem is how to control the output file name for vstest.console.exe; I am not finding any way to do this. My only solution at this point is to write a custom script that finds the coverage file and the TRX file and renames them to known values. Then the CruiseControl tool can find the files properly.
Any help would be appreciated. Thanks,
The command line looks very limiting:
http://msdn.microsoft.com/en-us/library/vstudio/jj155796.aspx
"The TRX logger doesn't support any parameters (unlike the TFS publisher logger)."
From:
Specifying results filename for vstest.console.exe
Here is a general tip.
One way to think of CC.NET is like this: it's a big, fancy "MSBuild.exe" executor.
So if you can write up your logic in an MSBuild (.proj) file, you can get CC.NET to call it.
1. CC.NET calls a source-control retrieve task.
2. In that retrieve, there is a .proj file.
3. You get CC.NET to call "msbuild.exe MySolutionBuild.proj"
4. Have the .proj file run unit tests, create XML output, and create artifacts (.zip(s), .msi(s), etc.).
5. After the build, have CC.NET pull in the results (usually XML, via File-Merge) and have CC.NET send out emails (publishers).
If you do it this way, then if you ever move to TFS (or Jenkins or another CI server), you'll minimize the transition effort.
If you rely very heavily on CC.NET proprietary commands, you can get it to work, but it's harder to maintain IMHO.
Take a look at this post
http://rubenwillems.blogspot.be/2011/09/setting-up-ccnet-in-combination-with.html
It shows the steps needed.
Look at steps 4 and 5 there; they cover unit testing and coverage.

Is it possible to perform unit testing on a DLL's methods without an executable during the build process?

I have 63 DLLs with various C++ methods in each. I want to validate the output of some of the methods with fixed input values. I'm wondering if it is possible to do unit testing on the DLL itself during the compilation/build process,
so that building the DLL reports the unit test results in the Output window of Visual Studio.
I know that I can validate this scenario by creating an executable file and calling the methods. But is it possible without an executable file?
As others have said, testing "during compilation" does not make sense, so I'm assuming you mean testing during the build process, which is different and is of course possible using post-build steps, etc.
You don't specify which version of Visual Studio you use, but if you have VS2012, there is an MSDN article that describes exactly how to do what you describe; see it for the full instructions.
Taking your question verbatim, the answer is "no", because you can't test a DLL when you haven't even finished compiling it. Also, you need some kind of executable to load that DLL, so either you load it with a scripting language (Python with ctypes comes to mind) or you create an executable.
Calling that from a post-compile step in Visual Studio, as suggested by shivakumar, is probably the only way to get the results into the output window. I personally prefer running this from an external build script, but I'm also cross-compiling a lot and can't run things from a post-compile step there. This also makes it easier to debug the unit tests when something fails.
You have to wait for compilation to complete so that there are no compilation errors in the code.
In the post-build event you can add batch files which will run your unit test modules and validate the binaries generated after compilation.
You are asking for a thing that does not make sense. When you say "compiling" that means a very specific thing: invoking the compiler, before invoking the linker. But C++ code (and C++ unit tests) do not work like that. The compiler must finish compiling both your production code and your tests, and the object files must then be linked into libraries, executables, or both. A test framework must then execute the test code which calls your production code in order to get results. None of these steps are optional in C++.
Instead, you probably intended to ask if you could run the unit tests as part of the build (not compile). And the answer to that is an emphatic "yes!"
I'm guessing that your solution is likely structured into 63 or more individual DLL projects. For each production DLL you are going to test, such as Foo.DLL, I recommend you add a new FooTest project, with the unit test code added to the FooTest project. In FooTest, create a project dependency upon the Foo project, which will force FooTest to build after building Foo. In the FooTest project you would have two kinds of code modules: classes containing your unit tests, and a FooTest.cpp that would house the main() entrypoint of the FooTest.EXE program, invoking the testing framework, and outputting the results to the console.
Create your FooTest.cpp so that it's a console program. If you format your test executable's output so that it matches the output of the Visual Studio compiler, as in "filename.cpp(lineNo) : error: description of failure", Visual Studio will automatically navigate to the file and line if you click on it. Unit test frameworks such as CppUnit may already have a "CompilerOutputter" class that will properly format the output to match your compiler's errors.
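As a rough illustration of that idea, a minimal FooTest.cpp might look like the sketch below. The CHECK macro and the Add function are stand-ins invented for the example; a real project would normally use Boost.Test or CppUnit (with its CompilerOutputter) rather than a hand-rolled macro, but the compiler-style message format is the important part.

    #include <cstdio>

    static int g_failures = 0;

    // Report failures as "file(line) : error: message" so the Visual Studio
    // output window turns each failure into a clickable location.
    #define CHECK(cond)                                                   \
        do {                                                              \
            if (!(cond)) {                                                \
                std::printf("%s(%d) : error: check failed: %s\n",         \
                            __FILE__, __LINE__, #cond);                   \
                ++g_failures;                                             \
            }                                                             \
        } while (0)

    // Stand-in for a production function that would normally be linked in
    // from the Foo project's object files.
    int Add(int a, int b) { return a + b; }

    static void TestAdd()
    {
        CHECK(Add(2, 2) == 4);
        CHECK(Add(-1, 1) == 0);
    }

    int main()
    {
        TestAdd();
        if (g_failures == 0)
            std::printf("FooTest: all checks passed\n");
        return g_failures == 0 ? 0 : 1;  // non-zero exit fails the post-build step
    }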
In your FooTest project, you also need to set the input to the FooTest linker so that it can link in the production code you are trying to test. In the properties of the FooTest project, go to the Linker/Input tab and add the path to your Foo project's OBJ files to the Additional Dependencies. The line I use looks like this: $(SolutionDir)Foo\Debug\obj\*.obj
In the Build Events properties of the FooTest project, invoke your new FooTest.EXE as a post-build step. Then, every time you click build, your code will be built and your unit tests will be executed. The project dependency will ensure that if you change your Foo code, you will compile, link, and execute the FooTest tests. And the console output ensures that your test results will appear as clickable output in your IDE.
You could create 63 separate unit test executables, or you could create one all-encompassing unit test executable. That's entirely your choice. If you are looking to make the builds and links happen quicker, you will probably want to have the separate executables; even though it's a bit more individual configuration work, you do it only once, and after that you retain the benefits of quick builds for small changes.
Now you're ready to do some serious coding.

unit test build files

What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are also being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
Have your build file compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
In my projects, build files don't change very often. Moreover, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, unit-testing the build files is unnecessary. That can be different in other projects.

GCOV for multi-threaded apps

Is it possible to use gcov for coverage testing of multi-threaded applications?
I've set up some trivial tests of our code base, but it would be nice to have some idea of the coverage we're achieving. If gcov isn't appropriate, can anyone recommend an alternative tool (possibly oprofile), ideally with some good documentation on getting started?
We've certainly used gcov to get coverage information on our multi-threaded application.
You want to compile with gcc 4.3, which can do coverage on dynamic code.
You compile with the -fprofile-arcs -ftest-coverage options, and the code will generate .gcda files which gcov can then process.
We do a separate build of our product, and collect coverage on that, running unit tests and regression tests.
Finally we use lcov to generate HTML results pages.
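To make that workflow concrete, a tiny multi-threaded program like the sketch below can be instrumented and run; the file name, thread count, and exact commands in the comments are assumptions about a typical setup, but they show that counts from all threads end up merged into the same .gcda data.

    // threads.cpp -- built and measured roughly as follows:
    //   g++ -fprofile-arcs -ftest-coverage -pthread -o threads threads.cpp
    //   ./threads                      # writes threads.gcda alongside threads.gcno
    //   lcov --capture --directory . --output-file coverage.info
    //   genhtml coverage.info --output-directory coverage-html
    #include <pthread.h>
    #include <cstdio>

    // Every thread executes this function; gcov merges the execution counts
    // from all threads into the same coverage data when the program exits.
    void* Worker(void* arg)
    {
        int id = *static_cast<int*>(arg);
        std::printf("worker %d done\n", id);
        return 0;
    }

    int main()
    {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; ++i) {
            ids[i] = i;
            pthread_create(&threads[i], 0, Worker, &ids[i]);
        }
        for (int i = 0; i < 4; ++i)
            pthread_join(threads[i], 0);
        return 0;
    }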
Gcov works fine for multi-threaded apps. The instrumentation architecture is properly serialized so you will get coverage data of good fidelity.
I would suggest using gcov in conjunction with lcov. This will give you great reports scoped from full project down to individual source files.
lcov also gives you a nicely color coded HTML version of your source so you can quickly evaluate your coverage lapses.
I have not used gcov for multi-threaded coverage work. However, on MacOS the Shark tool from Apple handles multiple threads. It's primarily a profiler, but can do coverage info too.
http://developer.apple.com/tools/sharkoptimize.html