Is it possible to use gcov for coverage testing of multi-threaded applications?
I've set up some trivial tests of our code-base, but it would be nice to have some idea of the coverage we're achieving. If gcov isn't appropriate, can anyone recommend an alternative tool (possibly oprofile), ideally with some good documentation on getting started.
We've certainly used gcov to get coverage information on our multi-threaded application.
You want to compile with gcc 4.3, which can do coverage on dynamic code.
You compile with the -fprofile-arcs -ftest-coverage options, and the code will generate .gcda files, which gcov can then process.
We do a separate build of our product, and collect coverage on that, running unit tests and regression tests.
Finally we use lcov to generate HTML results pages.
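To make the steps concrete, here is a minimal sketch of that flow; the file names are just placeholders:

$ g++ -fprofile-arcs -ftest-coverage -c foo.cpp
$ g++ -fprofile-arcs -ftest-coverage -c my_tests.cpp
$ g++ -fprofile-arcs -ftest-coverage -o my_tests my_tests.o foo.o
$ ./my_tests          # the run writes foo.gcda and my_tests.gcda next to the object files
$ gcov foo.cpp        # produces an annotated foo.cpp.gcov report with per-line execution counts

(g++ also accepts --coverage as shorthand for the two instrumentation flags.)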
Gcov works fine for multi-threaded apps. The instrumentation architecture is properly serialized so you will get coverage data of good fidelity.
I would suggest using gcov in conjunction with lcov. This will give you great reports scoped from the full project down to individual source files.
lcov also gives you a nicely color-coded HTML version of your source, so you can quickly spot your coverage gaps.
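For example, after running the instrumented tests, generating the HTML report might look like this (directory names are placeholders):

$ lcov --capture --directory . --output-file coverage.info
$ genhtml coverage.info --output-directory coverage-html

The report can then be browsed starting from coverage-html/index.html.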
I have not used gcov for multi-threaded coverage work. However, on MacOS the Shark tool from Apple handles multiple threads. It's primarily a profiler, but can do coverage info too.
http://developer.apple.com/tools/sharkoptimize.html
I'm using meson and ninja as the build system in my C++ project, and I've configured catch2 as the testing framework.
I was wondering how to perform code coverage with the tests that I've written.
I read this page, https://mesonbuild.com/Unit-tests.html, but it seems pretty unclear to me. Can anybody help?
You should use one of the coverage-related targets: coverage-text, coverage-html, or coverage-xml, as described here. Or just coverage, which tries all of these if possible:
$ ninja coverage -C builddir
Results are written to the ./builddir/meson-logs directory.
Note that to produce HTML coverage reports you need the lcov and genhtml binaries, which are installed by the lcov package.
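Putting it together, a typical sequence might look like this (builddir is just a placeholder; note that coverage has to be enabled when the build directory is configured):

$ meson setup builddir -Db_coverage=true
$ meson test -C builddir
$ ninja coverage-html -C builddir

The HTML report then ends up under builddir/meson-logs/.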
I have to do unit test coverage analysis on a large project with hundreds of C++ sources. It can only be built on Linux with an .sh script, it doesn't compile the sources with gcov in mind, and it has multiple makefiles. I know this isn't very much info, but what would be a good approach to this?
I need to do unit testing for drivers on an ARM-based board with the help of the gcov tool. When gcov is used on an x86 architecture, it creates .gcda files after the program is executed. But on an ARM-based board, the .gcda files are not getting created, so without them I can't use gcov. My question is: how do I use gcov when cross compiling? Thanks in advance.
gcov's code and data structures are tied to the host filesystem, and cross-compiler toolchains do not provide a port or configuration option to change this behavior. If your object file is ~/my-project/abc.o, then the gcov in-memory data structures created and updated by the instrumented code point to ~/my-project/abc.gcda, and all of these paths are on your host machine. The instrumented code running on the remote system (in your case the ARM board) cannot access these paths, which is the main reason you don't see the .gcda files in the ARM case.
For a general method of getting hold of the .gcda files and working around this issue, see https://mcuoneclipse.com/2014/12/26/code-coverage-for-embedded-target-with-eclipse-gcc-and-gcov/. This article presents a hacky method of breaking into gcov functions and manually dumping the gcov data structures into .gcda files on the host.
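Incidentally, if your target runs an OS with a writable filesystem (rather than bare metal), it may be worth trying the GCOV_PREFIX and GCOV_PREFIX_STRIP environment variables before resorting to the dump approach above; they tell the instrumented program to write its .gcda files under a different root at run time. A rough sketch, with the paths and the strip depth as placeholders for your setup:

# On the ARM board, before running the instrumented binary:
$ export GCOV_PREFIX=/tmp/gcov
$ export GCOV_PREFIX_STRIP=3     # drop the first 3 components of the host build path
$ ./my_instrumented_app
# Then copy the .gcda files from /tmp/gcov back to the host, next to the
# matching .gcno/.o files, and run gcov or lcov on the host as usual.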
I used the above-mentioned blog post to do code coverage for my ARM project. However, I faced another issue: a gcc bug in my version of the toolchain (the GNU ARM toolchain version available in October/November 2016) meant I could not break into the gcov functions and complete the process described in that post, because the relevant gcov functions hang in an infinite loop. You may or may not face this issue, as I am not sure whether the bug has been fixed. In case you do, a solution is available in my blog post https://technfoblog.wordpress.com/2016/11/05/code-coverage-using-eclipse-gnu-arm-toolchain-and-gcov-for-embedded-systems/.
I am using VC++ 2005 and 2008 on a project. Now I want to see whether the unit test cases cover all the code, and I found a problem. We use Boost.Test for unit testing, and each file is designed to test a particular function or method. Each file is compiled into a separate executable.
I am able to view the results per executable in Visual Studio. What I am really interested in is the overall code coverage across all the tests combined. Is there a way to combine the code coverage results?
I don't know about Visual Studio's test coverage tools.
Our SD C++ Test Coverage Tool will combine test coverage vectors from a single instrumented set of source code, no matter how many times you compile/link it (as long as you don't change the source of the code being tested). This tool can be obtained for the Visual Studio dialect(s) of C++. SD's test coverage tools for other languages have this same property.
C++ Coverage Validator can combine results from different code coverage sessions. You can combine sessions interactively using the GUI or from the command line (so you can automate things).
Alternatively, you could set up automatic merging to a central session, so that every code coverage session is merged into it automatically.
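As an aside, for readers on the gcc/gcov toolchain discussed in the other questions rather than Visual Studio, the same kind of merging can be done with lcov; a rough sketch, with directory and file names as placeholders:

$ lcov --capture --directory build-testA --output-file testA.info
$ lcov --capture --directory build-testB --output-file testB.info
$ lcov --add-tracefile testA.info --add-tracefile testB.info --output-file combined.info
$ genhtml combined.info --output-directory coverage-html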
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know, I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc. (a small sketch of such a test follows this list).
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
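As promised above, here is a rough sketch of what one of those shunit2 script tests can look like; the script name and helper functions are hypothetical, not our actual code:

#!/bin/sh
# test_build_helpers.sh -- hypothetical shunit2 tests for helper functions
# used by the nightly build scripts.

. ./build_helpers.sh    # script under test (hypothetical)

testExtractVersionFromTarball() {
  result=$(extract_version "myproduct-1.2.3.tar.gz")
  assertEquals "1.2.3" "$result"
}

testMissingMakefileIsReported() {
  check_makefile_exists /nonexistent/dir
  rc=$?
  assertNotEquals "a missing Makefile should be reported as an error" 0 "$rc"
}

# Load and run shunit2 (the path depends on how it is installed).
. shunit2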
Have your build file compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
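A minimal sketch of what such a check could look like, with all paths and build commands as placeholders:

#!/bin/sh
# Hypothetical smoke test for the build tooling: build a small, frozen
# reference tree with the new build tools and compare what comes out
# against a listing recorded once with a validated version of the tools.
set -e

REF_SRC=ref-project              # tiny reference source tree (placeholder)
GOLDEN_LIST=golden-files.txt     # artifact list recorded with validated build tools

rm -rf build-under-test
mkdir build-under-test
( cd build-under-test && ../$REF_SRC/configure && make )    # placeholder build steps

# Compare the set of produced artifacts against the golden list.
# (Comparing checksums would be stricter, but only works if the build is reproducible.)
( cd build-under-test && find . -type f | sort ) > produced-files.txt
diff "$GOLDEN_LIST" produced-files.txt
echo "Build output matches the golden reference."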
In my projects, build files don't change very often. Moreover, I can reuse build files from earlier projects, only changing some variables (which I moved to an easy-to-recognize section). That's why, for me, it is unnecessary to unit-test the build files. That can be different in other projects.