Using gcov with a large project built by a bash script - c++

I have to do unit test coverage analysis on a large project with hundreds of C++ source files. It can be built only on Linux via an .sh script, the sources are not compiled with gcov in mind, and there are multiple makefiles. I know this isn't much information, but what would be a good approach to this?

Related

How to perform code coverage in C++ using meson?

I'm using meson and ninja as the build system in my C++ project, and I've configured Catch2 as the testing framework.
I was wondering how to perform code coverage with the tests that I've written.
I read this page, https://mesonbuild.com/Unit-tests.html, but it seems pretty unclear to me. Can anybody help?
You should use one of the coverage-related targets: coverage-text, coverage-html, or coverage-xml, as described here. Or just coverage, which tries all of these if possible:
$ ninja coverage -C builddir
Results are written to the ./builddir/meson-logs directory.
Note that to produce HTML coverage reports you need the lcov and genhtml binaries, which are installed by the lcov package.
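A minimal end-to-end sketch, assuming your Catch2 tests are registered with meson's test() (the b_coverage build option is what enables the instrumentation):
$ meson setup builddir -Db_coverage=true
$ meson test -C builddir              # run the tests to produce profile data
$ ninja coverage-html -C builddir     # report lands in builddir/meson-logs/coveragereport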

How to manage growing C++ project

I am wondering how I should manage a growing C++ project. Right now I am developing the project with NetBeans, and generating makefiles with it is dirty work. The project has become too big, and I have decided to split it into a few parts. What is the best way of doing this?
I am trying to use SCons as my build system. I have had some success with it, but must I edit the build scripts every time I add or delete a file? That's too tedious.
So I need your advice.
P.S. By the way, how does a large project like Google Chrome manage this? Does everybody use some kind of IDE, with build scripts generated only for software distribution?
I also use NetBeans for C++ and compile with SCons. I use the jVi NetBeans plugin, which works really well.
For some reason the NetBeans Python plugin is no longer official, which I don't understand at all. You can still get it, though, and it really makes editing the SCons build scripts a nice experience. Even though NetBeans doesn't have an SCons plugin (yet?), you can still configure its build command to execute SCons.
As for having the IDE maintain the SCons scripts automatically, I don't do that either; I do it by hand. But it's not like I have to deal with this on a daily basis, so I don't see it as that important, especially considering how easy the scripts are to read.
Here's a SCons build script that does the same as the CMake example shown below:
env = Environment()
env.EnsurePythonVersion(2, 5)
env.EnsureSConsVersion(2, 1)
libTarget = env.SharedLibrary(target = 'foo', source = ['a.cpp', 'b.cpp', 'c.cpp'])
env.Program(target = 'bar', source = ['bar.cpp', libTarget])
The SCons Glob() function is a nice option, but I tend to shy away from automatically building all the files in a directory. The same goes for listing sub-directories to be built. I've been burned enough times by this, and prefer explicitly specifying the files/dirs to be built.
In case you hear rumors that SCons is slower than the alternatives, the SCons GoFastButton wiki page has some pointers that can help out.
Most large projects stick with a build system that automatically handles all the messy details for them. I'm a huge fan of CMake (which is what KDE uses for all their components) but scons is another popular choice. My editor (KDevelop) supposedly handles CMake projects itself, but I still edit the build scripts myself because it's not that hard.
I'd recommend learning one tool really well and sticking with it (plenty of documentation is available for any tool you'll be interested in). Be sure you also look into version control if you haven't already (I have a soft spot for git, but Mercurial and Subversion are also very popular choices).
A simple CMake example:
project("My Awesome Project" CXX)
cmake_minimum_required(VERSION 2.8)
add_library(foo SHARED a.cpp b.cpp c.cpp) #we'll build an so file
add_executable(bar bar.cpp)
target_link_libraries(bar foo) #link bar to foo
This is obviously a trivial case, but it's very easy to manage and expand as needed.
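To configure and build out of source with that file (standard CMake usage; nothing project-specific is assumed):
$ mkdir build && cd build
$ cmake ..
$ make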
Depending on how your files are organized, you can use, for example, SCons' Glob() function to get the source files as a list without having to name each file individually. For example, to build all C++ source files into an executable, you can do:
Program('program', Glob('*.cpp'))
You can do much the same in CMake with its file(GLOB ...) command.
And since SCons scripts are Python, you can write arbitrary Python code to build your source-file lists.
You can also organize the files into multiple folders and have subsidiary SConscript (or CMakeLists.txt) build files that the master build script calls.

standalone tool for generating makefile(s) from Eclipse's .cproject file?

Is there a standalone tool, runnable from a shell script, that can generate a makefile from the .cproject? Essentially the same functionality as the CDT itself, but non-interactive.
As is probably obvious, I want to be able to run a script that checks out and builds the software, which comprises several C++ projects. I am trying to avoid moving to a build system like Maven, as it seems like overhead at this early stage of our project. Thanks!
I know there were discussions on the cdt-dev mailing list a few months back about having a command-line tool for building CDT projects. Writing such a tool is really not very difficult (an example was mentioned); it is simply a matter of defining your own Eclipse application, loading the project, and building it. Searching the cdt-dev list for "standalone build" should give you some relevant hits.
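For what it's worth, later CDT versions ship exactly such a tool: the headless builder application. A sketch of invoking it from a script (the workspace and project paths are placeholders):
$ eclipse -nosplash \
    -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
    -data /path/to/workspace \
    -import /path/to/project \
    -build MyProject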

unit test build files

What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option; they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make, that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc. (see the sketch after this list).
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
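As a minimal sketch of what such a script test can look like with shunit2 (the script and artifact names here are hypothetical):
# test_build.sh -- verify the nightly build script produces the expected artifact
testNightlyBuildProducesLibrary() {
  ./nightly_build.sh libfoo
  assertTrue "libfoo.so was not produced" "[ -f output/libfoo.so ]"
}
. ./shunit2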
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
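A minimal sketch of that comparison, assuming a build.sh wrapper and a shared-library artifact (all names are hypothetical):
#!/bin/sh
# Build the known-good tree with both toolchains and compare exported symbols.
./build.sh --toolchain=validated --out=out-ref
./build.sh --toolchain=candidate --out=out-new
nm --defined-only out-ref/libfoo.so | sort > ref.syms
nm --defined-only out-new/libfoo.so | sort > new.syms
diff -u ref.syms new.syms && echo "OK: both toolchains produce equivalent output"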
In my projects, build files don't change very often. Even better, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, unit-testing the build files is unnecessary. That may be different in other projects.

GCOV for multi-threaded apps

Is it possible to use gcov for coverage testing of multi-threaded applications?
I've set up some trivial tests of our code base, but it would be nice to have some idea of the coverage we're achieving. If gcov isn't appropriate, can anyone recommend an alternative tool (possibly oprofile), ideally with some good documentation on getting started?
We've certainly used gcov to get coverage information on our multi-threaded application.
You want to compile with gcc 4.3 or later, which can do coverage on dynamic code.
You compile with the -fprofile-arcs -ftest-coverage options (or the --coverage shorthand), and running the code will generate .gcda files, which gcov can then process.
We do a separate build of our product, and collect coverage on that, running unit tests and regression tests.
Finally we use lcov to generate HTML results pages.
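A minimal sketch of that workflow (the file names are illustrative):
$ g++ --coverage -O0 -o tests foo.cpp test_main.cpp   # --coverage implies -fprofile-arcs -ftest-coverage
$ ./tests                                             # .gcda counter files are written when the program exits
$ lcov --capture --directory . --output-file coverage.info
$ genhtml coverage.info --output-directory coverage-html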
Gcov works fine for multi-threaded apps. The instrumentation architecture is properly serialized so you will get coverage data of good fidelity.
I would suggest using gcov in conjunction with lcov. This will give you great reports scoped from full project down to individual source files.
lcov also gives you a nicely color coded HTML version of your source so you can quickly evaluate your coverage lapses.
I have not used gcov for multi-threaded coverage work. However, on MacOS the Shark tool from Apple handles multiple threads. It's primarily a profiler, but can do coverage info too.
http://developer.apple.com/tools/sharkoptimize.html