cmocka malloc testing OOM and gcov - unit-testing

I'm having a hard time finding an answer to a niche case: using cmocka, testing malloc for failure (simulating it), and using gcov
Update about cmocka+gcov: I noticed that I get empty .gcda files as soon as I mock a function in my cmocka tests. Why? Googling cmocka and gcov turns up results where people talk about using the two together. It seems most people are using CMake, something I will look at later, but there should be no reason (that I can think of) that would require me to use CMake. Why can't I just use cmocka with the --coverage/-lgcov flags?
Original question:
I've tried a myriad of combinations, mostly based on two main ideas:
I tried using -Wl,--wrap=malloc so calls to malloc are wrapped. From my cmocka tests I attempted to use will_return(__wrap_malloc, (void*)NULL) to simulate a malloc failure. In my wrap function I use mock() to determine whether I should return __real_malloc() or NULL. This has the ideal effect; however, I found that gcov then fails to create .gcda files, which defeats part of the reason for wrapping malloc in the first place: I want to test malloc failing AND get code coverage results. I suspect I've played dirty games with symbols and broken malloc() calls made from other compilation units (gcov? cmocka?).
Another way I tried was to use gcc -include with a #define that renames malloc to "my malloc", and to compile the target code under test together with mymalloc.c (which defines "my malloc"). A #define malloc _mymalloc lets me call the "special malloc" only from the targeted test code, leaving malloc alone anywhere else it is called (i.e., the other compilation units keep calling the real malloc). However, I don't know how to use will_return() and mock() correctly to distinguish failure cases from success cases. If I am testing malloc() failing, I get what I want: I return NULL from "malloc" when mock() returns NULL, all inside a wrapper for malloc that is only called from the targeted code. However, if I want to return the result of the real malloc, cmocka fails the test because I didn't return the value from mock(). I wish cmocka could simply dequeue the result from the mock() macro and then not care that I didn't return it, since I need real results from malloc() so the code under test can function correctly.
I feel it should be possible to combine malloc testing, with cmocka and get gcov results.
Whatever the answer is, I'd like to pull off the following or something similar:
int business_code()
{
    void* d = malloc(somethingCalculated);
    void* e = malloc(somethingElse);
    if (!d) return someRecovery();
    if (!e) return someOtherRecovery();
    return 0;
}
then have cmocka tests like
cmocka_d_fail()
{
    will_return(malloc, NULL);
    int ret = business_code();
    assert_int_equal(ret, ERROR_CODE_D);
}
cmocka_e_fail()
{
    will_return(malloc, __LINE__); // some way to tell the wrapped malloc to give me real memory, because the code under test needs it
    will_return(malloc, NULL);     // I want the "d" malloc to succeed but the "e" malloc to fail
    int ret = business_code();
    assert_int_equal(ret, ERROR_CODE_E);
}
I get close with some of the #define/wrap ideas I tried, but in the end I either mess up malloc and cause gcov not to emit my coverage data, or I have no way to make cmocka run malloc cases that return real memory, i.e., that do not return the value from mock() calls. I could call the real malloc from my test driver and pass the result to will_return(), but my test code doesn't know the size of the memory needed; only the code under test knows that.
Given time constraints I don't want to move away from cmocka and my current test infrastructure. I'd consider other ideas in the future, though, if what I want isn't possible. I know that what I'm looking for isn't new, but I'm trying to find a cmocka/gcov solution.
Thanks

This all comes down to which symbols I was messing with, whether via -Wl,--wrap or clever #defines. In either case I was either clobbering the symbol for other call sites and breaking code, or confusing cmocka by not dequeuing the queued-up returns.
Also, the reason my .gcda files were not being generated correctly was my attempt to combine -Wl,--wrap=fseek with cmocka's mock(): most likely the gcov runtime itself calls fseek when writing the .gcda files, so wrapping fseek globally intercepted gcov's own calls too.
A clever #define on fseek/malloc/etc., combined with mock() on a symbol that is only called from your wrapper implementation, can in short query the test suite to see whether you should return something bogus to force the failure path or return the real result. A bit hacky, but it does the trick.

This workaround works for me: wrap _test_malloc() instead of malloc().
A working example can be found at https://github.com/CESNET/Nemea-Framework/blob/2ef806a0297eddc920dc7ae71731dfb2c0e49a5b/libtrap. tests/test_trap_buffer.c contains an implementation of the wrap function __wrap__test_malloc() (note the 4x '_' in the name):
void *__real__test_malloc(const size_t size, const char* file, const int line);
void *__wrap__test_malloc(size_t size)
{
    int fail = (int) mock();
    if (fail) {
        return NULL;
    } else {
        return __real__test_malloc(size, __FILE__, __LINE__);
    }
}
and e.g. test_create_destroy() to test the tb_init() function, which uses malloc() three times:
static void test_create_destroy(void **state)
{
    trap_buffer_t *b = NULL;
    (void) state; /* unused */
    b = tb_init(0, 0);
    assert_null(b);
    b = tb_init(0, 1);
    assert_null(b);
    b = tb_init(1, 0);
    assert_null(b);
    will_return(__wrap__test_malloc, 0);
    will_return(__wrap__test_malloc, 0);
    will_return(__wrap__test_malloc, 0);
    b = tb_init(10, 100000);
    assert_non_null(b);
    tb_destroy(&b);
    tb_destroy(&b);
    tb_destroy(NULL);
}
For the completeness, tb_init() is in src/trap_buffer.c line 146.
Compilation can be run like this (sample from the Makefile; -lcmocka goes after the object files, which is the safe link order):
buffer:
	gcc --coverage -g -O0 -DUNIT_TESTING -c tests/test_trap_buffer.c
	gcc --coverage -g -O0 -DUNIT_TESTING -c src/trap_buffer.c
	gcc -g -O0 -Wl,--wrap=_test_malloc --coverage -DUNIT_TESTING -o test_buffer test_trap_buffer.o trap_buffer.o -lcmocka
Note the UNIT_TESTING preprocessor macro defined for cmocka; this is important, since it is what lets cmocka take over the allocation functions in the code under test.
Finally, running the test generates *.gcda files for us, so we can visualize the code coverage. Output for the tested tb_init(): https://codecov.io/gh/CESNET/Nemea-Framework/src/775cfd34c9e74574741bc6a0a2b509ae6474dbdb/libtrap/src/trap_buffer.c#L146

Related

C++ function instrumentation via clang++'s -finstrument-functions : how to ignore internal std library calls?

Let's say I have a function like:
template<typename It, typename Cmp>
void mysort(It begin, It end, Cmp cmp)
{
    std::sort(begin, end, cmp);
}
When I compile this using -finstrument-functions-after-inlining with clang++ --version:
clang version 11.0.0 (...)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: ...
The instrumented code explodes the execution time, because my entry and exit functions are called for every call of
void std::__introsort_loop<...>(...)
void std::__move_median_to_first<...>(...)
I'm sorting a really big array, so my program doesn't finish: without instrumentation it takes around 10 seconds; with instrumentation I cancelled it after 10 minutes.
I've tried adding __attribute__((no_instrument_function)) to mysort (and the function that calls mysort), but this doesn't seem to have an effect as far as these standard library calls are concerned.
Does anyone know if it is possible to ignore function instrumentation for the internals of a standard library function like std::sort? Ideally, I would only have mysort instrumented, so a single entry and a single exit!
I see that clang++ sadly does not yet support anything like -finstrument-functions-exclude-function-list or -finstrument-functions-exclude-file-list, but g++ does not yet support -finstrument-functions-after-inlining, which I would ideally have, so I'm stuck!
EDIT: After playing with it more, it appears the effect on execution time is actually smaller than described, so this isn't the end of the world. The problem still remains, however, because most people doing function instrumentation in clang will only care about the application code, not the functions linked in from (for example) the standard library.
EDIT2: To further highlight the problem now that I've got it running in a reasonable time frame: the trace produced from the instrumented code with those two standard library functions included is 15 GB. When I hard-code my tracing to ignore the two function addresses, the resulting trace is 3.7 MB!
I've run into the same problem. It looks like support for these flags was once proposed, but never merged into the main branch.
https://reviews.llvm.org/D37622
This is not a direct answer, since the tool doesn't support what you want to do, but I think I have a decent workaround. What I wound up doing was creating a "skip list" of sorts. In the profiling hooks (__cyg_profile_func_enter and __cyg_profile_func_exit), I would guess the part contributing most to your execution time is the printing. If you can come up with a way of short-circuiting the profile functions, that should help, even if it's not ideal. At the very least it will limit the size of the output file.
Something like
#include <stdint.h>
#include <stddef.h>

uintptr_t skipAddrs[] = {
    // assuming 64-bit addresses
    0x123456789abcdef, 0x2468ace2468ace24
};
size_t arrSize = 0;

int main(void)
{
    ...
    arrSize = sizeof(skipAddrs) / sizeof(skipAddrs[0]);
    // https://stackoverflow.com/a/37539/12940429
    ...
}

void __cyg_profile_func_enter(void *this_fn, void *call_site) {
    for (size_t idx = 0; idx < arrSize; idx++) {
        if ((uintptr_t) this_fn == skipAddrs[idx]) {
            return; /* short-circuit: skip tracing for this address */
        }
    }
    /* ... the usual (expensive) tracing/printing goes here ... */
}
I use something like objdump -t binaryFile to examine the symbol table and find what the addresses are for each function.
If you specifically want to ignore library calls, something that might work is examining the symbol table of your object file(s) before linking against libraries, then ignoring all the ones that appear new in the final binary.
All this should be possible with things like grep, awk, or python.
You have to add the attribute __attribute__((no_instrument_function)) to the functions that should not be instrumented. Unfortunately it is not easy to make this work for C/C++ standard library functions, because it would require editing all of the library's function declarations.
There are some hacks you can do, like redefining existing macros from include/__config to add this attribute as well, e.g.:
-D_LIBCPP_INLINE_VISIBILITY=__attribute__((no_instrument_function,internal_linkage))
Make sure to append no_instrument_function to the existing macro definition, to avoid unexpected errors.

Remove auto generated exception code from coverage report

Let's start with a minimal working example:
main.cpp:
#include <iostream>
#include <string>
int main() {
std::cout << "hello " + std::to_string(42);
return 0;
}
I compile this code using the following flags:
[g++/clang++] -std=c++11 -g -Og --coverage -Wall -o main main.cpp
clang 4.0.1
gcc 4.8.5.
I get only 50% code coverage, since the compiler generates exception code, which is not executed, as explained in another stackoverflow question.
The problem is that disabling exceptions via -fno-exceptions is not an option for me. The code I am writing unit tests for uses exceptions, so disabling all of them is not an option.
In order to generate a report I'm using gcovr, and in the case of clang++ additionally llvm-cov gcov to convert the output. But I am not bound to these tools, so if you have other tools that do not show this behaviour, please suggest them!
Basically I need a way to compile/write unit tests for this code and get 100% branch / conditional coverage with exceptions enabled. Is there a way?
Well, I believe your intention is not actually to test this small piece of code, but to use the concept in a larger project...
The code you entered can throw an exception: bad_alloc is thrown when there is no memory left to store the string that will be created by std::to_string. To be 100% safe, std::to_string would have to be surrounded by a try-catch, where you could handle the exception.
To build a unit test with 100% code coverage, you will need to force the exception to happen. In this specific case that is almost impossible to guarantee, since the parameter is a constant number. But in your project you probably have some data to be allocated whose size is variable; in that case, you can isolate the methods that allocate memory and test them separately. Then, in the test function, you pass these methods a huge amount to be allocated, to exercise what you have put in your catch block (and check that you are handling it properly).
For instance, this code should throw the exception; you could use it as inspiration when building your tests (source):
// bad_alloc.cpp
// compile with: /EHsc
#include <new>
#include <iostream>
using namespace std;

int main() {
    char* ptr;
    try {
        ptr = new char[(~unsigned int((int)0)/2) - 1];
        delete[] ptr;
    }
    catch (bad_alloc &ba) {
        cout << ba.what() << endl;
    }
}
However, if you are not planning to handle all bad_alloc exceptions (or absolutely all exceptions) in your code, there is no way to get 100% coverage, since the code simply won't be 100% covered... In most cases, though, true 100% coverage is unnecessary.

Patch C/C++ function to just return without execution

I want to prevent one system function from executing in a large project. It is impossible to redefine it or add some #ifdef logic. So I want to patch the code to be just a ret operation.
The functions are:
void __cdecl _wassert(const wchar_t *, const wchar_t *, unsigned);
and:
void __dj_assert(const char *, const char *, int, const char *) __attribute__((__noreturn__));
So I need to patch the first one on Visual C++ compiler, and the second one on GCC compiler.
Can I just write the ret instruction directly at the address of the _wassert/__dj_assert function, for x86/x64?
UPDATE:
I just want to modify the function body like this:
*_wassert = `ret`;
Or maybe copy another function's body over it, like this:
void __cdecl _wassert_empty(const wchar_t *, const wchar_t *, unsigned)
{
}

for (int i = 0; i < sizeof(void*); i++) {
    ((char*)_wassert)[i] = ((char*)_wassert_empty)[i];
}
UPDATE 2:
UPDATE 2: I really don't understand why there are so many objections against silencing asserts. In fact, there are no asserts in RELEASE mode, and nobody cares. I just want to be able to turn asserts on/off in DEBUG mode.
You need to understand the calling conventions for your particular processor ISA and system ABI. See this for x86 & x86-64 calling conventions.
Some calling conventions require more than a single ret machine instruction in the epilogue, and you have to account for that. BTW, the code of a function usually resides in a read-only code segment, and you'll need some dirty tricks to patch it and write into it.
You could compile a no-op function of the same signature, and ask the compiler to show the emitted assembler code (e.g. with gcc -O -Wall -fverbose-asm -S if using GCC....)
On Linux you might use dynamic linker LD_PRELOAD tricks. If using a recent GCC you might perhaps consider customizing it with MELT, but I don't think it is worthwhile in your particular case...
However, you apparently have some assert failure. It is very unlikely that your program could continue without any undefined behavior. So practically speaking, your program will very likely crash elsewhere with your proposed "fix", and you'll lose more of your time with it.
Better take enough time to correct the original bug, and improve your development process. Your way is postponing a critical bug correction, and you are extremely likely to spend more time avoiding that bug fix than dealing with it properly (and finding it now, not later) as you should. Avoid increasing your technical debt and making your code base even more buggy and rotten.
My feeling is that you are going nowhere (except toward a big failure) with your approach of patching the binary to avoid assert-s. You should find out why they are violated and improve the code (either remove the obsolete assert, or improve it, or correct the bug elsewhere that the assert has detected).
On GNU/Linux you can use the --wrap option like this:
gcc source.c -Wl,--wrap,functionToPatch -o prog
and your source must add the wrapper function:
void __wrap_functionToPatch() {} // simply returns
Parameters and return values as needed for your function.

Find unimplemented class methods

In my application, I'm dealing with large classes (over 50 methods), each of which is reasonably complex. I'm not worried about the complexity, as they are still straightforward in terms of isolating pieces of functionality into smaller methods and then calling them. This is how the number of methods becomes large (many of these methods are private, specifically to isolate pieces of functionality).
However, when I get to the implementation stage, I find that I lose track of which methods have been implemented and which ones have not. Then at the linking stage I receive errors for the unimplemented methods. This would be fine, but there are a lot of interdependencies between classes, and in order to link the app I would need to get EVERYTHING ready. Yet I would prefer to get one class out of the way before moving on to the next.
For reasons beyond my control, I cannot use an IDE, only a plain text editor and the g++ compiler. Is there any way to find unimplemented methods in one class without doing a full link? Right now I literally do a text search on the method signature in the implementation .cpp file for each of the methods, but this is very time consuming.
You could add a stub for every method you intend to implement, and do:
void SomeClass::someMethod() {
    #error Not implemented
}
With gcc, this outputs file, line number and the error message for each of these. So you could then just compile the module in question and grep for "Not implemented", without requiring a linker run.
Although you then still need to add these stubs to the implementation files, which might be part of what you were trying to circumvent in the first place.
Though I can't see a simple way of doing this without actually attempting to link, you could grep the linker output for "undefined reference to ClassInQuestion::", which should give you only lines related to this error for methods of the given class.
This at least lets you avoid sifting through all error messages from the whole linking process, though it does not prevent having to go through a full linking.
That’s what unit tests and test coverage tools are for: write minimal tests for all functions up-front. Tests for missing functions won’t link. The test coverage report will tell you whether all functions have been visited.
Of course that only helps to some extent; it's not 100% foolproof. Your development methodology sounds slightly dodgy to me, though: developing classes one by one in isolation doesn't work in practice. Classes that depend on each other (and remember: reduce dependencies!) need to be developed in lockstep to some extent. You cannot churn out a complete implementation for one class and move on to the next, never looking back.
In the past I have built an executable for each class:
#include "klass.h"
int main() {
Klass object;
return 0;
}
This reduces build time, lets you focus on one class at a time, and speeds up your feedback loop.
It can be easily automated.
I really would look at reducing the size of that class though!
edit
If there are hurdles, you can go brute force:
#include "klass.h"
Klass createObject() {
return *reinterpret_cast<Klass>(0);
}
int main() {
Klass object = createObject();
return 0;
}
You could write a small script which extracts the method declarations from the header file (regular expressions will make this very straightforward), then scans the implementation file for those same methods.
For example in Ruby (for a C++ compilation unit):
className = "" # Either hard-code or Regex /class \w+/
allMethods = []

# Scan header file for method declarations
File.open(<headerFile>, "r") do |file|
  allLines = file.map { |line| line }
  allLines.each do |line|
    if (line =~ /(\);)$/) # Finds lines ending in ");" (end of method decl.)
      allMethods << line.strip!
    end
  end
end

implementedMethods = []
yetToImplement = []

# Scan implementation file for the same methods
File.open(<implementationFile>, "r") do |file|
  contents = file.read
  allMethods.each do |method|
    if (contents.include?(method)) # Or (className + "::" + method)
      implementedMethods << method
    else
      yetToImplement << method
    end
  end
end

# Print the results
print "Yet to implement:\n"
yetToImplement.each do |method|
  print (method + "\n")
end
print "\nAlready implemented:\n"
implementedMethods.each do |method|
  print (method + "\n")
end
Someone else will be able to tell you how to automate this into the build process, but this is one way to quickly check which methods haven't yet been implemented.
The delete keyword of C++11 does the trick:
struct S {
    void f() = delete; // unimplemented
};
If C++11 is not available, you can use private as a workaround:
struct S {
private: // unimplemented
    void f();
};
With these two methods, you can write some testing code in a .cpp file:
//test_S.cpp
#include "S.hpp"

namespace {
    void test() {
        S* s;
        s->f(); // will trigger a compilation error
    }
}
Note that your testing code will never be executed. The anonymous namespace{} tells the compiler and linker that this code is never used outside the current compilation unit (i.e., test_S.cpp), so it will be dropped right after compilation checking.
Because this code is never executed, you do not actually need to create a real S object in the test function. You just want to trick the compiler into checking whether an S object has a callable f() function.
You can create a custom exception and throw it so that:
Calling an unimplemented function will terminate the application instead of leaving it in an unexpected state
The code can still be compiled, even without the required functions being implemented
You can easily find the unimplemented functions by looking through compiler warnings (by using some possibly nasty tricks), or by searching your project directory
You can optionally remove the exception from release builds, which would cause build errors if there are any functions that try to throw the exception
#if defined(DEBUG)
#include <stdexcept>

#if defined(__GNUC__)
#define DEPRECATED(f, m) f __attribute__((deprecated(m)))
#elif defined(_MSC_VER)
#define DEPRECATED(f, m) __declspec(deprecated(m)) f
#else
#define DEPRECATED(f, m) f
#endif

class not_implemented : public std::logic_error {
public:
    DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
};
#endif // DEBUG
Unimplemented functions would look like this:
void doComplexTask() {
    throw not_implemented();
}
You can look for these unimplemented functions in multiple ways. In GCC, the output for debug builds is:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:27: warning: ‘not_implemented::not_implemented()’ is deprecated:
Unimplemented function [-Wdeprecated-declarations]
throw not_implemented();
^
main.cpp:15:16: note: declared here
DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
^~~~~~~~~~~~~~~
main.cpp:6:26: note: in definition of macro ‘DEPRECATED’
#define DEPRECATED(f, m) f __attribute__((deprecated(m)))
Release builds:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:11: error: ‘not_implemented’ was not declared in this scope
throw not_implemented;
^~~~~~~~~~~~~~~
You can search for the exception with grep:
$ grep -Enr "\bthrow\s+not_implemented\b"
main.cpp:21: throw not_implemented();
The advantage of using grep is that it doesn't care about your build configuration and will find everything regardless. You can also drop the deprecated attribute to clean up your compiler output; the hack above generates a lot of irrelevant noise. Depending on your priorities this might be a disadvantage (for example, you might not care about Windows-specific functions while you're implementing Linux-specific ones, or vice versa).
If you use an IDE, most will let you search your entire project, and some even let you right-click a symbol and find everywhere it is used. (But you said you can't use one so in your case grep is your friend.)
I cannot see an easy way of doing this. Having several classes with no implementation will easily lead to a situation where keeping track of them in a multi-person team is a nightmare.
Personally I would want to unit test each class I write and test driven development is my recommendation. However this involves linking the code each time you want to check the status.
For tools to use TDD refer to link here.
Another option is to write a piece of code that can parse through the source and check for functions that are still to be implemented. GCC_XML is a good starting point.

lcov woes: weird duplicate constructor marked as not covered & function not marked as covered, even though its lines have been executed

On my quest to learn more about automated testing by getting a small C++ test project up and running with 100% coverage, I've run into the following issue: even though all my actual lines of code and all the execution branches are covered by tests, lcov still reports two lines as untested (they only contain function definitions), as well as a "duplicate" constructor method that is supposedly untested, even though it matches my "real" constructor (the only one ever defined and used) perfectly.
(Skip to EDIT for the minimal reproduction case)
If I generate the same coverage statistics (from the same exact source, .gcno & .gcda files) using the gcovr python script and pass the results to the Jenkins Cobertura plugin, it gives me 100% on all counts - lines, conditionals & methods.
Here's what I mean:
The Jenkins Cobertura Coverage page: http://gints.dyndns.info/heap_std_gcovr_jenkins_cobertura.html (everything at a 100%).
The same .gcda files processed using lcov: http://gints.dyndns.info/heap_std_lcov.html (two function definition lines marked as not executed even though lines within those functions are fully covered, as well as functions Hit = functions Total - 1).
The function statistics for that source file from lcov: http://gints.dyndns.info/heap_std_lcov_func (shows two identical constructor definitions, both referring to the same line of code in the file, one of them marked hit 5 times, the other 0 times).
If I look at the intermediate lcov .info file: http://gints.dyndns.info/lcov_coverage_filtered.info.txt I see that there are two constructor definitions there too, both are supposed to be on the same line: FN:8,_ZN4BBOS8Heap_stdC1Ev & FN:8,_ZN4BBOS8Heap_stdC2Ev.
Oh, and don't mind the messiness around the .uic include / destructor, that's just a dirty way of dealing with What is the branch in the destructor reported by gcov? I happened to be trying out when I took those file snapshots.
Anyone have a suggestion on how to resolve this? Is there some "behind-the-scenes" magic the C++ compiler is doing here? (An extra copy of the constructor for special purposes that I should make sure to call from my tests, perhaps?) What about the regular function definition - how can the definition line be untested even though the body has been fully tested? Is this simply an issue with lcov? Any suggestions welcome - I'd like to understand why this is happening and if there's really some functionality that my tests are leaving uncovered and Cobertura is not complaining about ... or if not, how do I make lcov understand that?
EDIT: adding minimal repro scenario below
lcov_repro_one_bad.cpp:
#include <stdexcept>

class Parent {
public:
    Parent() throw() { }
    virtual void * Do_stuff(const unsigned m) throw(std::runtime_error) = 0;
};

class Child : public Parent {
public:
    Child() throw();
    virtual void * Do_stuff(const unsigned m)
        throw(std::runtime_error);
};

Child::Child()
    throw()
    : Parent()
{
}

void * Child::Do_stuff(const unsigned m)
    throw(std::runtime_error)
{
    const int a = m;
    if ( a > 10 ) {
        throw std::runtime_error("oops!");
    }
    return NULL;
}

int main()
{
    Child c;
    c.Do_stuff(5);
    try {
        c.Do_stuff(11);
    }
    catch ( const std::runtime_error & ) { }
    return 0;
}
makefile:
GPP_FLAGS:=-fprofile-arcs -ftest-coverage -pedantic -pedantic-errors -W -Wall -Wextra -Werror -g -O0

all:
	g++ ${GPP_FLAGS} lcov_repro_one_bad.cpp -o lcov_repro_one_bad
	./lcov_repro_one_bad
	lcov --capture --directory ${PWD} --output-file lcov_coverage_all.info --base-directory ${PWD}
	lcov --output-file lcov_coverage_filtered.info --extract lcov_coverage_all.info ${PWD}/*.*
	genhtml --output-directory lcov_coverage_html lcov_coverage_filtered.info --demangle-cpp --sort --legend --highlight
And here's the coverage I get from that: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_bad.cpp.gcov.html
As you can see, the supposedly not-hit lines are the definitions of what exceptions the functions may throw, and the extra not-hit constructor for Child is still there in the functions list (click on functions at the top).
I've tried removing the throw declarations from the function definitions, and that takes care of the un-executed lines at the function declarations: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_v1.cpp.gcov.html (the extra constructor is still there, as you can see).
I've tried moving the function definitions into the class body, instead of defining them later, and that gets rid of the extra constructor: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_v2.cpp.gcov.html (there's still some weirdness around the Do_stuff function definition, though, as you can see).
And then, of course, if I do both of the above, all is well: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_ok.cpp.gcov.html
But I'm still stumped as to what the root cause of this is ... and I still want to have my methods (including the constructor) defined in a separate .cpp file, not in the class body, and I do want my functions to have well defined exceptions they can throw!
Here's the source, in case you feel like playing around with this: http://gints.dyndns.info/lcov_repro_src.zip
Any ideas?
Thanks!
OK, after some hunting around & reading up on C++ exception declarations, I think I understand what's going on:
As far as the un-hit throw declarations are concerned, it seems everything is actually correct here: function throw declarations add extra code to the output object file that checks for exceptions that are illegal (as far as the throw declaration is concerned). Since I was not testing the case of such an exception being thrown, it makes sense that that code was never executed and those statements were marked un-hit. The situation is far from ideal here anyway, but at least one can see where this is coming from.
As far as the duplicate constructors are concerned, this seems to be a known thing with gcc, with a longstanding discussion (and various attempts at patches to resolve the resulting object code duplication): http://gcc.gnu.org/bugzilla/show_bug.cgi?id=3187 - basically, two versions of the constructor are created, one for use when this class is the complete object and one for use when it is a base subobject of a child class, and you need to exercise both if you want 100% coverage.