lcov marking lines with function declarations as reachable but not covered - c++

I'm trying to use lcov (v1.13, on OS X, with clang as the compiler) to generate code coverage for my test suite, and I've hit one annoying problem that I don't know how to solve. There are a few similar questions on SO, but I couldn't find a solution to this one. For some reason, function/member declarations are marked as reachable but not executed, kind of like in the example below (this is an inline method definition in a header):
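Roughly like this (an illustrative sketch, not the original snippet):

// widget.h
class Widget {
public:
    int get() const     // marked as red (reachable / not executed)
    {
        return value_;  // marked as covered
    }
private:
    int value_ = 0;
};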
This renders line coverage metrics useless, so I was hoping there's a way to fix it without marking each declaration with LCOV_EXCL_LINE.
Compiler flags used are pretty standard:
-g -O0 -fno-inline -ftest-coverage -fprofile-arcs -fno-elide-constructors
What's strange is that lines with method definitions in source files are also marked red, even though their bodies are covered, e.g.:
// header.h
class Foo {
    void bar(); // declaration: ignored, marked as unreachable
};

// header.cpp
void Foo::bar() { // marked as red (reachable / not executed)
    do_something(); // marked as covered
}
If it's of any importance, the source files are part of a static library that's statically linked to the test harness in CMake.

Answering my own question:
Apparently, lcov -i (the initial capture) assumes that the starting lines of functions are instrumented, whereas with LLVM they actually are not (with GCC they are). There is an upstream GitHub issue (linux-test-project/lcov#30) documenting this in more detail.
Until this is fixed upstream in lcov, I've posted a simple workaround -- a Python script that removes function starting lines from the baseline coverage file, which should "fix" it, at least temporarily.
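The gist of it (sketched here in C++ rather than Python, purely as an illustration; it assumes the standard lcov tracefile layout, where FN: records precede DA: records within each SF: section):

#include <iostream>
#include <string>
#include <unordered_set>

// Filter an lcov baseline tracefile on stdin: drop every DA: record
// whose line number matches the start line of an FN: record in the
// same SF: section, and pass everything else through unchanged.
int main() {
    std::unordered_set<long> fn_lines;
    std::string line;
    while (std::getline(std::cin, line)) {
        if (line.rfind("SF:", 0) == 0) {
            fn_lines.clear();                                // new source file section
        } else if (line.rfind("FN:", 0) == 0) {
            fn_lines.insert(std::stol(line.substr(3)));      // FN:<start line>,<name>
        } else if (line.rfind("DA:", 0) == 0) {
            if (fn_lines.count(std::stol(line.substr(3))))   // DA:<line>,<count>
                continue;                                    // skip function start lines
        }
        std::cout << line << '\n';
    }
}

Filtering the -i baseline through this before merging it with the test capture gets rid of the spurious function start lines.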

Related

Why a basic unreferenced c++ function does not get optimized away?

Consider this simple code:
#include <stdio.h>

extern "C"
{
    void p4nenc256v32();
    void p4ndec256v32();
}

void bigFunctionTest()
{
    p4nenc256v32();
    p4ndec256v32();
}

int main()
{
    printf("hello\n");
}
The code size of those p4nenc256v32/p4ndec256v32 functions is significant, roughly 1.5MB. The binary, when compiled with the latest VS2022 with optimizations enabled, is 1.5MB. If I comment out that unused bigFunctionTest function, the resulting binary is smaller by 1.4MB. Any ideas why this clearly unused function isn't eliminated by the compiler and/or linker in release builds? By default, VS2022 in release uses /Gy and /OPT:REF.
I also tried mingw64 (gcc 12.2) with -fdata-sections -ffunction-sections -Wl,--gc-sections, and the results were much worse: when compiled with that dummy function, the exe grew by 5.2MB. It seems the MS and GCC toolchains agree that for some reason these functions cannot be removed.
I created a working sample project that shows the issue: https://github.com/pps83/TestLinker.git (make sure to pull submodules as well) and filed an issue with the VS issue tracker: Linker doesn't eliminate correctly dead code. However, I think I might get a better explanation from SO users about what the reason for the problem might be.

Two GCC compiles for same input, two different codes generated (second one wrong)

I am having a strange issue with GCC (4.6.4, Ubuntu 12.04). I am using it to compile a huge project (hundreds of files and hundreds of thousands of lines of code), and I recently spotted something: after certain compiles (it seems to happen randomly), I get a specific piece of code compiled differently and erroneously, causing undefined behavior in my code:
class someDerivedClass : public someBaseClass
{
public:
    struct anotherDerived : public anotherBaseClass
    {
        void SomeMethod()
        {
            someMember->someSetter(2);
        }
    };
};
Where "someSetter" is defined as:
void someSetter(varType varName) { someOtherMember = varName; }
Normally, SomeMethod() gets compiled to:
00000000019fd910 mov 0x20(%rdi),%rax
00000000019fd914 movl $0x2,0x278c(%rax)
00000000019fd91e retq
But sometimes it gets (wrongfully) compiled to:
000000000196e4ee mov 0x20(%rdi),%rax
000000000196e4f2 movl $0x2,0x27d4(%rax)
000000000196e4fc retq
The setter seems to get inlined, probably because of the -O2 in the compile flags:
-std=c++11 -m64 -O2 -ggdb3 -pipe -Wliteral-suffix -fpermissive -fno-fast-math -fno-strength-reduce -fno-delete-null-pointer-checks -fno-strict-aliasing
but that's not the issue. The real issue is the offset of the member someOtherMember: 0x278c is correct (first case) but 0x27d4 is incorrect (second case), and this obviously ends up modifying a totally different member of the class. Why is this happening? What am I missing? (Also, I don't know what other relevant info I can post, so ask.) Please keep in mind that this happens when compiling the project again (either a full recompile or compiling only the modified files), without modifying the affected file (or the files with the classes involved). For example, just adding a simple printf() somewhere in a totally unrelated file might trigger this behavior, or make it go away when it happens.
Should I simply blame this on -O2? I can't reproduce it without the optimization flag because it happens totally at random.
I am using make -j 8; this happens even after cleaning the build folder, but doesn't necessarily happen only after doing that.
As stated in the comments, you probably have something that conditions the definition of your class differently in the various .cpp files, for example a #pragma pack or something like that before the inclusion of your .h; when the linker has to choose, it may choose non-deterministically (since it expects all the definitions to be the same).
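To see the effect in a single file (hypothetical types, purely illustrative):

#include <cstddef>
#include <cstdio>

#pragma pack(push, 1)   // as if a #pragma pack were active before the #include
struct PackedView { char c; int someOtherMember; };
#pragma pack(pop)

struct DefaultView { char c; int someOtherMember; };  // default alignment

int main() {
    // If two .cpp files see the same class definition under different
    // packing settings, a setter inlined in one file will use a
    // different offset than the same setter inlined in the other.
    std::printf("packed:  %zu\n", offsetof(PackedView, someOtherMember));   // prints 1
    std::printf("default: %zu\n", offsetof(DefaultView, someOtherMember));  // typically prints 4
}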
To narrow your search for the root of the problem, I would do something like this:
1. compile your whole project with debug symbols (-g);
2. use gdb to determine the offset of the "problematic" field according to each module;
3. once you find where the values differ, use gcc -E to expand all the preprocessor stuff and look for your problem.
As an aid for step 2, you can use this bash one-liner (to be run in the directory where the object files are):
for i in ./*.o; do echo -n "$i: "; gdb -batch -q "$i" -ex "print &((YourClass*)0)->yourField"; done

Linker does not point out errors; multiple definition warnings pointed to the same line

Believe me, I've been through several posts on here, but none of them addressed the issue I'm having. I have this 2-year-old program that used to build and run. I'm kind of reviving it, but for some reason it no longer builds.
Clearly, I'm having multiple definitions (too many of them):
============================ TERMINAL OUTPUT =============================
build_files/LinkedStack.o: In function `LinkedStack':
/home/owner/workspace/opencv-galaxies/utilities/structures/LinkedStack.cpp:12: multiple definition of `LinkedStack::LinkedStack()'
build_files/LinkedStack.o:/home/owner/workspace/opencv-galaxies/utilities/structures/LinkedStack.cpp:12: first defined here
... and so on, and so forth, ... and it all ends with:
collect2: ld returned 1 exit status
make: *** [executables/Assignment3.out] Error 1
========================================================================
Strangely, the linker does not indicate any errors throughout the extensive list of warnings, not to mention that these aren't true multiple definitions. Note that each "multiple definition ... first defined here" pair refers to the same line. Now I don't know what to do.
I'm wondering if it has something to do with the rather busy syntax of our makefile (though it looks really good to me):
=============================== MAKEFILE =================================
CFLAGS = -g -Wno-deprecated
OBJECTS = utilities/basic/image.h build_files/image.o build_files/ReadImage.o build_files/ReadImageHeader.o build_files/WriteImage.o build_files/LinkedStack.o build_files/unsortedList.o build_files/region.o build_files/Main.o

executables/Assignment3.out: $(OBJECTS)
	g++ $(OBJECTS) -o executables/Assignment3.out build_files/*.o $(CFLAGS) -lncurses
...
build_files/LinkedStack.o: utilities/structures/LinkedStack.h utilities/structures/LinkedStack.cpp
	g++ -c $(CFLAGS) utilities/structures/LinkedStack.cpp -o build_files/LinkedStack.o
...
clean:
	rm build_files/*.o executables/Assignment3.out
=========================================================================
So, these are my questions: 1) why does the linker fail without pointing out any errors, and 2) why am I getting so many multiple definitions?
If you want a clarification, let me know even if you kind of have an idea of what's going on.
============================== CODE EXAMPLE ==============================
Here's the full example function (I don't want to make this too long):
//constructor
LinkedStack::LinkedStack()
{
    topPtr = NULL; //set top pointer to null
}
========================================================================
Most likely you are including a header which implements methods non-inline into multiple translation units. The Makefile has nothing to do with it. You'll need to find the definition of the methods and see how they end up being included into multiple files. If they are actually in a header file, the easiest fix is probably to make them all inline.
The compiler doesn't see that you are including the header into multiple translation units, as it only ever processes one at a time. When the linker sees the various object files, it just sees many definitions of the same thing and complains. I would have thought that the linker pointed at the location of the definition, though.
Given the info you provided, it is hard to even guess what the problem is. However, make sure that
a) you do not include any .cpp file in another .cpp/.h file;
b) any implementation you define in a .h file is inlined.
(I would guess b is your problem)
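For (b), a minimal sketch of the fix, reusing the constructor from the question (the Node type is assumed here just for illustration):

// LinkedStack.h
#include <cstddef>

struct Node;        // assumed node type, for illustration

class LinkedStack {
public:
    LinkedStack();
private:
    Node* topPtr;
};

// If the constructor definition lives in the header, marking it inline
// tells the linker that the identical copies emitted by the various
// translation units are one and the same entity:
inline LinkedStack::LinkedStack()
{
    topPtr = NULL; // set top pointer to null
}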

GCC 4.6.3 - template specialization is influenced by the optimization level?

In an application I'm developing I have a template function like this:
template<class T>
void CIO::writeln(T item)
{
    stringstream ss;
    ss << item << '\r' << endl;
    write(ss.str());
}
This function is called from several places, with T = const char* and T = std::string. With CodeSourcery Lite 2008.03-41 (GCC 4.3.2) this compiled and linked fine with the -O3 compiler flag. However, since I changed to CodeSourcery Lite 2012.03-57 (GCC 4.6.3), compiling with -O3 is OK, but linking fails with an undefined reference to void CIO::writeln<std::string>(std::string). With -O2 or lower, everything is OK and linking succeeds.
I took a deeper look into this and discovered something strange in the assembly output: when compiling with -O2, I can find two specializations of the function, one for const char* (_ZN3CIO7writelnIPKcEEvT_) and one for std::string (_ZN3CIO7writelnISsEEvT_); but when compiling with -O3, the second specialization is missing, which explains the linking error.
Is this a compiler bug? Is this some weird optimization turned evil?
Thanks in advance!
Edit: this function is in a source file. Following Mike Seymour's comment, I moved it to the header and everything's fine now. I admit that I should've realized this earlier. Nevertheless, it still frightens me that whether a language rule is enforced depends on an optimization flag.
Unlike what the other answer says, this is probably not a compiler bug.
One of the optimisations that gets enabled by -O3 is function inlining. What I think is happening is:
Source file 1 is calling CIO::writeln without having its definition available. It is compiled to object file 1.
Source file 2 is calling CIO::writeln while having its definition available. It is compiled to object file 2.
Object file 1 will only be usable if object file 2 contains the definition of CIO::writeln. If the call in source file 2 gets inlined, object file 2 won't contain a definition for it. If the call does not get inlined, a definition will be available.
The solution given in the comments (move the definition to a header file) is correct.
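If moving the definition into the header is undesirable, an alternative (sketched here, using the names from the question) is to keep it in the source file and explicitly instantiate the specializations the rest of the program needs, so that their definitions are emitted regardless of inlining:

// at the end of CIO.cpp, after the definition of CIO::writeln
template void CIO::writeln<std::string>(std::string);
template void CIO::writeln<const char*>(const char*);

Either way the definitions become visible to the linker.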

gdb automatically steps into inline functions

I'm debugging a running program with gdb 6.6 on Solaris, and noticed that sometimes gdb steps into (inline) functions, even though I issued a next command.
My development host was recently reinstalled with a slightly newer build of Solaris 10, and I know for sure the auto-stepping was not present before the host was reinstalled. The code is compiled with the same options, since the makefiles and all the source code are unchanged since the reinstallation.
Is there any setting or new default option that influences gdb's stepping behaviour that I can check? Does anyone know why my gdb now auto-steps? It's a pain, really...
[edit] to clarify: I did not mean the inline keyword, but rather methods/functions which are implemented in the header file. Example:
header.hpp:
class MyClass
{
public:
    void someFunc() { /* ... does something */ }
};
source.cc:
{
    MyClass instance;
    instance.someFunc(); // doing NEXT in gdb will actually STEP into header.hpp
}
Your new version of Solaris may have included a new version of the C or C++ compiler, and the new compiler may be optimizing more aggressively than before. Check your optimization flags. If you are using GCC, you can disable inlining with -fno-inline (note that methods implemented inside the class body in header files are inlined by default; that can be disabled with -fno-default-inline). If you are using the native Solaris compiler, you will need to check its documentation.
A similar problem was reported here. In the comments, the poster mentioned that changing the debug symbol format to STABS resolved the issue.
You mentioned in a comment to my answer that STABS works but is not acceptable. Also, you mentioned that you are unable to reproduce the issue with a simple example. It will be difficult to troubleshoot this issue if you have to recompile your entire project each time to perform a test. Try to isolate the problem to a few source files in your project. See what they have in common (do they include a common header file, do they use a pragma, are their compilation options slightly different from the other source files, etc.), and try to create a small example with the same problem. This will make it easier to identify the root cause of your issue and determine how to resolve it. Without this data, we are just the blind leading the blind.