I understand that Valgrind can run its Memcheck tool to check for memory leaks, and in that case the compiled C++ executable must contain debug information. If I want to use Valgrind/Callgrind for profiling instead, must the executable also contain debug information? I have run a small test, and Valgrind/Callgrind seems to work on release executables without debug information. Can anyone confirm this?
From the official Valgrind documentation, the following information can be found:
2.2. Getting started
First off, consider whether it might be beneficial to recompile your application and supporting libraries with debugging info enabled (the -g option).
Without debugging info, the best Valgrind tools will be able to do is guess which function a particular piece of code belongs to, which makes both error messages and profiling output nearly useless. With -g, you'll get messages which point directly to the relevant source code lines.
Another option you might like to consider, if you are working with C++, is -fno-inline. That makes it easier to see the function-call chain, which can help reduce confusion when navigating around large C++ apps. For example, debugging OpenOffice.org with Memcheck is a bit easier when using this option. You don't have to do this, but doing so helps Valgrind produce more accurate and less confusing error reports. Chances are you're set up like this already, if you intended to debug your program with GNU GDB, or some other debugging tool.
Hence the recommended first step is to recompile your program with the -g option to get the maximum information from Valgrind.
According to the valgrind manual:
http://valgrind.org/docs/manual/manual-core.html
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes -O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors, or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether. Since this often makes things unmanageably slow, a reasonable compromise is to use -O. This gets you the majority of the benefits of higher optimisation levels whilst keeping relatively small the chances of false positives or false negatives from Memcheck. Also, you should compile your code with -Wall because it can identify some or all of the problems that Valgrind can miss at the higher optimisation levels. (Using -Wall is also a good idea in general.) All other tools (as far as we know) are unaffected by optimisation level, and for profiling tools like Cachegrind it is better to compile your program at its normal optimisation level.
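For example, a typical workflow might look like this (myprog and main.cpp are placeholder names; adjust to your build system):

    g++ -g -O1 -Wall -o myprog main.cpp     # debug info plus light optimisation
    valgrind --leak-check=full ./myprog     # Memcheck is the default tool
    valgrind --tool=callgrind ./myprog      # profiling with Callgrind
    callgrind_annotate callgrind.out.<pid>  # summarise the Callgrind output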
Related
I'm currently using VSCode, g++, C++20, and Ubuntu 20.04 LTS.
What compiler flags can I use for release builds and debug builds separately? Do I turn off every optimization flag for debug builds? Or does it not really matter? I would appreciate any advice, recommendations, or feedback as I couldn't find much on my own.
Do I turn off every optimization flag for debug builds?
Yes, I would say that is the best way to go, and it does really matter! Depending on your code, your understanding of the compiler/debugger and level of optimisation chosen, the experience of debugging it will vary from mildly annoying to frustrating and almost useless. This answer gives a synopsis of the different levels for gcc and this question has several answers going into more detail about optimisations.
As a summary, the compiler is in general allowed to modify your code in any way it sees fit, as long as it still behaves as if all your statements were executed exactly as written. In practice, -O1 already enables dozens of techniques and -O2 and -O3 will probably leave almost nothing untouched, which makes it harder to pinpoint issues because:
Stepping through code may visit statements in a different order or skip them entirely, also hindering the use of breakpoints;
Function calls may disappear because they were inlined, and may no longer be callable from the debugger prompt (see the sketch after this list);
Local variables tend to have shorter lifetimes than in your source code, so you can't always query their values.
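As a minimal sketch of the last two points (the function and variable names are made up; exact behaviour depends on the compiler):

    int square(int x) { return x * x; }  // likely inlined away at -O1 and above

    int main() {
        int tmp = square(21);  // 'tmp' may live only in a register, or the whole
                               // expression may be folded to the constant 441 at
                               // compile time, so the debugger cannot show it
        return tmp - 441;
    }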
I personally build with CMake and primarily use two of its build types:
Debug (-g): No optimisations, compiles runtime assert statements;
RelWithDebInfo (-O2 -g -DNDEBUG): Fast code without these assertions that is harder to debug, but suitable for performance analysis once your program is working correctly.
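For reference, configuring and building those two types from the command line might look like this (the directory names are placeholders):

    cmake -S . -B build-debug -DCMAKE_BUILD_TYPE=Debug
    cmake --build build-debug
    cmake -S . -B build-relwithdebinfo -DCMAKE_BUILD_TYPE=RelWithDebInfo
    cmake --build build-relwithdebinfo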
I have a C++ code that segfaults with optimization flags, but not when I run it with debug flags. This precludes me from using a debugger. Are there any other ways/guidelines apart from a barrage of cout statements?
I am on a *nix platform and using intel-12.1 compilers and I am quite certain that it is a memory issue that I need to catch with valgrind. The only thing that puzzles me is why it does not show in the debug mode.
Valgrind is a useful tool on Unix-based systems for troubleshooting release-mode executables (gflags and WinDbg are useful on Windows).
I also recommend not giving up on your debugger - you can run non-debug executables inside a debugger and still get useful information about the segfault. Often you can also add in some level of debug information, even with optimizations turned on, to give you more context. You might also check for any debug-mode heap-checking facility the Intel compiler provides, as memory errors can go undetected in debug builds (due to different memory management).
Also note that there are usually multiple levels of optimization you can use for "release mode". You might try backing down to a less aggressive optimization level, and see if the error still occurs.
You might also check the Intel compiler web site to see if there have been any bugfixes/bug reports regarding optimization for the compiler version you're using.
If none of these help, you can try using an alternate compiler (unless you're using something Intel-specific) to see if the problem is compiler-related or not.
Finally, as klm123 noted, commenting out blocks is a good way to localize the problem.
I'm writing a program in C++/Qt which contains a graph file parser. I use g++ to compile the project.
While developing, I am constantly comparing the performance of my low level parser layer between different compiler flags regarding optimization and debug information, plus Qt's debug flag (turning on/off qDebug() and Q_ASSERT()).
Now I'm facing a problem where the only correctly functioning build is the one without any optimization. All other versions, even with -O1, seem to behave differently. They crash due to unsatisfied assertions, which are satisfied when compiled without a -O... flag. The code doesn't produce any compiler warnings, even with -Wall.
I am very sure that there is a bug in my program which only does harm when optimization is enabled. The problem is: I can't find it even when debugging the program. The parser seems to read wrong data from the file. When I run some simple test cases, they run perfectly. When I run a bigger test case (a route calculation on a graph read directly from a file), there is an incorrect read from the file which I can't explain.
Where should I start tracking down this undefined behavior? Which optimization techniques could possibly be involved in this difference in behavior? (I could enable all flags one after the other, but I don't know many compiler flags other than -O..., and I know there are a lot of them, so this would take a very long time.) As soon as I know what type of bug it is, I am sure I will find it sooner or later.
You can help me a lot if you can tell me which compiler optimization methods are possible candidates for such problems.
There are a few classes of bugs that commonly arise in optimized builds but often don't arise in debug builds (a small example of each follows the list).
Uninitialized variables. The compiler can catch some but not all. Look at all your constructors, look at global variables, etc. Particularly look for uninitialized pointers. In a debug build, memory is often reset to zero (or to a known fill pattern), but in a release build it isn't.
Use of temporaries that have gone out of scope. For example, when you return a reference to a local temporary from a function. These often work in debug builds because the stack is padded out more; the temporaries tend to survive on the stack a little longer.
Array overruns when writing to temporaries. For example, if you create an array as a temporary in a function and then write one element beyond the end. Again, the stack will have extra space in debug builds (for debugging information), so your overrun won't hit program data.
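A minimal sketch of all three bug classes (deliberately buggy code; all names are made up for illustration):

    // 1. Uninitialized variable: 'count' may happen to be zero in a debug
    //    build but holds stack garbage in a release build.
    int countEvens(const int* data, int n) {
        int count;                       // BUG: never initialized
        for (int i = 0; i < n; ++i)
            if (data[i] % 2 == 0) ++count;
        return count;
    }

    // 2. Dangling reference: 'local' is destroyed when the function
    //    returns; the padded debug stack often hides the damage.
    const int& makeValue() {
        int local = 42;
        return local;                    // BUG: reference to dead local
    }

    // 3. Array overrun: writes one element past the end of a local array;
    //    in a debug build the extra stack space may absorb the write.
    void fillArray() {
        int arr[4];
        for (int i = 0; i <= 4; ++i)     // BUG: should be i < 4
            arr[i] = i;
    }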
There are individual optimizations you can disable in the optimized build to make debugging the optimized version easier:
-g -O1 -fno-inline -fno-loop-optimize -fno-if-conversion -fno-if-conversion2 \
-fno-delayed-branch
This should make stepping through your code in the debugger a little easier to follow.
Another suggestion is that if the assertions you have do not give you enough information about what is causing the problem, you should consider adding more assertions. If you are afraid of performance issues, or assertion clutter, you can wrap them in a macro. This allows you to distinguish the debugging assertions from the ones you originally added, so they can be disabled or removed from your code later.
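A minimal sketch of such a wrapper macro (the macro and flag names are made up):

    #include <cassert>

    // Extra debugging assertions, toggled independently of NDEBUG:
    // compile with -DENABLE_EXTRA_CHECKS to turn them on.
    #ifdef ENABLE_EXTRA_CHECKS
    #  define EXTRA_ASSERT(cond) assert(cond)
    #else
    #  define EXTRA_ASSERT(cond) ((void)0)
    #endif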
1) Use valgrind on the broken version. (For that matter, try valgrind on the working version, maybe you'll get lucky.)
2) Build the system with "-O1 -g" and step through your program with gdb. At the crash, what variable has an incorrect value? Re-run your program and note when that variable is written to (or when it isn't and should have been.)
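A session for step 2 might look like this (myprog and bad_var are placeholders):

    g++ -O1 -g -o myprog main.cpp
    gdb ./myprog
    (gdb) run               # reproduce the crash
    (gdb) bt                # stack trace at the crash site
    (gdb) print bad_var     # which variable holds a wrong value?
    (gdb) watch bad_var     # re-run with a watchpoint on that variable
    (gdb) run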
I'm facing a problem that is so mysterious, that I don't even know how to formulate this question... I cannot even post any piece of code.
I develop a big project on my own, started from scratch. It's nearly release time, but I can't get rid of some annoying error. My program writes an output file from time to time and during that I get either:
std::string out_of_range error
std::string length_error
just lots of nonsense on output
Worth noting that those errors appear very rarely and can never be reproduced, even with the same input. Memcheck shows no memory violations, even on runs where errors were previously noted. Cppcheck has no complaints either. I use the STL and pthreads intensively, but the errors also happen without the latter.
I tried both the newest g++ and icpc. I am running on some version of Ubuntu, but I don't believe that's the reason.
I would appreciate any help from you, guys, on how to tackle such problems.
Thanks in advance.
Enable coredumps (ulimit -c or setrlimit()), get a core, and start gdb'ing. Or, if you can, make a setup where you always run under gdb, so that when the error eventually happens you have some information available.
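For example (the program name is a placeholder; depending on your system's core_pattern, the core file may be named differently or routed through a crash handler such as apport):

    ulimit -c unlimited    # allow core files in this shell
    ./myprog               # wait for the crash to happen
    gdb ./myprog core      # load the resulting core dump
    (gdb) bt               # stack trace at the moment of the crash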
The symptoms hint at a memory corruption.
If I had to guess, I'd say that something is corrupting the internal state of the std::string object that you're writing out. Does the string object live on the stack? Have you eliminated stack smashing as a possible cause (that wouldn't be detectable by valgrind)?
I would also suggest running your executable under a debugger, set up in such a way that it would trigger a breakpoint whenever the problem happens. This would allow you to examine the state of your process at that point, which might be helpful in figuring out what's going on.
gdb and valgrind are very useful tools for debugging errors like this. valgrind is especially powerful for identifying memory access problems and memory leaks.
I encountered strange optimization bugs in gcc (like a ++i being assembled to i++ in rare circumstances). You could try declaring some critical variables volatile but if valgrind doesn't find anything, chances are low. And of course it's like shooting in the dark...
If you can at least detect that something is wrong in a certain run from inside the program, like detecting nonsensical output, you could then call an empty "gotNonsense()" function that you can break into with gdb.
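A minimal sketch of that trick (gotNonsense is the name from the suggestion above; the validity check is a made-up placeholder):

    #include <string>

    // Deliberately empty; exists only as a stable breakpoint target.
    // noinline keeps the optimizer from removing the call (GCC/Clang).
    __attribute__((noinline)) void gotNonsense() {}

    // Hypothetical sanity check on an output line.
    bool looksLikeNonsense(const std::string& line) {
        return line.empty() || line.size() > 4096;   // placeholder heuristic
    }

    void writeOutput(const std::string& line) {
        if (looksLikeNonsense(line))
            gotNonsense();   // (gdb) break gotNonsense
        // ... write the line as usual ...
    }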
If you cannot determine where exactly in the code your program crashes, one way to find that place would be to use debug output. Debug output is a good way of debugging bugs that cannot be reproduced, because you will get more information about the bug the next time it happens, without needing to actively reproduce it. I recommend using a logging library for that; Boost provides one, for example.
You are using the STL intensively, so you can try running your program with libstdc++ in debug mode. It will do extra checks on iterators, containers, and algorithms. To use the libstdc++ debug mode, compile your application with the compiler flag -D_GLIBCXX_DEBUG.
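For example, this out-of-range access compiles cleanly but aborts with a diagnostic when built in debug mode (a minimal sketch):

    // build: g++ -D_GLIBCXX_DEBUG -g main.cpp
    #include <vector>

    int main() {
        std::vector<int> v(3);
        return v[5];   // caught by libstdc++ debug mode, aborts with an error
    }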
Many times I work with optimized code (sometimes even involving vectorized loops), which contain bugs and such. How would one debug such code? I'm looking for any kind of tools or techniques. I use the following (possibly outdated) tools, so I'm looking to upgrade.
I use the following:
Since with ddd you cannot see the code, I use gdb plus the disassemble command and look at the produced code; I can't really step through the program using this.
ndisasm
Thanks
It is always harder to debug optimised programs, but there are always ways. Some additional tips:
Make a debug build, and see if you get the same bug in a debug build. No point debugging an optimised version if you don't have to.
Use valgrind if on a platform that supports it. The errors you see may be harder to understand, but catching the problem early often simplifies debugging.
printf debugging is primitive, but sometimes it is the simplest way if you have a complex issue that only shows up in optimised builds.
If you suspect a timing issue (especially in a multithreaded program), roll your own version of assert which aborts or prints if the condition is violated (see the sketch after this list), and use it in a few select places to rule out possible problems.
See if you can reproduce the problem with -O2 or -O3 enabled but without -fomit-frame-pointer, since that flag makes code very hard to debug. That might give you enough information to find the cause of your problem.
Isolate parts of your code, build a test-suite, and see if you can identify any testcases which fail. It is much easier to debug one function than the whole program.
Try turning off optimisations one by one with the -fno-X options. This might help you find common problems like strict aliasing problems.
Turn on more compiler warnings. Some things, like strict aliasing problems, can generate compiler warnings if they create a difference in behaviour between different optimisation levels.
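A minimal sketch of the roll-your-own assert mentioned above (the macro name is made up); unlike the standard assert(), it stays active regardless of NDEBUG:

    #include <cstdio>
    #include <cstdlib>

    // Always-on check for optimised builds: prints the failing condition
    // and its location, then aborts so a debugger or core dump catches it.
    #define CHECK(cond)                                             \
        do {                                                        \
            if (!(cond)) {                                          \
                std::fprintf(stderr, "CHECK failed: %s (%s:%d)\n",  \
                             #cond, __FILE__, __LINE__);            \
                std::abort();                                       \
            }                                                       \
        } while (0)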
When debugging release builds you can put in __asm nop; as a placeholder for breakpoints (int 3). This is nice as you can guarantee breakpoint locations without messing up compiler optimizations or writing printf/cout statements.
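A sketch of that idea for the two common compiler families (note that MSVC's __asm inline assembly is x86-only; the macro name is made up):

    #if defined(_MSC_VER)
    #  define BREAK_ANCHOR __asm nop            // MSVC inline assembly (x86)
    #elif defined(__GNUC__)
    #  define BREAK_ANCHOR asm volatile("nop")  // GCC/Clang inline assembly
    #endif

    void process() {
        BREAK_ANCHOR;   // set a breakpoint on this line in the debugger
        // ...
    }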
It's always easier to debug a non-optimized version, of course. Failing that, disassembly of the code can be helpful. Other techniques I've used include partially de-optimizing the code by forcing intermediate results to be printed or logged, or changing a critical variable to volatile so I can at least look at its value in the debugger.
Chances are what you call optimized code is scrambled to shave cycles (which makes debugging hard) but is not really very optimized. Here is an example of what I mean.
I would turn off the compiler optimization, debug and tune it yourself, and then turn compiler optimization back on if the code has hotspots that are actually in code the compiler sees (not in outside libraries). (I define a hotspot as a part of code where the PC is often found. That automatically exempts loops containing function calls because they steal away the PC.)