GDB documentation tells me that in order to compile for debugging, I need to ask my compiler to generate debugging symbols. This is done by specifying a '-g' flag.
Furthermore, the GDB documentation recommends I always compile with the '-g' flag. This sounds good, and I'd like to do that.
But first, I'd like to find out about downsides. Are there any penalties involved with compiling-for-debugging in production code?
I am mostly interested in:
GCC as the compiler of choice
Red Hat Linux as target OS
C and C++ languages
(Although information about other environments is welcome as well)
Many thanks!
If you use -g (which on recent GCC or Clang can be used with optimization flags like -O2):
compilation is slower (and linking will use a lot more memory)
the executable is a bigger file (see elf(5) and use readelf(1)...)
the executable carries a lot of information about your source code.
you can use GDB easily
some interesting libraries, like Ian Taylor's libbacktrace, require DWARF information (i.e. what -g produces)
If you don't use -g, it is harder (but still possible) to use the GDB debugger.
So if you transmit the binary executable to a partner who should not be able to understand how your source code was written, you need to avoid -g.
See also the strip(1) and strace(1) commands.
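For example, a quick way to see the debug payload, and then remove it before shipping, is something like this (a minimal sketch; hello.cpp is just a placeholder source file):
g++ -g hello.cpp -o hello
readelf -S hello | grep debug    # lists the .debug_* sections and their sizes
strip hello                      # removes symbols and debug info before shipping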
Notice that using the -g flag for debugging information is also valid for OCaml and Rust.
PS. Recent GCC (e.g. GCC 10 or GCC 11 in 2021) accepts many debugger flags. With -g3 your executable carries more debug information (e.g. descriptions of C++ macros and their expansion) than with -g or -g1. Of course, compilation time increases, and so does executable size. In principle, your GCC plugin (perhaps Bismon in 2021, or those inside the source code of the Linux kernel) could add even more debug information. In practice, you won't do that unless you can improve your debugger. However, a GCC plugin (or some #pragmas) can remove some debug information (e.g. the debug information for a selected set of functions).
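As a small illustration of the -g3 difference (prog.cpp and MY_MACRO are invented names):
g++ -g3 prog.cpp -o prog    # -g3 records macro definitions in the DWARF data
gdb ./prog                  # inside GDB, 'info macro MY_MACRO' now works;
                            # with plain -g, GDB has no macro information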
Generally, adding debug information increases the size of the binary files (or creates extra files for the debug information). That's nowadays usually not a problem, unless you're distributing it over slow networks. And of course this debug information may help others in analyzing your code, if they want to do that. Typically, the -g flag is used together with -O0 (the default), which disables compiler optimization and generates code that is as close as possible to the source, so that debugging is easier. While you can use debug information together with optimizations enabled, this is really tricky, because variables may not exist, or the sequence of instructions may be different than in the source. This is generally only done if an error needs to be analyzed that only happens after the optimizations are enabled. Of course, the downside of -O0 is poorer performance.
So as a conclusion: Typically one uses -g -O0 during development, and for distribution or production code just -O3.
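In other words, the two typical invocations look something like this (main.cpp is a placeholder):
g++ -g -O0 main.cpp -o app_debug     # development: full debug info, no optimization
g++ -O3 main.cpp -o app_release      # production: optimized, no debug info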
Related
I am doing some tests and I realized that using the -G parameter when compiling gives me worse performance than compiling without it.
I have checked the documentation in Nvidia:
--device-debug (-G)
Generate debug information for device code.
But it does not help me understand why the performance is so bad.
Where and when is this debug information generated? And what could be the cause of this bad performance?
Using the -G switch disables most compiler optimizations that nvcc might do in device code. The resulting code will often run slower than code that is not compiled with -G, for this reason.
This is pretty easy to see by running your executable in each case through cuobjdump -sass myexecutable and looking at the generated device code. You'll see generally less device code in the non -G case, and you can see the differences in specific optimizations as well.
One of the reasons for this is that highly optimized device code may eliminate actual lines of source code and actual source code variables. This can make it very difficult to debug code. Therefore to enable debugging, most optimizations are disabled with -G.
Also note that with Thrust, using the -G switch may result in unpredictable behavior. Newer versions of Thrust should behave better, but there may still be unexpected issues when compiling Thrust code with -G.
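To compare the generated device code yourself, a session along these lines should work (a sketch; kernel.cu is a placeholder for your CUDA source):
nvcc -G kernel.cu -o app_debug              # device debug info, most device optimizations off
nvcc kernel.cu -o app_release               # default: device optimizations on
cuobjdump -sass app_debug > debug.sass      # dump the device assembly (SASS)
cuobjdump -sass app_release > release.sass  # diff the two: the release SASS is usually shorter and tighter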
I'm compiling a program with -O3 for performance and -g for debug symbols (in case of a crash I can use the core dump). One thing bothers me a lot: does the -g option result in a performance penalty? When I look at the output of the compilation with and without -g, I see that the output without -g is 80% smaller than the output of the compilation with -g. If the extra space goes to the debug symbols, I don't care about it (I guess), since this part is not used during runtime. But if for each instruction in the output without -g I need to execute 4 more instructions in the output with -g, then I certainly prefer to stop using the -g option, even at the cost of not being able to process core dumps.
How can I find the size of the debug symbol sections inside the program, and, in general, does compiling with -g create a program that runs slower than the same code compiled without -g?
Citing from the GCC documentation:
GCC allows you to use -g with -O. The shortcuts taken by optimized
code may occasionally produce surprising results: some variables you
declared may not exist at all; flow of control may briefly move where
you did not expect it; some statements may not be executed because
they compute constant results or their values are already at hand;
some statements may execute in different places because they have been
moved out of loops.
that means:
I will insert debugging symbols for you, but I won't try to retain them if an optimization pass optimizes them away; you'll have to deal with that.
Debugging symbols aren't written into the code but into separate sections (the .debug_* sections), which aren't even loaded at runtime (only by a debugger). That means: no code changes. You shouldn't notice any performance difference in code execution speed, but you might experience some slowness if the loader needs to deal with the larger binary or if it takes the increased binary size into account somehow. You will probably have to benchmark the app yourself to be 100% sure in your specific case.
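As for measuring the size of those sections, you can inspect them directly and even split them into a separate file; a sketch of the standard workflow, with app and main.cpp as placeholder names:
g++ -g -O3 main.cpp -o app
readelf -S app | grep debug                # shows each .debug_* section with its size
objcopy --only-keep-debug app app.debug    # keep the debug info in a separate file
strip --strip-debug app                    # ship the binary without it
objcopy --add-gnu-debuglink=app.debug app  # let GDB find the split debug file later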
Notice that there's also another option, available since GCC 4.8:
-Og
Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.
This flag will impact performance because it disables any optimization pass that would interfere with debug info.
Finally, it might even happen that some optimizations are better suited to one architecture than another, and unless instructed to tune for your specific processor (see the -march/-mtune options for your architecture), at -O3 GCC will do its best for a generic architecture. That means you might even experience -O3 being slower than -O2 in some contrived scenarios. "Best effort" doesn't always mean "the best available".
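A sketch of the resulting flag choices (main.cpp is a placeholder):
g++ -Og -g main.cpp -o app_dev         # edit-compile-debug cycle: debuggable, mildly optimized
g++ -O3 -march=native main.cpp -o app  # tuned for the build machine's CPU rather than a generic one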
When I build the source in debug mode, the stack shown is totally different, and in a release build there are only a few methods shown in the backtrace with GDB.
Why does this happen? Is this because in debug mode there are extra methods? How can two methods have the same address in debug and release mode?
Also, in that case, how can I build so as to have accurate address information with a complete stack trace? Any help would be appreciated since I am new to debugging on Linux; Windows seemed much easier, with PDB files.
As discussed in the comments to #rockoder's answer, besides lacking debug symbols (which would be included with -g), in an optimized build whole function calls may no longer be present due to inlining.
When I try to build the source with debug mode the stack shown is totally different and in case of release there are only a few methods shown in the backtrace with gdb. Why does this happen? Is this because in debug mode there are extra methods?
It is probably just due to compiler optimizations. What you call a release build is probably built with compiler speed optimizations enabled and debug symbols disabled. Speed optimizations include inlining, which copies a function's code in place instead of calling it, so the function is not visible in the call stack.
There could also be some extra/different methods, if the code was written with some appropriate preprocessor checks.
How can two methods have the same address in debug and release mode?
Depends on what your debug and release modes are. If they use the same compiler optimizations and differ only in debug information, methods will have the same addresses. If your debug build is not optimized (-O0 on GCC), then methods will be larger, as much unnecessary work is done; for example, variables are read from memory before every manipulation and written back after it. Since each method will then differ in size, functions will end up at different addresses, as they are generally packed one after another.
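To make the inlining point above concrete, here is a minimal sketch (file and function names are invented):
// inline_demo.cpp
#include <cstdio>

static int square(int x) { return x * x; }   // small function, a prime candidate for inlining

int main() {
    std::printf("%d\n", square(21));
    return 0;
}
Compile both ways and compare backtraces:
g++ -O0 -g inline_demo.cpp -o demo_debug    # 'square' is a real call; it shows up in backtraces
g++ -O2 -g inline_demo.cpp -o demo_release  # 'square' is typically inlined into main and
                                            # disappears from the call stack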
Also, in that case, how can I build so as to have accurate address information with a complete stack trace?
Enable debug information. On GCC that would be -g3 (or -g or similar). This adds enough information for code address <-> source line queries (either from debugger or crash stack dump).
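For instance, once the binary is built with -g, you can translate a crash address from a stack dump back to a source line with addr2line (the address below is made up):
addr2line -e myapp -f -C 0x401234   # -f prints the function name, -C demangles C++ names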
Any help would be appreciated since I am new to debugging on Linux; Windows seemed much easier, with PDB files.
Are there any significant differences from debugging Windows binaries?
g++/gcc has many options used for debugging a program, but the most common one is -g; it is the first option discussed in the linked documentation.
Example:
Compile code without -g:
g++ broken.cpp -o broken_release
Compile code with -g:
g++ -g broken.cpp -o broken_debug
Now run ls -l and note the size difference between the files broken_release and broken_debug. The size of broken_debug should be greater than that of broken_release. That's because it contains debug information which can be used by debuggers like GDB.
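You can also confirm where the extra bytes live; readelf (mentioned above) lists the debug sections that exist only in the debug build:
readelf -S broken_debug | grep debug   # .debug_info, .debug_line, etc.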
I am using -O3 when compiling the code, and now I need to profile it. For profiling, there are two main choices I came across: valgrind --tool=callgrind and gprof.
Valgrind (callgrind) docs state:
As with Cachegrind, you probably want to compile with debugging info (the -g option) and with optimization turned on.
However, in the C++ optimization book by Agner Fog, I have read the following:
Many optimization options are incompatible with debugging. A debugger can execute a
code one line at a time and show the values of all variables. Obviously, this is not possible
when parts of the code have been reordered, inlined, or optimized away. It is common to
make two versions of a program executable: a debug version with full debugging support
which is used during program development, and a release version with all relevant
optimization options turned on. Most IDE's (Integrated Development Environments) have
facilities for making a debug version and a release version of object files and executables.
Make sure to distinguish these two versions and turn off debugging and profiling support in
the optimized version of the executable.
This seems to conflict with the callgrind instructions to compile the code with the debugging info flag -g. If I enable debugging in the following way:
-ggdb -DFULLDEBUG
am I not causing this option to conflict with the -O3 optimization flag? Using those two options together makes no sense to me after what I have read so far.
If I use say -O3 optimization flag, can I compile the code with additional profiling info by using:
-pg
and still profile it with valgrind?
Does it ever make sense to profile a code compiled with
-ggdb -DFULLDEBUG -O0
flags? It seems silly - not inlining functions and unrolling loops may shift the bottlenecks in the code, so this should be used for development only, to get the code to actually do stuff properly.
Does it ever make sense to compile the code with one optimization flag, and profile the code compiled with another optimization flag?
Why are you profiling? Just to get measurements or to find speedups?
The common wisdom that you should only profile optimized code assumes the code is nearly optimal to begin with; if there are significant speedups still to be found, it is not.
You should treat the finding of speedups as if they were bugs, and many people hunt them that way.
After you've removed needless computations, if you still have tight CPU loops, i.e. you're not spending all your time in system or library or I/O routines the optimizer doesn't see, then turn on -O3, and let it do its magic.
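A plausible workflow for the optimized-profiling case, then, looks like this (prog.cpp is a placeholder; callgrind writes a callgrind.out.<pid> file in the working directory):
g++ -O2 -g prog.cpp -o prog          # optimized build, with -g so results map back to source lines
valgrind --tool=callgrind ./prog     # run under callgrind
callgrind_annotate callgrind.out.*   # per-function (and per-line) cost summary

g++ -O2 -pg prog.cpp -o prog_gprof   # gprof route: -pg adds instrumentation (unlike -g,
./prog_gprof                         #   this one does slow the program down)
gprof prog_gprof gmon.out            # analyze the gmon.out file the run produced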
I am just starting with the g++ compiler on Linux and have some questions about the compiler flags. Here they are.
Optimizations
I read about the optimization flags -O1, -O2 and -O3 in the g++ manual page. I didn't understand when to use these flags. What optimization level do you usually use? The g++ manual says the following about -O2.
Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or function inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.
If it does no inlining and loop unrolling, how are the said performance benefits achieved, and is this option recommended?
Static Library
How do I create a static library using g++? In Visual Studio, I can choose a class library project and it will be compiled into a "lib" file. What is the equivalent in g++?
The rule of thumb:
When you need to debug, use -O0 (and -g to generate debugging symbols.)
When you are preparing to ship it, use -O2.
When you use Gentoo, use -O3...!
When you need to put it on an embedded system, use -Os (optimize for size, not for efficiency.)
The GCC manual lists all the options implied by every optimization level. At -O2, you get things like constant folding, branch prediction and the like, which can significantly change the speed of your application, depending on your code. The exact options are version dependent, but they are documented in great detail.
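You can also ask GCC itself which optimizations a given level enables:
gcc -Q -O2 --help=optimizers   # prints each optimization flag and whether -O2 enables it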
To build a static library, you use ar as follows:
ar rc libfoo.a foo.o foo2.o ....
ranlib libfoo.a
ranlib is not always necessary, but there is no reason not to use it.
Regarding when to use what optimization option - there is no single correct answer.
Certain optimization levels may, at times, decrease performance. It depends on the kind of code you are writing and the execution pattern it has, and depends on the specific CPU you are running on.
(To give a simple canonical example - the compiler may decide to use an optimization that makes your code slightly larger than before. This may cause a certain part of the code to no longer fit into the instruction cache, at which point many more accesses to memory would be required - in a loop, for example).
It is best to measure and optimize for whatever you need. Try, measure and decide.
One important rule of thumb - the more optimizations are performed on your code, the harder it is to debug it using a debugger (or read its disassembly), because the C/C++ source view gets further away from the generated binary. It is a good rule of thumb to work with fewer optimizations when developing / debugging for this reason.
There are many optimizations that a compiler can perform, other than loop unrolling and inlining. Loop unrolling and inlining are specifically mentioned there since, although they make the code faster, they also make it larger.
To make a static library, use 'g++ -c' to generate the .o files and 'ar' to archive them into a library.
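Putting the two steps together, with placeholder file names, the full sequence looks like:
g++ -c foo.cpp foo2.cpp        # compile to foo.o and foo2.o
ar rc libfoo.a foo.o foo2.o    # archive them into a static library
ranlib libfoo.a                # add the symbol index
g++ main.cpp -L. -lfoo -o app  # link a program against the library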
Regarding the static library question, the answer given by David Cournapeau is correct, but you can alternatively use the 's' flag with 'ar' rather than running ranlib on your static library file. The 'ar' manual page states that
Running ar s on an archive is equivalent to running ranlib on it.
Whichever method you use is just a matter of personal preference.
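So, with the 's' flag folded in, the earlier two-step example collapses to a single command:
ar rcs libfoo.a foo.o foo2.o   # create the archive and write the symbol index in one go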