I know too much optimization doesn't make much sense for debug code.
But what about using -march=native to make better use of the instruction set?
EDIT:
Let's reformulate this. I know enabling optimizations and debug mode at the same time might have disadvantages like:
GCC allows you to use -g with -O. The shortcuts taken by optimized code may occasionally produce surprising results: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values were already at hand; some statements may execute in different places because they were moved out of loops.
So my question is, does -march=native have similar side effects or is it sensible to use it in debug code as well?
The problem with optimization is that aggressive optimization passes which alter control flow can confuse debuggers. -march=native may enable additional optimizations (cmov, for example) if those passes have been enabled with a -O option, but it will not in itself confuse the debugger.
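If you want to see exactly which instruction-set extensions -march=native selects on your machine, you can ask GCC directly with its option-query mode; this works independently of any -O level:

    g++ -march=native -Q --help=target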
Currently using VSCode, g++, C++20, Ubuntu 20.04 LTS.
What compiler flags can I use for release builds and debug builds separately? Do I turn off every optimization flag for debug builds? Or does it not really matter? I would appreciate any advice, recommendations, or feedback as I couldn't find much on my own.
Do I turn off every optimization flag for debug builds?
Yes, I would say that is the best way to go, and it does really matter! Depending on your code, your understanding of the compiler/debugger and level of optimisation chosen, the experience of debugging it will vary from mildly annoying to frustrating and almost useless. This answer gives a synopsis of the different levels for gcc and this question has several answers going into more detail about optimisations.
As a summary, the compiler is in general allowed to modify your code in any way it sees fit, as long as it still behaves as if all your statements were executed exactly as written. In practice, -O1 already enables dozens of techniques and -O2 and -O3 will probably leave almost nothing untouched, which makes it harder to pinpoint issues because:
Stepping through code may visit statements in a different order or skip them entirely, also hindering the use of breakpoints;
Function calls may disappear because they were inlined, and no longer be callable from the debugging prompt;
Local variables tend to have shorter lifetimes than in your source code, so you can't always query their values.
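As a concrete illustration (a minimal sketch; the numbers are arbitrary), GCC at -O1 and above will typically fold the loop below into the constant 4950, so a breakpoint inside the loop never fires and i cannot be printed:

    int main() {
        int sum = 0;
        for (int i = 0; i < 100; ++i)
            sum += i;        // usually constant-folded away entirely
        return sum % 256;    // exit status keeps the result observable
    }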
I personally build with CMake and primarily use two of its build types:
Debug (-g): No optimisations, compiles runtime assert statements;
RelWithDebInfo (-O2 -g -DNDEBUG): Fast code without these assertions that is harder to debug, but suitable for performance analysis once your program is working correctly.
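For completeness, selecting between the two from the command line looks like this (the directory names are placeholders; the -S/-B form needs CMake 3.13 or newer):

    cmake -S . -B build-debug -DCMAKE_BUILD_TYPE=Debug
    cmake --build build-debug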
I've seen this thread, and I had the same question, but it isn't really answered there: GCC standard optimizations behavior
I'm trying to figure out exactly which flag is causing an incredible boost in performance at -O1. I first found out which flags are set, using g++ -O1 -Q --help=optimizers, and then took each of the enabled ones and passed them to g++ explicitly. But the output was different (the resulting binaries were of different sizes).
How do I handpick optimizations for g++ or is this not possible?
Not all optimizations have individual flags, so no combination of them will generate the same code as using -O1 or any of the other general optimization-enabling options (-Os, -O2, etc.). Also, I imagine that a lot of the specific optimization options are ignored when you use -O0 (the default), because they require passes that are skipped unless optimization has been enabled in general.
To try to narrow down your performance increase you can try using -O1 and then selectively disabling optimizations. For example:
g++ -O1 -fno-peephole -fno-tree-cselim -fno-var-tracking ...
You still might not have luck this way, though. It might be that multiple optimizations in combination are producing your performance increase. It could also be the result of optimizations not covered by any specific flag.
I also doubt that better cache locality resulted in your "incredible boost in performance". If so, it was likely a coincidence, especially at -O1. Big performance increases usually come about because GCC was able to eliminate a chunk of your code, either because it didn't actually have any net effect, because it always resulted in the same value being computed, or because it invoked undefined behaviour.
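A minimal sketch of the kind of elimination I mean (names and numbers made up): the loop's result is never used, so GCC at -O1 deletes the whole loop via dead-code elimination, and the "speedup" is simply the work vanishing:

    static unsigned busy() {
        unsigned acc = 0;
        for (unsigned i = 0; i < 100000000u; ++i)
            acc += i * i;    // acc is never read below, so the loop is dead
        return 0;
    }

    int main() {
        return busy();       // at -O1 this is effectively `return 0;`
    }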
I'm having a really strange problem using GCC for ARM with the optimizations turned on. Compiling my C++ application without the optimizations produces an executable that outputs the expected results at runtime. As soon as I turn on the optimizations - that is, -O1 - my application fails to produce the expected results. I tried for a couple of days to spot the problem but I'm clueless. I eliminated any uninitialized variables from my code and I corrected the spots where strict aliasing could cause problems, but still I do not have the proper results. I'm using GCC 4.2.0 for ARM (the processor is an ARM926EJ-S) and running the app on a MontaVista Linux distribution.
Below are the flags I'm using:
-O1 -fno-unroll-loops -fno-merge-constants -fno-omit-frame-pointer -fno-toplevel-reorder \
-fno-defer-pop -fno-function-cse -Wuninitialized -Wstrict-aliasing=3 -Wstrict-overflow=3 \
-fsigned-char -march=armv5te -mtune=arm926ej-s -ffast-math
As soon as I strip the -O1 flag and recompile/relink the application I get the proper output results. As you can see from the flags I tried to disable any optimization I thought it might cause problems but still no luck.
Does anyone have any pointers on how I could further tackle this problem?
Thanks
Generally speaking, if you say "optimization breaks my program", 99.9% of the time it is your program that is broken. Enabling optimizations only uncovers the faults in your code.
You should also go easy on the optimization options. Only in very specific circumstances will you need anything else beyond the standard options -O0, -O2, -O3 and perhaps -Os. If you feel you do need more specific settings than that, heed the mantra of optimizations:
Measure, optimize, measure.
Never go by "gut feeling" here. Prove that a certain non-standard optimization option does significantly benefit your application, and understand why (i.e., understand exactly what that option does, and why it affects your code).
This is not a good place to navigate blindfolded.
And seeing how you use the most defensive option (-O1), then disable half a dozen optimizations, and then add -ffast-math, leads me to assume you're currently doing just that.
Well, perhaps one-eyed.
But the bottom line is: If enabling optimization breaks your code, it's most likely your code's fault.
EDIT: I just found this in the GCC manual:
-ffast-math: This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.
This does say, basically, that your -O1 -ffast-math could indeed break correct code. However, even if taking away -ffast-math removes your current problem, you should at least have an idea why. Otherwise you might merely trade your problem now for a problem at a more inconvenient moment later (like when your product breaks at your client's location). Is it really -ffast-math that was the problem, or do you have broken math code that -ffast-math merely uncovers?
-ffast-math should be avoided if possible. Just use -O1 for now and drop all the other optimisation switches. If you still see problems then it's time to start debugging.
Without seeing your code, it's hard to get more specific than "you probably have a bug".
There are two scenarios where enabling optimizations changes the semantics of the program:
there is a bug in the compiler, or
there is a bug in your code.
The latter is probably the most likely. Specifically, you probably rely on Undefined Behavior somewhere in your program. You rely on something that just so happens to be true when you compile using this compiler on this computer with these compiler flags, but which isn't guaranteed by the language. And so, when you enable optimizations, GCC is under no obligation to preserve that behavior.
Show us your code. Or step through it in the debugger until you get to the point where things go wrong.
I can't be any more specific. It might be a dangling pointer, uninitialized variables, breaking the aliasing rules, or even just doing one of the many things that yield undefined results (like i = i++)
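A minimal sketch of that kind of bug (made up for illustration): signed overflow is undefined, so an optimizing GCC may assume i + 1 > i is always true and turn this into an infinite loop, while at -O0 the value typically just wraps and the loop terminates:

    #include <climits>
    #include <cstdio>

    int main() {
        for (int i = INT_MAX - 2; i + 1 > i; ++i)  // UB once i == INT_MAX
            std::printf("%d\n", i);
    }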
Try to make a minimal test case. Rewrite the program, removing things that don't affect the error. It's likely that you'll discover the bug yourself in the process, but if you don't, you should have a one-screen example program you can post.
Incidentally, if, as others have speculated, it is -ffast-math which causes your trouble (i.e. compiling with just -O1 works fine), then it is likely you have some math in there you should rewrite anyhow. It's a bit of an over-simplification, but -ffast-math permits the compiler to rearrange computations as if they were abstract mathematical numbers - even though doing so on real hardware may cause slightly different results, since floating point numbers aren't exact. Relying on that kind of floating point detail is likely to be unintentional.
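To make that concrete, here is a minimal sketch (values chosen for illustration): under strict IEEE evaluation order the small addend is absorbed and the result is 0, but -ffast-math allows the compiler to reassociate the expression to (big - big) + small and print 1 instead:

    #include <cstdio>

    int main() {
        float big = 1e20f, small = 1.0f;
        float r = (big + small) - big;  // 0 under strict IEEE order
        std::printf("%g\n", r);         // may print 1 with -ffast-math
    }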
If you want to understand the bug, a minimal test-case is critical in any case.
Many times I work with optimized code (sometimes even involving vectorized loops) that contains bugs and such. How would one debug such code? I'm looking for any kind of tools or techniques. I use the following (possibly outdated) tools, so I'm looking to upgrade.
I use the following:
Since ddd cannot show the code, I use gdb plus its disassemble command to see the produced code; I can't really step through the program using this.
ndisasm
Thanks
It is always harder to debug optimised programs, but there are always ways. Some additional tips:
Make a debug build, and see if you get the same bug in a debug build. No point debugging an optimised version if you don't have to.
Use valgrind if on a platform that supports it. The errors you see may be harder to understand, but catching the problem early often simplifies debugging.
printf debugging is primitive, but sometimes it is the simplest way if you have a complex issue that only shows up in optimised builds.
If you suspect a timing issue (especially in a multithreaded program), roll your own version of assert which aborts or prints if the condition is violated (see the sketch after this list), and use it in a few select places, to rule out possible problems.
See if you can reproduce the problem with -O2 or -O3 enabled but without -fomit-frame-pointer, since omitting the frame pointer makes code very hard to debug. That might give you enough information to find the cause of your problem.
Isolate parts of your code, build a test-suite, and see if you can identify any testcases which fail. It is much easier to debug one function than the whole program.
Try turning off optimisations one by one with the -fno-X options. This might help you find common problems like strict aliasing problems.
Turn on more compiler warnings. Some things, like strict aliasing problems, can generate compiler warnings if they create a difference in behaviour between different optimisation levels.
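A minimal sketch of the hand-rolled assert mentioned in the list above (the macro name is made up; unlike the standard assert, it stays active regardless of NDEBUG or optimization level):

    #include <cstdio>
    #include <cstdlib>

    #define CHECK_ALWAYS(cond)                                        \
        do {                                                          \
            if (!(cond)) {                                            \
                std::fprintf(stderr, "check failed: %s (%s:%d)\n",    \
                             #cond, __FILE__, __LINE__);              \
                std::abort();                                         \
            }                                                         \
        } while (0)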
When debugging release builds you can put in __asm nop; statements as placeholders for breakpoints (int 3). This is nice as you can guarantee breakpoint locations without messing up compiler optimizations or writing printf/cout statements.
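Note that the __asm syntax is MSVC-specific; a rough GCC/Clang equivalent is a GNU inline-asm nop, which optimization passes leave in place (a sketch; the function name is made up, and int3 is x86-only):

    inline void breakpoint_anchor() {
        asm volatile("nop");  // survives optimization; break on this symbol
        // to force an actual trap: asm volatile("int3"); or __builtin_trap();
    }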
It's always easier to debug a non-optimized version, of course. Failing that, disassembly of the code can be helpful. Other techniques I've used include partially de-optimizing the code by forcing intermediate results to be printed or logged, or changing a critical variable to "volatile" so I can at least look at its value in the debugger.
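The volatile trick looks roughly like this (names made up for illustration): the store cannot be optimized away, so the intermediate value remains inspectable even in an optimized build:

    volatile double checkpoint;  // kept in memory even at -O2/-O3

    double compute(double x) {
        double t = x * x + 1.0;
        checkpoint = t;          // this store must happen, so t stays visible
        return t * 2.0;
    }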
Chances are what you call optimized code is scrambled to shave cycles (which makes debugging hard) but is not really very optimized. Here is an example of what I mean.
I would turn off the compiler optimization, debug and tune it yourself, and then turn compiler optimization back on if the code has hotspots that are actually in code the compiler sees (not in outside libraries). (I define a hotspot as a part of code where the PC is often found. That automatically exempts loops containing function calls because they steal away the PC.)
If I use "-O2" flag, the performance improves, but the compilation time gets longer.
How can I decide, whether to use it or not?
Maybe -O2 makes the most difference in certain types of code (e.g. math calculations?), and I should use it only for those parts of the project?
EDIT: I want to emphasize the fact that setting -O2 for all components of my project changes the total compilation time from 10 minutes to 30 minutes.
I would recommend using -O2 most of the time. Benefits include:
Usually reduces size of generated code (unlike -O3).
More warnings (some warnings require analysis that is only done during optimization; see the sketch after this list).
Often measurably improved performance (which may not matter).
If release-level code will have optimization enabled, it's best to have optimization enabled throughout the development/test cycle.
Source-level debugging is more difficult with optimizations enabled; occasionally it is helpful to disable optimization when debugging a problem.
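As an example of the warnings point above (a sketch; exact diagnostics vary by GCC version), -Wmaybe-uninitialized needs data-flow analysis that only runs when optimizing, so this typically warns at -O2 -Wall but stays silent at -O0 -Wall:

    int f(int c) {
        int x;
        if (c > 0)
            x = 42;
        return x;  // "may be used uninitialized" at -O2 -Wall
    }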
I'm in bioinformatics so my advice may be biased. That said, I always use the -O3 switch (for release and test builds, that is; not usually for debugging). True, it has certain disadvantages, namely increasing compile-time and often the size of the executable.
However, the first factor can be partially mitigated by a good build strategy and other tricks reducing the overall build time. Also, since most of the compilation is really I/O bound, the increase of compile time is often not that pronounced.
The second disadvantage, the executable's size, often simply doesn't matter at all.
Never.
Use -O3 -Wall -Werror -std=[whatever your code base should follow]
Always, except when you're programming and just want to test something you just wrote.
We usually have our build environment set up so that we can build debug builds that use -O0 and release builds that use -O3 (the build environment preserves the objects and libraries of all configurations, so that one can switch easily between configurations). During development one mostly builds and runs the debug configuration for faster build speed (and more accurate debug information) and less frequently also builds and tests the release configuration.
Is the increased compilation time really noticeable? I use -O2 all the time as the default; anything less just leaves a lot of "friction" in your code. Also note that the optimization levels -O1 and -O2 tend to be the best tested, as they are the most interesting. -O0 tends to be more buggy, and you can debug pretty well at -O2 in my experience. Provided you have some idea about what a compiler can do in terms of code reordering, inlining, etc.
-Werror -Wall is necessary.