So I have just followed the advice on enabling debug symbols for Release mode. After enabling debug symbols, disabling optimization, and finding that breakpoints do work if symbols are compiled in a release configuration, I find myself wondering...
Isn't the purpose of Debug mode to help you to find bugs?
Why bother with Debug mode if it lets bugs slip past you?
Any advice?
In truth there is no such thing as a release mode or a debug mode; there are just different configurations with different options enabled. Release "mode" and Debug "mode" are just common configurations.
What you've done is to modify the release configuration to enable some options which are usually enabled in the debug configuration.
Enabling these options makes the binaries larger and slower, depending on which ones you enable.
The more of these options you enable, the easier it is to find bugs. I think your question should really be "Why bother with release mode?" The answer to that is that it's smaller and faster.
Debug mode doesn't "let bugs slip past you". It inserts checks to catch a large number of bugs, but the presence of these checks may also hide certain other bugs. All the error-checking code catches a lot of errors, but it also acts as padding, and may hide subtle bounds errors.
So that in itself should be plenty of reason to run both. MSVC performs a lot of additional error checking in Debug mode.
In addition, many debug tools, such as assert, rely on NDEBUG not being defined, which is the case in debug builds but not, by default, in release builds.
Optimisations will be turned off, which makes debugging easier (otherwise code can be reordered in strange ways). Also conditional code such as assert() etc. can be included.
Apart from your application not being very debuggable in release mode, the MSVC runtime libraries aren't as nice there either, and neither are some other libraries.
The debug heap, for instance, adds no-man's land markers around allocated memory to trap buffer overruns. The standard library used by MSVC asserts for iterator validity. And much more.
Bugs caused by optimisers aren't unheard of, but normally they hint at a deeper problem; for instance, not using volatile when you need to will produce apparent optimiser bugs (the optimiser eliminates comparisons and uses cached results instead).
At the end of the day, including debug symbols in early releases can help you trace down bugs after deployment.
Now, to answer your questions directly:
Debug mode has other things, like assert()s which are stripped in release mode.
Even if you do all your testing in release mode, bugs will still slip by. In fact, some bugs (like the volatile bug above) are merely hidden in debug mode: they still exist, they are just harder to trigger.
Including full symbols with your application includes significant information about the build machine (paths etc. are embedded).
The advice is to include "PDB only" symbols with release builds (don't include file, line, and local-variable symbols) with optimisation on, while the debug build has no optimisation and full symbols.
And (as noted in another answer) common sub-expression elimination and instruction reordering can make debugging interesting (step-next jumps to lines n, n+2, n+1...).
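For MSVC, a command line along these lines captures that recommendation (an illustrative sketch using documented flags, not taken from the original answer):

```shell
# Illustrative sketch only: optimized release build that still emits a PDB.
#   /O2          full optimizations       /Zi  put debug info in a PDB
#   /DNDEBUG     strip assert() etc.
#   /DEBUG       make the linker write the PDB
#   /OPT:REF,ICF restore the release linker defaults that /DEBUG disables
cl /O2 /Zi /DNDEBUG app.cpp /link /DEBUG /OPT:REF /OPT:ICF
```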
Optimization is a nightmare for debugging. I once had an application with this for loop:
for (int i = 0; i < 10; i++)
{
    // use i for something
}
In the debugger, i always showed as 0, but printing it to the console showed that it did increase.
Related
I have C++ code that segfaults with optimization flags but not with debug flags. This precludes me from using a debugger. Are there any other ways or guidelines, apart from a barrage of cout statements?
I am on a *nix platform using the Intel 12.1 compilers, and I am quite certain it is a memory issue that I need to catch with valgrind. The only thing that puzzles me is why it does not show up in debug mode.
Valgrind is a useful tool on Unix-based systems for troubleshooting release-mode executables (gflags and WinDbg are useful on Windows).
I also recommend not giving up on your debugger - you can run non-debug executables inside a debugger and still get useful information about the segfault. Often you can also add some level of debug information, even with optimizations turned on, to give you more context. You might also check for any debug-mode heap-checking facility the Intel compiler provides, since these errors can go undetected in debug builds (due to different memory management).
Also note that there are usually multiple levels of optimization you can use for "release mode". You might try backing down to a less aggressive optimization level, and see if the error still occurs.
You might also check the Intel compiler web site to see if there have been any bug fixes or bug reports regarding optimization for the compiler version you're using.
If none of these help, you can try using an alternate compiler (unless you're using something Intel-specific) to see if the problem is compiler-related or not.
Finally, as klm123 noted, commenting out blocks is a good way to localize the problem.
I have a program with a quite simple "load up" function that takes about 30 seconds (3M triangles that go into std containers).
It works perfectly well.
The rest doesn't always work (it is not finished), so I debug a lot and make a lot of changes, which means restarting quite often.
Is there any secret technique to compile the loader in release (which speeds up everything enormously) and leaving the rest as debug?
ps. I use MSVC 2005
Debug builds tend to be very slow on Visual C++. There are a few reasons for this:
the most obvious is that the code is not optimized
the memory allocation functions in the debug CRT library perform additional checks to detect heap corruption and other issues, so they are slower.
many STL functions perform additional validations and assertions in debug builds, which makes the use of STL containers very slow
I've had success debugging apps that make heavy use of memory and STL using the following method:
use the release build for debugging (yes, this works fine!).
configure your release builds to include debugging symbols; this makes the compiler write the .pdb file that the debugger uses
only for the files you intend to debug, set the compiler to no optimizations.
Note that the above works great to debug problems in your own logic, but it may not be the best idea if you are debugging memory corruption or other problems, since you are eliminating all the extra debug code that the CRT provides for these types of issues.
I hope this helps!
Mixing debug and release builds tends to go horribly wrong.
But there's no reason why you shouldn't turn on optimisation for some selected source files even in a debug build - and the optimisation should give you the performance improvements.
When compiler optimizes code, what is the scope of the optimizations? Can an optimization
a) span more than a set of nested braces?
b) span more than a function?
c) span more than a file?
We are chasing an obscure bug that seems to derive from optimization. The code crashes in release mode but not in debug mode. Of course we are aware this could be heap corruption or another memory problem, but we have been chasing it for a while now. One avenue we are considering is to selectively compile our files in debug mode until the problem goes away. In other words:
a) Start with all file compiled in release mode
b) Compile 1/2 the files in debug mode
if the crash is still seen, take half of the release-compiled files and compile them in debug mode
if the crash is not seen, take half of the files compiled in debug mode and compile them in release mode
repeat until we narrow in on suspect files
this is a binary search to narrow in on problem files
We are aware that if this is a memory issue, simply doing this mixed compilation may make the bug go away, but we are curious whether we can narrow in on the problem files.
The outstanding question though is what is the scope of optimizations - can they span more than one file?
An optimization can do literally anything as long as it doesn't change the semantics of the behaviour defined by the language. That means the answers to your first questions (a), (b), and (c) are all yes. In practice, most compilers aren't that ambitious, but there are certainly some examples. Clang and LLVM have flags for link time optimization that allow optimizations to span pretty much the entire program. MSVC has a similar /GL flag that allows whole-program optimization.
Often the causes of these sorts of failures are uninitialized variables. Static analysis tools can be very helpful in finding problems like the one you're describing. I'm not sure your binary searching via optimizations is going to help you track down too much, though it is possible. Your bug could be the result of pretty complicated interactions between modules.
Good luck!
You can approximately identify the problem files from the call stack of the crash in release mode. Then try rebuilding them without optimizations - this is much simpler than a binary search.
To learn about compiler optimization, one easy resource is Wikipedia. It briefly explains some important and widely implemented optimizations found in almost all modern compilers.
Maybe, you would like to read the wiki entry first, especially these sections:
Types of optimizations
Specific techniques
I think you are asking the wrong question.
It is unlikely (unless you are doing something special) that the optimizer is your problem.
The problem is most likely an uninitialized variable in your code.
When you compile in debug mode, most compilers will initialize all memory to a specific pattern, so that during debugging you can see what has happened to the memory (was it allocated, was it de-allocated, did it come from the stack, has the stack been popped back, etc.).
Thus in debug mode any uninitialized memory has a specific pattern that can be recognized by the debugger. The exact pattern depends on the compiler, but this can make sloppy code work, as uninitialized pointers are NULL, integer counters start at 0, etc.
When you compile in release mode, all of this extra initialization is turned off. If you did not explicitly initialize the memory, it is left in a random state. This is why debug and release versions of an application behave differently.
The easy way to check for this is to check your compiler warnings.
Make sure that all variables are initialized before use (it will be in the warnings).
I've seen posts talk about what might cause differences between Debug and Release builds, but I don't think anybody has addressed from a development standpoint what is the most efficient way to solve the problem.
The first thing I do when a bug appears in the Release build but not in Debug is I run my program through valgrind in hopes of a better analysis. If that reveals nothing, -- and this has happened to me before -- then I try various inputs in hopes of getting the bug to surface also in the Debug build. If that fails, then I would try to track changes to find the most recent version for which the two builds diverge in behavior. And finally I guess I would resort to print statements.
Are there any best software engineering practices for efficiently debugging when the Debug and Release builds differ? Also, what tools are there that operate at a more fundamental level than valgrind to help debug these cases?
EDIT: I notice a lot of responses suggesting some general good practices such as unit testing and regression testing, which I agree are great for finding any bug. However, is there something specifically tailored to this Release vs. Debug problem? For example, is there such a thing as a static analysis tool that says "Hey, this macro or this code or this programming practice is dangerous because it has the potential to cause differences between your Debug/Release builds?"
One other "Best Practice", or rather a combination of two: Have Automated Unit Tests, and Divide and Conquer.
If you have a modular application, and each module has good unit tests, then you may be able to quickly isolate the errant piece.
The very existence of two configurations is a problem from a debugging point of view. Proper engineering would make the system behave the same way on the ground as in the air, and achieve this by reducing the number of ways in which the system can tell the difference.
Debug and Release builds differ in 3 aspects:
_DEBUG define
optimizations
different version of the standard library
The best way around, the way I often work, is this:
Disable optimizations where performance is not critical; debugging is more important. Most importantly, disable automatic function inlining, keep the standard stack frame, and disable variable-reuse optimizations - these hinder debugging the most.
Monitor the code for dependence on the DEBUG define. Never use debug-only asserts or any other tools sensitive to the DEBUG define.
By default, compile and work in /release.
When I come across a bug that only happens in release, the first thing I always look for is use of an uninitialized stack variable in the code I am working on. On Windows, the debug C runtime will automatically initialise stack variables to a known bit pattern, 0xcdcdcdcd or something. In release, stack variables will contain whatever value was last stored at that memory location, which is going to be an unexpected value.
Secondly, I try to identify what is different between the debug and release builds. I look at the compiler optimization settings passed in the Debug and Release configurations; you can see these on the last property page of the compiler settings in Visual Studio. I start with the release config and change the command-line arguments passed to the compiler one item at a time until they match the command line used for compiling in debug. After each change I run the program and try to reproduce the bug. This often leads me to the particular setting that causes the bug to happen.
A third technique can be to take a function that is misbehaving and disable optimizations around it using the pre-processor. This will allow you run the program in release with the particular function compiled in debug. The behaviour of the program which has been built in this way will help you learn more about the bug.
#pragma optimize( "", off )
int foo() {
return 1;
}
#pragma optimize( "", on )
From experience, the problems are usually stack initialization, memory scrubbing in the memory allocator, or strange #define directives causing the code to be compiled incorrectly.
The most obvious cause is simply the use of #ifdef and #ifndef directives associated with DEBUG or a similar symbol that changes between the two builds.
Before going down the debugging road (which is not my personal idea of fun), I would inspect both command lines and check which flags are passed in one mode and not the other, then grep my code for those flags and check their uses.
One particular issue that comes to mind are macros:
#ifdef _DEBUG
#define CHECK(CheckSymbol) { if (!(CheckSymbol)) throw CheckException(); }
#else
#define CHECK(CheckSymbol)
#endif
also known as a soft-assert.
Well, if you call it with an expression that has side effects, or rely on it to guard a function (contract enforcement), and something catches the exception it throws in debug and ignores it... you will see differences in release :)
When debug and release differ it means:
your code depends on the _DEBUG or similar macros (defined when compiling a debug version - no optimizations)
your compiler has an optimization bug (I have seen this a few times)
You can easily deal with (1) by modifying the code, but with (2) you will have to isolate the compiler bug. After isolating the bug you do a little "code rewriting" to get the compiler to generate correct binary code (I did this a few times - the most difficult part is isolating the bug).
I can say that with debug information enabled for the release version, the debugging process works... (though because of optimizations you might see some "strange" jumps when stepping through).
You will need some "black-box" tests for your application - valgrind is one solution in this case. Such tools help you find differences between release and debug (which is very important).
The best solution is to set up something like automated unit testing to thoroughly test all aspects of the application (not just individual components, but real world tests which use the application the same way a regular user would with all of the dependencies). This allows you to know immediately when a release-only bug has been introduced which should give you a good idea of where the problem is.
The good practice of actively monitoring and seeking out problems beats any tool that helps you fix them long after they happen.
However, when you have one of those cases where it's too late: too many builds have gone by, can't reproduce consistently, etc. then I don't know of any one tool for the job. Sometimes fiddling with your release settings can give a bit of insight as to why the bug is occurring: if you can eliminate optimizations which suddenly make the bug go away, that could give you some useful information about it.
Release-only bugs can fall into various categories, but the most common ones (aside from something like a misuse of assertions) is:
1) Uninitialized memory. I use this term over uninitialized variables as a variable may be initialized but still be pointing to memory which hasn't been initialized properly. For this, memory diagnostic tools like Valgrind can help.
2) Timing (ex: race conditions). These can be a nightmare to debug, but there are some multithreading profilers and diagnostic tools which can help. I can't suggest any off the bat, but there's Coverity Integrity Manager as one example.
We have a large C++ application, which sometimes we need to run as a debug build in order to investigate bugs. The debug build is much much slower than the release build, to the point of being almost unusable.
What tricks are available for making MSVC Debug builds execute faster without sacrificing too much on the debugability?
Use #pragma optimize("", off) at the top of selected files that you want to debug in release. This gives better stack trace/variable view.
Works well if it's only a few files you need to chase the bug in.
Why don't you just switch on debug information in your release configuration?
We turned off Iterator debugging with the preprocessor symbols:
_HAS_ITERATOR_DEBUGGING=0
_SCL_SECURE=0
It helped a bit, but it was still not as fast as we'd like. We also ended up making our debug build more release-like by defining NDEBUG instead of _DEBUG. There were a couple of other options we changed too, but I don't remember them.
It's unfortunate that we needed to do all this, but our application has a certain amount of work that must be done every 50ms or it's unusable. Out of the box, VS2008 gave us ~60ms times for debug and ~6ms for release. With the tweaks mentioned above we could get debug down to ~20ms or so, which is at least usable.
Profile the app and see what is taking the time. You should then be able to see which debug settings need to be tuned.
Are you using MFC?
In my experience, the main thing that can make a debug version slow is the class validation routines, which are usually disabled in release. If the data structure is at all tree-like, it can end up re-validating subtrees hundreds of times.
Regardless, if it is, say, 10 times slower than the release build, that means it is spending 1/10 of its time doing what's necessary, and 9/10 doing something else. If, while you're waiting for it, you just hit the "pause" button and look at the call stack, chances are 9/10 that you will see exactly what the problem is.
It's a quick & dirty, but effective way to find performance problems.
Create a ReleaseWithSymbols configuration, that defines NDEBUG and has no optimisations enabled. This will give you better performance while maintaining accurate symbols for debugging.
There are several differences between debug builds and release builds that influence both debuggability and speed. The most important are the _DEBUG/NDEBUG defines, compiler optimizations, and the creation of debug information.
You might want to create a third solution configuration and play around with these settings. For example, adding debug information to a release build doesn't really decrease performance, but you already get a sensible stack trace so you know which function you are in. Only the line information is unreliable, because of the compiler optimizations.
If you want reliable line information, go ahead and turn off optimizations. This will slow down execution a bit, but it will still be faster than a normal debug build as long as the _DEBUG define is not set. Then you can do pretty good debugging; only everything guarded by "#ifdef _DEBUG" or similar won't be there (e.g. calls to assert etc.).
Hope this helps,
Jan
Which VS are you using? We moved from VS.NET to VS2008 recently, and I experienced the same slowness while debugging on a high-end machine on a >500k LOC project. It turned out the IntelliSense database had become corrupted and would constantly try to update itself but get stuck somewhere. Deleting the .ncb file solved the problem.