When debugging a release build, the debugger can't expand certain C++ variables. What kinds of variables can't be expanded, and why? I can understand that a release DLL has been built with extra optimizations, but I'm not sure that's the only reason. Also, is there anything that can be done to view those values?
Even assuming that you have debug information in the build, debugging a release (optimized) build is hard in general. The optimizer can mangle the code to the point where you might not recognize it.
It can remove variables altogether and hide them from the debugger (since the variable no longer exists, the debugger cannot show it). It might keep a variable but temporarily reuse its space for register spills, so you will see the value at your variable's memory location jump to some random value. The control flow might be reordered, so a variable may hold the right value once initialized, but its initialization might have been pushed further down and not yet executed...
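As a minimal illustration (contrived code, not from the original question), consider a variable the optimizer can fold away entirely:

#include <cstdio>

int main() {
    int sum = 0;                  // with -O2, "sum" may live only in a register,
    for (int i = 0; i < 10; ++i)  // or be constant-folded away entirely: the
        sum += i;                 // compiler can compute 45 at compile time
    std::printf("%d\n", sum);     // the debugger then has nothing to show for "sum"
    return 0;
}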
If you can reproduce the issue in a debug build, I'd start there. If not, good luck: don't trust anything you see, but try to extract as much information as you can from the data points you have available.
When you build in "Debug" mode, the compiler (and linker) adds extra information about things like variables, their names, the source files used, line number information, etc. This is missing when compiling in "Release" mode, but it can be added back by changing the project settings.
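For example, here is a sketch using standard MSVC flags: /Zi makes the compiler emit debug information and /DEBUG makes the linker produce a PDB, while /OPT:REF and /OPT:ICF restore the release-style linker optimizations that /DEBUG would otherwise turn off.

cl /O2 /Zi /DNDEBUG myfile.cpp /link /DEBUG /OPT:REF /OPT:ICF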
Related
I'm writing a program in C++/Qt which contains a graph file parser. I use g++ to compile the project.
While developing, I am constantly comparing the performance of my low level parser layer between different compiler flags regarding optimization and debug information, plus Qt's debug flag (turning on/off qDebug() and Q_ASSERT()).
Now I'm facing a problem where the only correctly functioning build is the one without any optimization. All other versions, even with -O1, seem to behave differently: they crash due to unsatisfied assertions, which are satisfied when compiled without any -O... flag. The code doesn't produce any compiler warnings, even with -Wall.
I am fairly sure that there is a bug in my program which only becomes harmful with optimization enabled. The problem is: I can't find it even when debugging the program. The parser seems to read wrong data from the file. Simple test cases run perfectly, but on a bigger test case (a route calculation on a graph read directly from a file) there is an incorrect read from the file which I can't explain.
Where should I start tracking down this undefined behavior? Which optimization methods could be involved in this difference in behavior? (I could enable all flags one after the other, but I don't know many compiler flags besides -O..., and I know there are a lot of them, so this would take a very long time.) As soon as I know what type of bug it is, I am sure I'll find it sooner or later.
You can help me a lot if you can tell me which compiler optimization methods are possible candidates for such problems.
There are a few classes of bugs that commonly arise in optimized builds but often don't show up in debug builds (a sketch of each follows the list).
Uninitialized variables. The compiler can catch some but not all. Look at all your constructors, look at global variables, etc. Look particularly for uninitialized pointers. In a debug build, memory is typically filled with a known pattern (or zeroed, depending on the toolchain), but in a release build it is left untouched.
Use of temporaries that have gone out of scope, for example when you return a reference to a local temporary from a function. These often work in debug builds because the stack is padded out more, so the temporaries tend to survive on the stack a little longer.
Array overruns when writing to temporaries, for example if you create an array as a temporary in a function and then write one element beyond its end. Again, the stack has extra space in debug builds (for debugging information), so your overrun may not hit program data.
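Hedged sketches of the three classes above (contrived code, for illustration only):

int useUninitialized() {
    int x;                        // (1) never initialized; a debug-build fill
    return x + 1;                 //     pattern hides this, release returns garbage
}

int* dangling() {
    int local = 42;
    return &local;                // (2) address of a local escapes; the padded
}                                 //     debug stack often lets this appear to work

void overrun() {
    int buf[4];
    for (int i = 0; i <= 4; ++i)  // (3) writes buf[4], one past the end; debug
        buf[i] = i;               //     padding absorbs it, release corrupts data
}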
There are optimizations you can disable from the optimized build to help make debugging the optimized version easier.
-g -O1 -fno-inline -fno-loop-optimize -fno-if-conversion -fno-if-conversion2 \
-fno-delayed-branch
This should make stepping through your code in the debugger a little easier to follow.
Another suggestion: if your existing assertions don't give you enough information about what is causing the problem, consider adding more assertions. If you are worried about performance or assertion clutter, wrap them in a macro of your own. That lets you distinguish the debugging assertions from the ones you originally added, so they can be disabled or removed from your code later.
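A minimal sketch of such a wrapper (the PARSER_ASSERT name and the controlling define are made up for illustration):

#include <cassert>

// Define MY_PARSER_CHECKS in the builds where the extra assertions should
// run; leave it undefined to compile them out entirely.
#ifdef MY_PARSER_CHECKS
#define PARSER_ASSERT(cond) assert(cond)
#else
#define PARSER_ASSERT(cond) ((void)0)
#endif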
1) Use valgrind on the broken version. (For that matter, try valgrind on the working version, maybe you'll get lucky.)
2) Build the system with "-O1 -g" and step through your program with gdb. At the crash, which variable has an incorrect value? Re-run your program and note when that variable is written to (or when it isn't but should have been).
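For step 2, a watchpoint can pinpoint the offending write; a sketch of such a session (the file, line, and variable names are placeholders):

g++ -O1 -g myprog.cpp -o myprog
gdb ./myprog
(gdb) break parser.cpp:123      # stop just before the suspect code runs
(gdb) run
(gdb) watch suspectVariable     # break on every write to the variable
(gdb) continue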
I've seen posts talk about what might cause differences between Debug and Release builds, but I don't think anybody has addressed from a development standpoint what is the most efficient way to solve the problem.
The first thing I do when a bug appears in the Release build but not in Debug is run my program through valgrind in hopes of a better analysis. If that reveals nothing -- and this has happened to me before -- then I try various inputs in hopes of getting the bug to surface in the Debug build as well. If that fails, I go back through the revision history to find the change at which the two builds started to diverge in behavior. And finally I guess I would resort to print statements.
Are there any best software engineering practices for efficiently debugging when the Debug and Release builds differ? Also, what tools are there that operate at a more fundamental level than valgrind to help debug these cases?
EDIT: I notice a lot of responses suggesting some general good practices such as unit testing and regression testing, which I agree are great for finding any bug. However, is there something specifically tailored to this Release vs. Debug problem? For example, is there such a thing as a static analysis tool that says "Hey, this macro or this code or this programming practice is dangerous because it has the potential to cause differences between your Debug/Release builds?"
One other "Best Practice", or rather a combination of two: Have Automated Unit Tests, and Divide and Conquer.
If you have a modular application, and each module has good unit tests, then you may be able to quickly isolate the errant piece.
The very existence of two configurations is a problem from a debugging point of view. Proper engineering ensures that the system behaves the same on the ground as in the air, and achieves this by reducing the number of ways in which the system can tell the difference.
Debug and Release builds differ in 3 aspects:
_DEBUG define
optimizations
different version of the standard library
The best way around this, and the way I often work, is this:
Disable optimizations where performance is not critical; debugging matters more there. Most important is to disable automatic function inlining, keep the standard stack frame, and disable variable-reuse optimizations, as these hinder debugging the most.
Monitor the code for dependence on the DEBUG define. Never use debug-only asserts or any other tools sensitive to the DEBUG define (see the sketch after this list).
By default, compile and work in release.
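One way to honor the "no debug-only asserts" rule is a check that stays enabled in every configuration; a sketch (the ALWAYS_ASSERT name is made up):

#include <cstdio>
#include <cstdlib>

// Active in debug *and* release, so both builds enforce the same contracts.
#define ALWAYS_ASSERT(cond) \
    do { if (!(cond)) { \
        std::fprintf(stderr, "check failed: %s (%s:%d)\n", \
                     #cond, __FILE__, __LINE__); \
        std::abort(); \
    } } while (0)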
When I come across a bug that only happens in release, the first thing I look for is use of an uninitialized stack variable in the code I am working on. On Windows, the debug C runtime automatically initialises stack variables to a known bit pattern (0xCDCDCDCD or similar). In release, stack variables contain whatever value was last stored at that memory location, which is going to be an unexpected value.
Secondly, I try to identify what is different between the debug and release builds. I look at the compiler optimization settings passed in the Debug and Release configurations; you can see these on the Command Line property page (the last page of the compiler settings in Visual Studio). I start with the release config and change the command-line arguments passed to the compiler, one item at a time, until they match the command line used for compiling in debug. After each change I run the program and try to reproduce the bug. This often leads me to the particular setting that causes the bug to happen.
A third technique is to take a function that is misbehaving and disable optimizations around it using a pragma. This lets you run the program in release with that particular function compiled as if in debug. The behaviour of a program built this way can teach you more about the bug:
#pragma optimize( "", off )
int foo() {
    return 1;
}
#pragma optimize( "", on )
From experience, the problems are usually stack initialization, memory scrubbing in the memory allocator, or strange #define directives causing the code to be compiled incorrectly.
The most obvious cause is simply the use of #ifdef and #ifndef directives associated with DEBUG or a similar symbol that changes between the two builds.
Before going down the debugging road (which is not my personal idea of fun), I would inspect both command lines and check which flags are passed in one mode and not the other, then grep my code for those flags and check their uses.
One particular issue that comes to mind are macros:
#ifdef _DEBUG
#define CHECK(CheckSymbol) { if (!(CheckSymbol)) throw CheckException(); }
#else
#define CHECK(CheckSymbol)
#endif
also known as a soft-assert.
Well, if you call it with an expression that has a side effect, or rely on it to guard a function (contract enforcement) while something catches the exception it throws in debug and ignores it... you will see differences in release :)
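For instance (hypothetical parser call), this is the trap:

// Debug: advance() runs and its return value is checked.
// Release: CHECK(...) expands to nothing, so advance() is never called
// and the parser silently stays one token behind.
CHECK(parser.advance());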
When debug and release differ, it means one of two things:
your code depends on _DEBUG or similar macros (defined when compiling a debug version, with no optimizations)
your compiler has an optimization bug (I've seen this a few times)
You can easily deal with (1) by modifying the code, but with (2) you will have to isolate the compiler bug. After isolating it, you do a little "code rewriting" to get the compiler to generate correct binary code (I have done this a few times; the most difficult part is isolating the bug).
I can say that when debug information is enabled for the release version, the debugging process works... (though because of optimizations you might see some "strange" jumps when stepping through the code).
You will need some "black-box" tests for your application; valgrind is one solution here. These help you find differences between release and debug, which is very important.
The best solution is to set up something like automated unit testing to thoroughly test all aspects of the application (not just individual components, but real world tests which use the application the same way a regular user would with all of the dependencies). This allows you to know immediately when a release-only bug has been introduced which should give you a good idea of where the problem is.
The good practice of actively monitoring and seeking out problems beats any tool that helps you fix them long after they happen.
However, when you have one of those cases where it's too late: too many builds have gone by, can't reproduce consistently, etc. then I don't know of any one tool for the job. Sometimes fiddling with your release settings can give a bit of insight as to why the bug is occurring: if you can eliminate optimizations which suddenly make the bug go away, that could give you some useful information about it.
Release-only bugs can fall into various categories, but the most common ones (aside from something like a misuse of assertions) are:
1) Uninitialized memory. I use this term rather than "uninitialized variables", since a variable may be initialized yet still point to memory which hasn't been initialized properly. For this, memory diagnostic tools like Valgrind can help.
2) Timing (e.g. race conditions). These can be a nightmare to debug, but there are multithreading profilers and diagnostic tools which can help. I can't suggest any off the top of my head, but Coverity Integrity Manager is one example.
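A minimal sketch of category 2, a data race whose symptoms can shift between debug and release because of timing and optimization differences (contrived code):

#include <thread>

int counter = 0;                    // not std::atomic: a data race

void work() {
    for (int i = 0; i < 100000; ++i)
        ++counter;                  // unsynchronized read-modify-write
}

int main() {
    std::thread a(work), b(work);
    a.join();
    b.join();
    // Expected 200000, but the result varies between runs; a slower debug
    // build may mask the race, while the optimizer can combine or hoist
    // the increments and change the failure mode in release.
}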
I have run a code formatting tool over my C++ files. It is supposed to make only formatting changes. But when I built my code afterwards, I saw that the object file size for some source files had changed. Since my files are very big and the tool has changed almost every line, I don't know whether it has done something disastrous. Now I am worried about checking this code into the repo, as the formatting tool might have introduced runtime errors. My question is: can the size of an object file change if only the code formatting is changed?
The brief answer is no :)
I would not check your code into the repo without thoroughly checking it first (review, testing).
Pure formatting changes should not change the object file size, unless you've done a debug build (in which case all bets are off). A release build should be not just the same size, but barring your using __DATE__ and such to insert preprocessor content, it should be byte-for-byte the same as well.
If the "reformatting" tool has actually done some micro-optimizations for you (caching repeated access to invariants in local vars, or undoing your having done that unnecessarily), that might affect the optimization choices the compiler makes, which can have an effect on the object file. But I wouldn't assume that that was the case.
If the __LINE__ macro is used (for example, stringized into messages), reformatting might produce longer strings. How different are the sizes?
(This macro often hides in new and assert messages in debug builds.)
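For example, assert typically stringizes its condition and embeds __FILE__ and __LINE__, so reformatting that moves statements around can change the data emitted into a debug object file (a sketch):

#include <cassert>

// The stringized condition "p != nullptr" and the file name end up in the
// object file; depending on the implementation, the line number may be
// embedded as an integer or stringized, so moving this statement to a
// different line can change the emitted data.
void check(int* p) { assert(p != nullptr); }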
Just formatting the code should not change the size of the object file.
It might if you compile with debugging symbols, as it might have added more line number information. Normally it wouldn't though, as has already been pointed out.
Try comparing object files built without debugging symbols.
Try to find a comparison tool that won't care about the formatting changes (perhaps "diff --ignore-all-space") and check using that before checking in.
I'm using the Intel compiler with Visual Studio, and I can't seem to inspect values that are in maps while debugging. I get a quick preview which shows the size of the map, but the elements only show up as "(error)". To illustrate with a quick example, I've generated a map with a single entry: myMapVariable[6]=1;
If I mouse over it, I get "myMapVariable 1", and in the watch window I get the same thing; expanding the plus gives a single child entry which says name = "(error)" and value = 0 (which is wrong).
I've added a line to my autoexp.dat debugger file which shows the raw member variables under a child called [raw members]. I've pretty much reached the limits of my ability to dig into this further without help, so I'd ask if anyone here can provide some insight.
You're most likely using aggressive optimization settings; at least, your description is typical of that sort of thing. In that case, the compiler is actively keeping hot values in registers, and it may be that, at the point where you're stopped, the values needed to properly visualize the entire map have already been discarded and overwritten by something else that suffices (say, a pointer to the current node). I would imagine that Intel C++, which is well known for its high-quality optimization, does this sort of thing even more often than VC++ (but I've seen it with the latter often enough as well).
Consider recompiling the project in the Debug configuration (which disables the optimizer) and see if that helps.
My only suggestion is to make sure the map is initialized and in scope. Otherwise, I'm not sure, I've never seen this but I use VS2008 now.
I was never able to fix this problem with the Intel compiler, but I have since moved to the latest Visual Studio compiler (VS2010) and this is no longer a problem. I'm marking this as the answer because I don't want to leave unanswered questions lying around.
We have a large C++ application, which sometimes we need to run as a debug build in order to investigate bugs. The debug build is much much slower than the release build, to the point of being almost unusable.
What tricks are available for making MSVC Debug builds execute faster without sacrificing too much on the debugability?
Use #pragma optimize("", off) at the top of selected files that you want to debug in release. This gives better stack trace/variable view.
Works well if it's only a few files you need to chase the bug in.
Why don't you just switch on debug information in your release configuration?
We turned off Iterator debugging with the preprocessor symbols:
_HAS_ITERATOR_DEBUGGING=0
_SECURE_SCL=0
It helped a bit, but it was still not as fast as we'd like. We also ended up making our debug build more release-like by defining NDEBUG instead of _DEBUG. There were a couple of other options we changed too, but I don't remember them.
It's unfortunate that we needed to do all this, but our application has a certain amount of work that must be done every 50 ms or it's unusable. VS2008 out of the box would give us ~60 ms iterations in debug and ~6 ms in release. With the tweaks mentioned above, we could get debug down to ~20 ms or so, which is at least usable.
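If these symbols are set in code rather than in the project settings, they must appear before any standard-library include and have the same value in every translation unit, or the object files become incompatible (a sketch):

// Must precede every standard-library include, in every translation unit.
#define _HAS_ITERATOR_DEBUGGING 0
#define _SECURE_SCL 0
#include <map>
#include <vector>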
Profile the app and see what is taking the time. You should then be able to see which debug settings need to be tuned.
Are you using MFC?
In my experience, the main thing that can make a debug version slow is the class validation routines, which are usually disabled in release. If the data structure is at all tree-like, it can end up re-validating subtrees hundreds of times.
Regardless, if it is, say, 10 times slower than the release build, that means it is spending 1/10 of its time doing what's necessary and 9/10 doing something else. If, while you're waiting for it, you just hit the "Pause" button and look at the call stack, chances are 9 out of 10 that you will see exactly what the problem is.
It's a quick & dirty, but effective way to find performance problems.
Create a ReleaseWithSymbols configuration, that defines NDEBUG and has no optimisations enabled. This will give you better performance while maintaining accurate symbols for debugging.
There are several differences between debug builds and release builds that influence both debuggability and speed. The most important are the _DEBUG/NDEBUG define, the compiler optimizations, and the creation of debug information.
You might want to create a third solution configuration and play around with these settings. For example, adding debug information to a release build doesn't really decrease performance, but you already get a sensible stack trace so you know which function you are in; only the line information is unreliable because of the compiler optimizations.
If you want reliable line information, go on and turn off optimizations. This will slow down the execution a bit but this will still be faster than normal debug as long as the _DEBUG define is not set yet. Then you can do pretty good debugging, only everything that has "#ifdef _DEBUG" or similar around it won't be there (e.g. calls to assert etc.).
Hope this helps,
Jan
Which VS are you using? We moved from VS.NET to VS2008 recently, and I experienced the same slowness while debugging on a high-end machine with a >500k LOC project. It turned out the IntelliSense database had become corrupted and would constantly try to update itself but get stuck somewhere. Deleting the .ncb file solved the problem.