How could running code in the debugger make it faster? - c++

It never happened to me. In Visual Studio, I have a part of code that is executed 300 times, I time it every iteration with the performance counter, and then average it.
If I run the code in the debugger, I get an average of 1.01 ms; if I run it without the debugger, I get 1.8 ms.
I closed all other apps, I rebooted, I tried it many times: Always the same timing.
I'm trying to optimize my code, but before throwing myself into changing it, I want to be sure of my timings, so that I have something to compare against.
What can cause that strange behaviour?
Edit:
Some clarification:
I'm running the same compiled piece of code: the release build. The only difference is how it is launched (F5 vs. Ctrl+F5).
So, compiler optimization should not be involved.
Since each calculated time was very small, I changed the way I benchmark: I'm now timing the 300 iterations as a whole and then dividing by 300. I get the same result.
About caching: the code is doing some image cross-correlation, with different images at each iteration. The steps of the processing are not modified by the data in the images. So, I think caching is not the problem.

I think I figured it out.
If I add a Sleep(3000) before running the tests, they give the same result.
I think it has something to do with the loading of misc. dlls. In the debugger, the dlls were loaded before any code was executed. Outside the debugger, the dlls were loaded on demand, and one or more were loaded after the timer was started.
Thanks all.
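For reference, a minimal sketch of a warmed-up timing loop like the one described, assuming the Win32 performance counter; processImage() is a hypothetical stand-in for one cross-correlation iteration:

#include <windows.h>
#include <cstdio>

void processImage() { /* stand-in for the real cross-correlation work */ }

int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    processImage();                     // warm-up: pays one-time costs (on-demand DLL loads) before timing

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 300; ++i)
        processImage();                 // time all 300 iterations as one block
    QueryPerformanceCounter(&stop);

    double totalMs = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    printf("average: %.3f ms\n", totalMs / 300.0);
    return 0;
}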

I don't think anyone has mentioned this yet, but the debug build may not only affect the way your code executes, but also the way the timer itself executes. This can lead to the timer being inaccurate / slower / definitely not reliable. I would recommend using a profiler as others have mentioned, and compare only similar configurations.

You are likely to get very erroneous results by doing it this way ... you should be using a profiler. You should read this article entitled The Perils of MicroBenchmarking:
http://blogs.msdn.com/shawnhar/archive/2009/07/14/the-perils-of-microbenchmarking.aspx

It's probably a compiler optimization that's actually making your code worse. This is extremely rare these days but if you're doing odd, odd stuff, this can happen.
Some debugger / IDEs like Visual Studio will automatically zero out memory for you in Debug mode; this may be a contributing factor.

Are you running the exact same code in the debugger and outside the debugger, or running the debug build in the debugger and the release build outside? If the latter, the code isn't the same. If you're running debug and release and seeing the difference, you could turn off optimization in release and see what that does, or run your code in a profiler in debug and release and see what changes.

The debug version initializes variables to 0 (usually), while a release binary does not initialize variables (unless the code explicitly does). This may affect what the code is doing: the size of a loop, or a whole host of other possibilities.
Set the warning level to the highest level (level 4, default 3).
Set the flag that says treat warnings as errors.
Recompile and re-test.

Before you dive into an optimization session, get some facts:
Does it make a difference? Does the application run twice as slow, measured over a reasonable length of time?
How are the debug and release builds configured?
What is the state of this project? Is it a complete piece of software, or are you profiling a single function?
How are you running the debug and release builds? Are you sure you are testing under the same conditions (e.g. process priority settings)?
Suppose you do optimize the code: what do you have in mind?

Having read your additional data, a distant bell started to ring...
When running a program in the debugger, it will catch both C++ exceptions and structured exceptions (Windows exceptions).
One event that will trigger a structured exception is a divide by zero. It is possible that the debugger quickly catches and dismisses this event (as first-chance exception handling) while the release code goes a bit longer before doing something about it.
So if your code might be generating such or similar exceptions, it is worthwhile to look into it.
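A minimal sketch of the kind of structured exception being described, using MSVC's __try/__except extension (the divisor is a variable so the compiler cannot reject the division at compile time):

#include <windows.h>
#include <cstdio>

int main()
{
    __try
    {
        volatile int zero = 0;
        volatile int x = 1 / zero;   // raises EXCEPTION_INT_DIVIDE_BY_ZERO
        (void)x;
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
        // With a debugger attached, this is first seen as a first-chance exception.
        printf("caught structured exception 0x%08lx\n", GetExceptionCode());
    }
    return 0;
}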

Related

Debug code taking several orders of magnitude longer to run than release code

I have a large bit of code that takes about 5 minutes to run in debug mode of Visual Studio, and about 10 seconds to run in release mode.
This becomes an enormous issue when I have to debug code at the end of the program, where I have to wait far too long just for the program to hit the breakpoint.
I gave serialization a shot, and used boost::serialize to serialize all the variables before the debug code, but it turns out that deserializing all those variables still takes a minute or two.
So what gives? I'm aware that many optimizations and inline stuff is disabled when running code in debug mode, but it strikes me as very peculiar that it takes almost 2 orders of magnitude longer to run the code in debug mode. Are there any hacks or something programmers use to bypass this wait time? I know there's lots of programs out there much more computationally intensive than mine, but I highly doubt that they would wait 5 minutes just for their debug code to hit a breakpoint.
I have a large bit of code that takes about 5 minutes to run in debug mode of Visual Studio, and about 10 seconds to run in release mode.
That's normal.
So what gives? I'm aware that many optimizations and inline stuff is disabled when running code in debug mode,
That isn't all. In addition to that, MSVC inserts MANY sanity checks, especially when STL containers are involved. For example, it will warn you about incompatible iterators, a broken ordering comparator in std::map, and many other similar issues. I think it also detects memory corruption to some extent, buffer overruns, out-of-range access for std::vector, etc. This can be useful, but the overhead is massive. Throw a certain profiler on top of that and your 10 seconds can just as well take 30 minutes to finish. And this will also be normal.
Are there any hacks or something programmers use to bypass this wait time?
Aside from using it instead of #1 excuse...
You could build a debug version of your code on MinGW - it doesn't insert (this kind of) sanity checks.
You could also investigate the STL library sources and see which macros enable all those features. It is quite possible that they can be disabled. It is also quite possible that said macros are documented somewhere on MSDN (see the sketch after this list).
You could try to find an alternative STL implementation for debug mode.
You could also build release mode with debug info and debug that instead.
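For the macro route: in MSVC the container and iterator checks are controlled by _ITERATOR_DEBUG_LEVEL (documented on MSDN; older versions used _HAS_ITERATOR_DEBUGGING and _SECURE_SCL). A minimal sketch; note that every translation unit and every linked library must agree on the value, or you get linker errors:

// Define this before any standard header (or in the project's preprocessor settings).
#define _ITERATOR_DEBUG_LEVEL 0   // 0 = no checks (release default), 2 = full debug checks

#include <vector>

int main()
{
    std::vector<int> v(10);
    return v[5];   // with level 0, no bounds checking even in a debug build
}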
OP here, so I was messing around with release builds running without the debugger attached, and found that with VS2010 ultimate (may also be for express too), when a program crashes it gives you a prompt asking if you want to debug or close the program (however before this it asks you if you want to abort, retry, or ignore; choose ignore). Clicking debug and selecting the current open solution in visual studio will load up the code and pretend the entire crash occurred while the program was being debugged.
This essentially means that if you put intentional glitches in your code where you would want a breakpoint, you can run the code in fast release mode without the debugger attached, and start debugging after the program crashes. For the purpose of forcing a crash I created an empty vector and had it try to access an element of the empty vector.
However this method has a major setback, being that it's one-time use. After the program crashes and you start debugging it, you cannot do anything more than view the watchlist and other variables, which means no other breakpoints can be set since you're technically not running a debug-enabled process.
Sure, it's a pretty huge setback, but that doesn't mean the method wouldn't have its uses.
Depends on what you want to debug. If you're ready to tolerate some strange behavior, you can usually customize what you want to debug. Try turning on optimization, using release mode libraries (keeping debug info enabled).

How do I detect where the program is stuck in an infinite loop?

I am working on a (relatively complex) game. The game freezes in release mode. The freeze happens after 1-2 min. of game-play. The current configuration of the release mode that I have allows me to break (that is go into debug), which is good, but may give me wrong information but that is fine for this particular case (I can turn off the optimization for a single file/function/code).
Problem is, I (we, since we are a team) don't know where it is hanging. It is not as simple as one relatively small infinite loop that is hanging, as other things (Graphics, sound) are being updated, just that the game-play has stalled. The main game loop (an infinite loop) is always running and is very long/complex, so stepping through is going to be a pain (but it is one of the options).
The first thing I tried is Visual Studio's break all but it always breaks in code that is not mine and consequently shows me assembly output. Eventually, with enough persistence, SVN history checking and commenting out code I will be able to figure out where it is hanging, but there has to be a better way... hopefully?
Note: There is a Visual Studio option I am aware of that allows debugging user code only, but that is managed code only.
EDIT: Was able to solve the problem via stack trace and lots of hours of keeping track of various things to see where the game is hanging. I will select Sjoerd's answer as the correct one, however, if someone has a suggestion for a tool/technique that allows to automate such a task, by all means, add your answer!
If you break and you encounter native code that is not yours, check the call stack. The call stack is the list of functions that got called to reach the current point in the code. Go up some levels in the stack until you encounter the method which is currently running.
Hit the pause button in Visual Studio while the program is hung.
This should break the debugger at the current line. You can then step through and see what is happening.
As an alternative to debugging symbols and breaks (which is the tool of choice when possible), add logging: It is not uncommon for games (and other apps) to have a huge logging system they can turn on and off with a compiler flag so they can still do some kind of debugging/tracing in "release builds". If your logging works fine you should see what is and what is not happening and get at least some idea where things go wrong.
You might well never be able to catch the problem via an interrupt if the code that should be executing isn't executing. There are lots of ways this can happen. Just a few:
You have some parameter that indicates the time at which the next update is to be performed. If this somehow gets set to some big number, the code that does the update will happily see that nothing needs to be done. Next! This can give all the appearances of a hung program even though it isn't really hung at all. The state update and the graphics functions are still being called at their prescribed rate.
You may have some counter that represents time and some rounding mechanism for incrementing it. If the counter is a 32-bit signed int and its granularity is 0.1 microseconds, you will hit INT32_MAX after just 3.6 minutes. Now time is frozen, so once again you have a situation where updates may not be performed.
You are using a single precision floating point number to represent time and update time via time += delta_t; This will stop working after a couple of minutes if your delta_t is 10 microseconds. This is yet another mechanism by which time can be frozen.
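A minimal sketch of that third mechanism, with hypothetical numbers: once the float accumulator grows large enough, adding a tiny delta rounds back to the same value every time.

#include <cstdio>

int main()
{
    float t = 0.0f;
    const float dt = 1e-5f;             // 10 microsecond updates
    for (int i = 0; i < 30000000; ++i)  // nominally 300 seconds of simulated time
        t += dt;
    // Once t reaches 256, the spacing between adjacent floats is about 3.05e-5,
    // so adding 1e-5 rounds back to 256 every time: time freezes.
    printf("t = %f (expected 300)\n", t);
    return 0;
}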
Edit
Have you looked at the CPU usage in your various threads? The above problems might cause the physics or game-playing thread to exhibit a drastic drop in CPU usage after a couple of minutes. You might also get this behavior if the game playing thread is perpetually locked, but here you might (with the right tool) get an indication that that thread is always asleep.

How can I find a rare bug that seems to only occur in release builds?

I have a fairly large solution that occasionally crashes. Sadly, these crashes appear to only occur in release build. When I attach the debugger upon crashing, I get the message:
"No symbols are loaded for any call
stack frame. The source code cannot be
displayed"
This makes it quite hard to find the cause of the crashes. I am using the default release build settings of visual studio 2008, in which 'debug information format' is set to 'Program Database (/Zi)'.
Do you have any tips that might help me find the bug? For example, could I change some settings in my projects so that the crashes might still occur but get more meaningful information in the debugger?
Update: The problem was a very rarely occurring logic error that in itself should not cause a crash, but apparently caused a crash elsewhere. Solving the logic error solved the crashing behavior.
To anyone that came here looking for a resolution of a similar problem: best of luck, you're in for a rough ride. What eventually helped me locate the problem was adding a lot of bounds checks in the code (that I could enable/disable with preprocessor directives) and compiling for linux and running with gdb/valgrind.
First ensure you are building symbols (debug info) for release build, and that the debugger can find them (this may require configuring the symbol path—a symbol server would be better).
Second use the Modules view while debugging to confirm you have the symbols loaded.
The simplest way to get the symbols is to put the .pdb files in the same location as the assemblies.
Check out John Robbins' blog for many, many more details of doing this.
If the code crashes after optimisation is applied (as in the default release), it is most likely that your code is in some way flawed and relies on undefined behaviour which changes between the release and debug build.
Try switching off optimisation in the release build to see if the problem goes away (or switch it on in the debug build to see if it occurs). If it does, you should still aim to find and fix the bug, but you will at least know to be looking for undefined behaviour.
Set the compiler warning level to maximum (/W4) and warnings as errors (/WX), and fix all warnings (and not simply by casting everything in sight - think about it!). When optimisation is applied, you may well get warnings that did not occur in the debug build because of the more extensive code analysis that is performed - this is useful static analysis.
You can if you wish switch debugging on in an optimised build, but it is unlikely that you will be able to follow what is going on since the optimiser may re-order code, and remove code and variables.
Sounds to me like the stack frame was blown. Trivial to do with a buffer overflow; just copy a large string into a small char[], for example. That wipes out the return address. The code just keeps running until the return, then bombs when it pops a bad address off the stack. Or worse, if the address happens to be valid.
The debugger cannot display anything meaningful since it cannot walk the stack to show you how the code got to the crash location. The actual crash location doesn't tell you anything.
Tough as nails to debug. You have to get it reproducible, and you need either stepping or tracing to find the last known-good function. The one that produces the crash after stepping out of it is the one with the bug. You can actually see the statement that does the damage; the debugger call stack suddenly goes catatonic. If you can't get a consistent repro, then a thorough code review is all that's left. You can justify the time by calling it a "security review". Good luck with it.
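A minimal sketch of the failure mode described above (deliberately buggy; real code should use bounded copies):

#include <cstring>

// Deliberately broken: copies past the end of an 8-byte buffer,
// overwriting the saved return address on the stack.
static void bomb(const char* s)
{
    char buf[8];
    strcpy(buf, s);   // no bounds check: anything longer than 7 chars smashes the frame
}                     // the crash happens here, at the return, not at the strcpy

int main()
{
    bomb("this string is much longer than eight bytes");
    return 0;
}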
A few reasons why a debug build might not allow a defect to express itself:
Some debug configurations initialize all variables.
Debug memory allocations and deallocations might be more forgiving of pointer abuse.
The debug build might execute at a different speed thus masking a race condition.
Since you are using C++ you might consider using a runtime analysis tool like Valgrind to point out possible uninitialized data and pointer mishandling.
Race conditions may be tracked down by adding log output with time stamps. You first have to narrow down where in your "large solution" the problem occurs by observing what happened just prior to the crash. Be sure to use a deferred logging mechanism -- one that does the string processing later or in another thread so it doesn't itself impact the timing too much.
Did you know you can still debug release builds? Just hit F5 (rather than CTRL+F5) to run in debug.
Is it repeatable i.e. are you doing something specific when it goes bang?
If so, set a breakpoint in your code before the crash and hit F5 to run in debug (make sure you're using your release build though). Then step through until your app crashes. I generally find this faster than adding logging and debug print statements.
If not, just running in debug mode will sometimes catch the error and halt on the offending line.
Failing that, Richard and Amar's answers are good :-)
An uninitialized variable (a pointer perhaps) could also be causing the problem. Perhaps you should run a static analysis program over your code - Cppcheck isn't bad.

Why does my program run way faster when I enable profiling?

I have a program that's running pretty slowly (takes like 20 seconds even on release) so, wanting to fix it, I tried to use Visual Studio's built in profiler. However, when I run the program with profiling enabled, it finishes in less than a second. This makes it very difficult to find a bottleneck. I would post the code but it is long. Are there any obvious or not so obvious reasons why this would be happening?
EDIT:
Ok so I narrowed the problem down to a bunch of free() calls. When I comment them out, the program runs in the same amount of time that it does with profiling enabled. But now I have a memory leak :-/
The reason is because when you run your application within Visual Studio, the debugger is attached to it. When you run it using the profiler, the debugger is not attached.
If you press F5 to run your program, even with the Release build, the debugger is still attached.
If you try running the .exe by itself, or running the program through the IDE with "Debug > Start Without Debugging" (or just press Ctrl+F5) the application should run as fast as it does with the profiler.
That sounds a lot like a Heisenbug.
They really happen, and they can be painful to uncover.
Your best solution in my experience is to change how you are profiling -- possibly several ways -- until the bug disappears.
Use different profilers. Try adding timing code instead of using a profiler.
Turning on the profiler will end up moving your code around (a bit), which is probably masking the problem.
The most common cause of heisenbugs is uninitialized variables; the second most common cause is using memory after it has been free()d. Since removing your free() calls seems to fix it, you might think to look for late references, but I would still look for uninitialized variables first if I were you.
In my case it was due to the Windows Timer Resolution.
If your program uses threading, the system-wide timer resolution may be the reason for longer times when running through Visual Studio.
The default Windows timer resolution is 15.6 ms.
When running through the profiler, the profiler sets this value to 1 ms, causing faster execution. Check out this answer. (A small sketch follows.)
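A small sketch to observe the effect yourself, assuming you link winmm.lib (timeBeginPeriod/timeEndPeriod are the documented WinMM calls for changing the system timer resolution):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod; link with winmm.lib
#include <cstdio>

int main()
{
    // At the default ~15.6 ms resolution, Sleep(1) really sleeps ~15 ms.
    DWORD t0 = GetTickCount();
    for (int i = 0; i < 20; ++i) Sleep(1);
    printf("default resolution: %lu ms for 20 x Sleep(1)\n", GetTickCount() - t0);

    timeBeginPeriod(1);   // request 1 ms timer resolution (system-wide)
    t0 = GetTickCount();
    for (int i = 0; i < 20; ++i) Sleep(1);
    printf("1 ms resolution:    %lu ms for 20 x Sleep(1)\n", GetTickCount() - t0);
    timeEndPeriod(1);     // always pair with timeBeginPeriod
    return 0;
}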
The general way would be divide-and-conquer, i.e. running only parts of the program and seeing when the problem goes away. But it sounds as if you already did that. AFAIK free usually doesn't take much time, but malloc can take a lot of time if memory is fragmented. If you don't call free(), the heap never gets fragmented in the first place. (Intrusive profiling code might prevent memory fragmentation by allocating small data blocks and filling the free gaps - but I admit that's a bit of a weak explanation.)
Maybe you can add manual time measurement calls before/after the calls to malloc and new and print out the times to verify that? Maybe you can also analyze your memory allocation patterns to find out if you have a heap fragmentation problem (probably by looking at the code and doing some symbolic debugging in your head ;-)
Use a non-intrusive sample profiler instead of an intrusive instrumented profiler.
It could be due to a few optimizations not being performed by the compiler when you run it in profiling mode. So I suggest you check the parameters being passed and check the compiler documentation.

Program only crashes as release build -- how to debug?

I've got a "Schroedinger's Cat" type of problem here -- my program (actually the test suite for my program, but a program nonetheless) is crashing, but only when built in release mode, and only when launched from the command line. Through caveman debugging (ie, nasty printf() messages all over the place), I have determined the test method where the code is crashing, though unfortunately the actual crash seems to happen in some destructor, since the last trace messages I see are in other destructors which execute cleanly.
When I attempt to run this program inside of Visual Studio, it doesn't crash. Same goes when launching from WinDbg.exe. The crash only occurs when launching from the command line. This is happening under Windows Vista, btw, and unfortunately I don't have access to an XP machine right now to test on.
It would be really nice if I could get Windows to print out a stack trace, or something other than simply terminating the program as if it had exited cleanly. Does anyone have any advice as to how I could get some more meaningful information here and hopefully fix this bug?
Edit: The problem was indeed caused by an out-of-bounds array, which I describe more in this post. Thanks everybody for your help in finding this problem!
In 100% of the cases I've seen or heard of, where a C or C++ program runs fine in the debugger but fails when run outside, the cause has been writing past the end of a function local array. (The debugger puts more on the stack, so you're less likely to overwrite something important.)
When I have encountered problems like this before it has generally been due to variable initialization. In debug mode, variables and pointers get initialized to zero automatically but in release mode they do not. Therefore, if you have code like this
int* p;
....
if (p != 0) { /* do stuff with *p */ }

In debug mode, p has been zeroed, so the code inside the if is not executed; but in release mode p contains an undefined value, which is unlikely to be 0, so the code is executed, often causing a crash.
I would check your code for uninitialized variables. This can also apply to the contents of arrays.
No answer so far has tried to give a serious overview about the available techniques for debugging release applications:
Release and Debug builds behave differently for many reasons. Here is an excellent overview. Each of these differences might cause a bug in the Release build that doesn't exist in the Debug build.
The presence of a debugger may change the behavior of a program too, both for release and debug builds. See this answer. In short, at least the Visual Studio debugger uses the debug heap automatically when attached to a program. You can turn the debug heap off by using the environment variable _NO_DEBUG_HEAP. You can specify this either in your computer properties, or in the project settings in Visual Studio. That might make the crash reproducible with the debugger attached.
More on debugging heap corruption here.
If the previous solution doesn't work, you need to catch the unhandled exception and attach a post-mortem debugger the instant the crash occurs. You can use e.g. WinDbg for this; details about the available post-mortem debuggers and their installation are at MSDN.
You can improve your exception handling code and if this is a production application, you should:
a. Install a custom termination handler using std::set_terminate
If you want to debug this problem locally, you could run an endless loop inside the termination handler and output some text to the console to notify you that std::terminate has been called (a minimal sketch follows this list). Then attach the debugger and check the call stack. Or you can print the stack trace as described in this answer.
In a production application you might want to send an error report back home, ideally together with a small memory dump that allows you to analyze the problem as described here.
b. Use Microsoft's structured exception handling mechanism that allows you to catch both hardware and software exceptions. See MSDN. You could guard parts of your code using SEH and use the same approach as in a) to debug the problem. SEH gives more information about the exception that occurred that you could use when sending an error report from a production app.
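A minimal sketch of the handler from (a), assuming a console is available for the message:

#include <cstdio>
#include <exception>
#include <windows.h>

static void onTerminate()
{
    std::printf("std::terminate called - attach the debugger now\n");
    for (;;)            // block forever instead of aborting,
        Sleep(1000);    // so the process stays alive for inspection
}

int main()
{
    std::set_terminate(onTerminate);
    throw 42;           // no matching handler anywhere -> std::terminate -> onTerminate
}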
Things to look out for:
Array overruns - the Visual Studio debugger inserts padding which may stop crashes.
Race conditions - do you have multiple threads involved? If so, a race condition may only show up when the application is executed directly.
Linking - is your release build pulling in the correct libraries?
Things to try:
Minidump - really easy to use (just look it up on MSDN) and it will give you a full crash dump for each thread. You just load the output into Visual Studio and it is as if you were debugging at the time of the crash.
You can set WinDbg as your postmortem debugger. This will launch the debugger and attach it to the process when the crash occurs. To install WinDbg for postmortem debugging, use the /I option (note it is capitalized):
windbg /I
More details here.
As to the cause, it's most probably an uninitialized variable, as the other answers suggest.
After many hours of debugging, I finally found the cause of the problem, which was indeed a buffer overflow, caused by a single-byte difference:
char *end = static_cast<char*>(attr->data) + attr->dataSize;
This is a fencepost error (off-by-one error) and was fixed by:
char *end = static_cast<char*>(attr->data) + attr->dataSize - 1;
The weird thing was, I put several calls to _CrtCheckMemory() around various parts of my code, and they always returned 1. I was able to find the source of the problem by placing "return false;" calls in the test case, and then eventually determining through trial-and-error where the fault was.
Thanks everybody for your comments -- I learned a lot about windbg.exe today! :)
Even though you have built your exe as a release one, you can still generate PDB (Program database) files that will allow you to stack trace, and do a limited amount of variable inspection.
In your build settings there is an option to create the PDB files. Turn this on and relink. Then try running from the IDE first to see if you get the crash. If so, then great - you're all set to look at things. If not, then when running from the command line you can do one of two things:
Run EXE, and before the crash do an Attach To Process (Tools menu on Visual Studio).
After the crash, select the option to launch debugger.
When asked to point to PDB files, browse to find them. If the PDB's were put in the same output folder as your EXE or DLL's they will probably be picked up automatically.
The PDB's provide a link to the source with enough symbol information to make it possible to see stack traces, variables etc. You can inspect the values as normal, but do be aware that you can get false readings as the optimisation pass may mean things only appear in registers, or things happen in a different order than you expect.
NB: I'm assuming a Windows/Visual Studio environment here.
Crashes like this almost always occur because an IDE will usually set the contents of uninitialized variables to zeros, null or some other such 'sensible' value, whereas when running natively you'll get whatever random rubbish the system picks up.
Your error is therefore almost certainly that you are using something, like a pointer, before it has been properly initialized, and you're getting away with it in the IDE because it doesn't point anywhere dangerous - or the value is handled by your error checking - but in release mode it does something nasty.
In order to have a crash dump that you can analyze:
Generate PDB files for your code.
Rebase your exe and DLLs so that they are loaded at the same addresses every run.
Enable a post-mortem debugger such as Dr. Watson.
Check the crash failure address using a tool such as Crash Finder.
You should also check out the tools in Debugging tools for windows.
You can monitor the application and see all the first chance exceptions that were prior to your second chance exception.
Hope it helps...
Sometimes this happens because you have wrapped an important operation inside an "assert" macro. As you may know, "assert" evaluates expressions only in debug mode.
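A minimal sketch of the trap (connect() is a hypothetical operation with a side effect):

#include <cassert>
#include <cstdio>

static bool connect()          // hypothetical operation with a side effect
{
    std::puts("connecting...");
    return true;
}

int main()
{
    assert(connect());         // debug: connect() runs; release (NDEBUG): compiled out entirely

    bool ok = connect();       // safer: the side effect happens in every build,
    assert(ok);                // and only the check disappears in release
    (void)ok;                  // avoid an unused-variable warning in release
    return 0;
}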
A great way to debug an error like this is to enable optimizations for your debug build.
Once I had a problem where an app behaved similarly to yours. It turned out to be a nasty buffer overrun in sprintf. Naturally, it worked when run with a debugger attached. What I did was to install an unhandled exception filter (SetUnhandledExceptionFilter) in which I simply blocked infinitely (using WaitForSingleObject on an event that never gets signaled, with a timeout value of INFINITE).
So you could do something along the lines of:

long __stdcall MyFilter(EXCEPTION_POINTERS*)
{
    // Create a manual-reset event that is never signaled, then wait on it
    // forever: the process blocks right here until a debugger is attached.
    HANDLE hEvt = ::CreateEventW(0, 1, 0, 0);
    if (hEvt)
    {
        if (WAIT_FAILED == ::WaitForSingleObject(hEvt, INFINITE))
        {
            // log failure
        }
    }
    return EXCEPTION_CONTINUE_SEARCH;   // not reached in practice
}

// somewhere in your wmain/WinMain:
SetUnhandledExceptionFilter(MyFilter);
I then attached the debugger after the bug had manifested itself (gui program stopped responding).
Then you can either take a dump and work with it later:
.dump /ma path_to_dump_file
Or debug it right away. The simplest way is to track where processor context has been saved by the runtime exception handling machinery:
s-d esp Range 1003f
The command will search the stack address space for CONTEXT record(s), given the length of the search. I usually use something like 'l?10000'. Note: do not use unusually large numbers, as the record you're after is usually near the unhandled exception filter frame.
1003f is the combination of flags (I believe it corresponds to CONTEXT_FULL) used to capture the processor state.
Your search would look similar to this:
0:000> s-d esp l1000 1003f
0012c160 0001003f 00000000 00000000 00000000 ?...............
Once you get results back, use the address in the cxr command:
.cxr 0012c160
This will take you to this new CONTEXT, exactly at the time of crash (you will get exactly the stack trace at the time your app crashed).
Additionally, use:
.exr -1
to find out exactly which exception had occurred.
Hope it helps.
With regard to your problems getting diagnostic information, have you tried using adplus.vbs as an alternative to WinDbg.exe? To attach to a running process, use
adplus.vbs -crash -p <process_id>
Or to start the application in the event that the crash happens quickly:
adplus.vbs -crash -sc your_app.exe
Full info on adplus.vbs can be found at: http://support.microsoft.com/kb/286350
Ntdll.dll with debugger attached
One little-known difference between launching a program from the IDE or WinDbg, as opposed to launching it from the command line / desktop, is that when launching with a debugger attached (i.e. IDE or WinDbg) ntdll.dll uses a different heap implementation which performs some additional validation on memory allocation/freeing.
You may read some relevant information in unexpected user breakpoint in ntdll.dll. One tool which might be able to help you identify the problem is PageHeap.exe.
Crash analysis
You did not write what the "crash" you are experiencing actually is. Once the program crashes and offers to send the error information to Microsoft, you should be able to click on the technical information and check at least the exception code, and with some effort you can even perform post-mortem analysis (see Heisenbug: WinApi program crashes on some computers for instructions).
Vista SP1 actually has a really nice crash dump generator built into the system. Unfortunately, it isn't turned on by default!
See this article:
http://msdn.microsoft.com/en-us/library/bb787181(VS.85).aspx
The benefit of this approach is that no extra software needs to be installed on the affected system. Grip it and rip it, baby!
In my experience, these are most often memory corruption issues.
For example:

char a[8];
memset(&a[0], 0, 16);   // writes 16 bytes into an 8-byte array
/* ... use array a to do something ... */

It is quite possible for this to appear normal in debug mode when you run the code, but in release it might crash.
Rummaging around to find where the memory goes out of bounds is too toilsome. Using a tool like Visual Leak Detector (Windows) or Valgrind (Linux) is a wiser choice.
I've seen a lot of right answers. However, there is none that helped me. In my case, there was a wrong usage of the SSE instructions with the unaligned memory. Take a look at your math library (if you use one), and try to disable SIMD support, recompile and reproduce the crash.
Example:
A project includes mathfu, and uses the classes with STL vector: std::vector< mathfu::vec2 >. Such usage will probably cause a crash at the time of the construction of mathfu::vec2 item since the STL default allocator does not guarantee required 16-byte alignment. In this case to prove the idea, one can define #define MATHFU_COMPILE_WITHOUT_SIMD_SUPPORT 1 before each include of the mathfu, recompile in Release configuration and check again.
The Debug and RelWithDebInfo configurations worked well for my project, but not the Release one. The reason behind this behavior is probably that the debugger processes allocation/deallocation requests and does some memory bookkeeping to check and verify accesses to the memory.
I experienced the situation in Visual Studio 2015 and 2017 environments.
Something similar happened to me once with GCC. It turned out to be a too-aggressive optimization that was enabled only when creating the final release and not during the development process.
Well, to tell the truth it was my fault, not GCC's, as I didn't notice that my code was relying on that particular optimization not being done.
It took me a lot of time to trace it, and I only came to it because I asked on a newsgroup and somebody made me think about it. So let me return the favour, just in case this is happening to you as well.
I've found this article useful for your scenario. ISTR the compiler options were a little out of date. Look around your Visual Studio project options to see how to generate PDB files for your release build, etc.
It's suspicious that it would happen outside the debugger and not inside; running in the debugger does not normally change the application behavior. I would check the environment differences between the console and the IDE. Also, obviously, compile release without optimizations and with debug information, and see if that affects the behavior. Finally, check out the post-mortem debugging tools other people have suggested here, usually you can get some clue from them.
Debugging release builds can be a pain due to optimizations changing the order in which lines of your code appear to be executed. It can really get confusing!
One technique to at least narrow down the problem is to use MessageBox() to display quick statements stating what part of the program your code has got to ("Starting Foo()", "Starting Foo2()"); start putting them at the top of functions in the area of your code that you suspect (what were you doing at the time when it crashed?). When you can tell which function, change the message boxes to blocks of code or even individual lines within that function until you narrow it down to a few lines. Then you can start printing out the value of variables to see what state they are in at the point of crashing.
Try using _CrtCheckMemory() to see what state the allocated memory is in. If everything goes well, _CrtCheckMemory returns TRUE, else FALSE.
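A minimal sketch of sprinkling such checks around suspect code (suspectFunction() is a hypothetical stand-in; note that _CrtCheckMemory only does real work in builds using the debug CRT):

#include <crtdbg.h>
#include <cassert>

static void suspectFunction()
{
    /* the code under suspicion */
}

int main()
{
    assert(_CrtCheckMemory());   // heap intact before the call
    suspectFunction();
    assert(_CrtCheckMemory());   // fires if suspectFunction() corrupted the heap
    return 0;
}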
You might run your software with Global Flags enabled (Look in Debugging Tools for Windows). It will very often help to nail the problem.
Make your program generate a mini dump when the exception occurs, then open it up in a debugger (for example, in WinDbg). The key functions to look at: MiniDumpWriteDump, SetUnhandledExceptionFilter
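A hedged sketch of wiring those two functions together, assuming dbghelp.lib is linked (the dump file name is arbitrary):

#include <windows.h>
#include <dbghelp.h>    // MiniDumpWriteDump; link with dbghelp.lib

static LONG WINAPI writeDump(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, 0,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
    if (file != INVALID_HANDLE_VALUE)
    {
        // Include the exception context so the debugger opens at the crash site.
        MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), info, FALSE };
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, 0, 0);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate after dumping
}

int main()
{
    SetUnhandledExceptionFilter(writeDump);
    // ... the rest of the program; open crash.dmp in WinDbg or Visual Studio
    return 0;
}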
Here's a case I had that somebody might find instructive. It only crashed in release in Qt Creator - not in debug. I was using .ini files (as I prefer apps that can be copied to other drives, vs. ones that lose their settings if the Registry gets corrupted). This applies to any apps that store their settings under the apps' directory tree. If the debug and release builds are under different directories, you can have a setting that's different between them, too. I had preference checked in one that wasn't checked in the other. It turned out to be the source of my crash. Good thing I found it.
I hate to say it, but I only diagnosed the crash in MS Visual Studio Community Edition; after having VS installed, letting my app crash in Qt Creator, and choosing to open it in Visual Studio's debugger. While my Qt app had no symbol info, it turns out that the Qt libraries had some. It led me to the offending line; since I could see what method was being called. (Still, I think Qt is a convenient, powerful, & cross-platform LGPL framework.)
I had this problem too. In my case, the release build had msvcrtd.dll (the debug CRT) in the linker definition. We removed it and the issue was resolved.
Alternatively, adding /NODEFAULTLIB to the linker command line arguments also resolved the issue.
I'll add another possibility for future readers: Check if you're logging to stderr or stdout from an application with no console window (ie you linked with /SUBSYSTEM:WINDOWS). This can crash.
I had a GUI application where I logged to both stderr and a file in both debug and release, so logging was always enabled. I created a console window in debug for easy viewing of the logs, but not in release. However, if the VS debugger is attached to the release build, it'll automatically pipe stderr to the VS output window. So only in release with no debugger did it actually crash when I wrote to stderr.
To make things worse, printf debugging obviously didn't work, which I didn't understand why until I'd tracked down the root cause (by painfully bisecting the codebase by inserting an infinite loop in various spots).
I had this error, and VS crashed even when trying to clean my project. So I deleted the .obj files manually from the Release directory, and after that it built just fine.
I agree with Rolf. Because reproducibility is so important, you shouldn't have a non-debug mode. All your builds should be debuggable. Having two targets to debug more than doubles your debugging load. Just ship the "debug mode" version, unless it is unusable. In which case, make it usable.