I'm debugging my C++ Win32 program in VS2010 and I always get "Windows has triggered a breakpoint in program.exe".
I've double-checked, triple-checked, and quadruple-checked the code. I can't find any reason it should be happening. But it happens at the same point each time so there must be something.
There is quite a lot of code involved (constructors, destructors, window messages, memory allocation and deallocation, etc...) so it's quite difficult to put something concrete here, but at the same time I understand that without the code you can't really do much to give an explanation.
Basically, at the click of a button, a window is shown which displays an image. If a certain condition is met, I send a WM_DESTROY to that window and delete the variable; this triggers the destructor, which calls Release() on my LPPICTURE, and the freed pointer is then set to NULL.
Then, when the user clicks the button again, it tries to dynamically allocate a new instance (in the exact same way it did previously), and that's where the breakpoint is generated. AFAIK (and I've been trying to debug this for over an hour now), the constructor doesn't even start. It seems to be breaking inside the new() function for dynamic memory allocation.
As far as I can tell, it breaks on return HeapAlloc(_crtheap, 0, size ? size : 1); which is line 54 of malloc.c
What's weird is that when I run the exe outside of VS2010, everything continues fine. The program doesn't crash, and it continues to work as expected!
It's difficult to diagnose without seeing the code, but based on your description, it sounds like heap corruption. My guess is that HeapAlloc detected the corruption and generated an int 3, which essentially triggers a breakpoint in the debugger. My advice is to review all of your object allocations/deallocations and make sure that you aren't stepping on memory that you have not allocated (or that has already been freed).
Also, you mentioned that you are sending a WM_DESTROY message explicitly. Typically, you want to let Windows generate the WM_DESTROY message for you, either by calling DestroyWindow, or by sending WM_CLOSE to the window and letting DefWindowProc call DestroyWindow for you. This may be unrelated to your issue, but just FYI.
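To illustrate that last point, here is a minimal sketch (hImageWnd is a hypothetical HWND for the image window described above): rather than sending WM_DESTROY yourself, ask Windows to tear the window down and let it generate WM_DESTROY for you.

#include <windows.h>

void CloseImageWindow(HWND hImageWnd)   // hImageWnd is a hypothetical handle
{
    // Option 1: destroy directly; Windows sends WM_DESTROY/WM_NCDESTROY for you.
    DestroyWindow(hImageWnd);

    // Option 2 (alternative): request a close instead; DefWindowProc handles
    // WM_CLOSE by calling DestroyWindow.
    // SendMessage(hImageWnd, WM_CLOSE, 0, 0);
}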
In my experience, when that happens you have heap corruption/invalid pointer usage. The breakpoint occurs at the point where the fault is detected. This is almost never the actual failure; the problem occurred earlier. Those types of breakpoints occur only with a debugger present. Many times the corruption is not fatal, or is even masked by some other action.
In any event you should consider Application Verifier (appverif.exe) to see if it can find the problem. Be sure to use the heap checking options.
I think the problem is that when you are debugging inside Visual Studio, you must put the needed files (in this case the image you are talking about) in the working directory used for debugging (typically the Debug folder). The program crashes while being debugged because it cannot find the file; when you run the exe outside of VS2010 from its own directory, the file is found and it doesn't crash.
I'm developing a game and when I do a specific action in the game, it crashes.
So I went debugging and I saw that my application crashed at simple C++ statements like if, return, ... Each time I re-run, it crashes randomly at one of these 3 lines and it never succeeds.
line 1:
if (dynamic) { ... } // dynamic is a bool member of my class
line 2:
return m_Fixture; // a line of the Box2D physical engine. m_Fixture is a pointer.
line 3:
return m_Density; // The body of a simple getter for an integer.
I get no errors from the app nor the OS...
Are there hints, tips or tricks to debug more efficiently and find out what is going on?
That's why I love Java...
Thanks
Random crashes like this are usually caused by stack corruption, since these are branching instructions and thus are sensitive to the condition of the stack. These are somewhat hard to track down, but you should run valgrind and examine the call stack on each crash to try and identify common functions that might be the root cause of the error.
Are there hints, tips or tricks to debug more efficiently and find out what is going on?
Run the game in a debugger; at the point of the crash, check the values of all arguments, either using the Visual Studio watch window or using gdb. Using the "call stack" window, check the parent routines and try to think about what could go wrong.
In suspicious (potentially crash-related) routines, consider dumping all arguments to stderr (if you're using libsdl or are on *nix-like systems), or write a logfile, or send duplicates of all error messages using OutputDebugString (on Windows). This will make them visible in the "Output" window in Visual Studio or the debugger. You can also write "traces" (log("function %s was called", __FUNCTION__)).
If you can't debug immediately, produce core dumps on crash. On Windows it can be done using MiniDumpWriteDump; on Linux it is set somewhere in the configuration variables. Core dumps can be loaded into a debugger. I'm not sure if VS Express can deal with them on Windows, but you can still debug them using WinDbg.
If the crash happens within a class method, check the this pointer. It could be invalid or null.
If the bug is truly evil (an elusive stack corruption in a multithreaded app that leads to a delayed crash), write a custom memory manager that overrides new/delete, provides an alternative to malloc (if your app for some reason uses it, which may be possible), AND locks all unused memory using VirtualProtect (Windows) or an OS-specific alternative. In this case every potentially dangerous operation will crash the app instantly, which will allow you to debug the problem (if you have a just-in-time debugger) and instantly find the dangerous routine. I prefer such a "custom memory manager" to BoundsChecker and the like, since in my experience it has been more useful. As an alternative you could try to use valgrind, which is available on Linux only. Note that if your app allocates memory very frequently, you'll need a large amount of RAM in order to be able to lock every unused memory block (because in order to be locked, a block should be PAGE_SIZE bytes big).
In areas where you need a sanity check, either use ASSERT, or (IMO the better solution) write a routine that will crash the application (by throwing a std::exception with a meaningful message) if some condition isn't met; a minimal sketch follows after these tips.
If you've identified a problematic routine, walk through it using debugger's step into/step over. Watch the arguments.
If you've identified a problematic routine, but can't directly debug it for whatever reason, after every statement within that routine, dump all variables into stderr or logfile (fprintf or iostreams - your choice). Then analyze outputs and think how it could have happened. Make sure to flush logfile after every write, or you might miss the data right before the crash.
In general you should be happy that the app crashes somewhere. A crash means a bug you can quickly find using a debugger and exterminate. Bugs that don't crash the program are much more difficult (example of a truly complex bug: given 100000 values of input, after a few hundred manipulations of the values, among thousands of outputs the app produces 1 absolutely incorrect result, which shouldn't have happened at all).
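A minimal sketch of the sanity-check idea mentioned a few tips above (crash loudly with a meaningful std::exception when an invariant breaks); CheckOrDie and the example invariant are made-up names for illustration:

#include <stdexcept>
#include <string>

void CheckOrDie(bool condition, const std::string& message)
{
    if (!condition)
    {
        // Throwing (and not catching) terminates the program with a clear
        // message instead of letting corrupted state propagate further.
        throw std::runtime_error("Sanity check failed: " + message);
    }
}

// Example use inside game code:
// CheckOrDie(player.health >= 0 && player.health <= 100, "player health out of range");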
That's why I love Java...
Excuse me, but if you can't deal with the language, it is entirely your fault. If you can't handle the tool, either pick another one or improve your skill. It is possible to make a game in Java, by the way.
These are mostly due to stack corruption, but heap corruption can also affect programs in this way.
Stack corruption occurs most of the time because of "off-by-one errors".
Heap corruption occurs because of new/delete not being handled carefully, like a double delete.
Basically what happens is that the overflow/corruption overwrites something important (such as a return address or a pointer), and then much, much later, when that value is used, the program crashes.
I generally like to take a second to step back and think through the code, trying to catch any logic errors.
You might try commenting out different parts of the code and seeing whether it affects how the program behaves.
Besides those two things you could try using a debugger like Visual Studio or Eclipse etc...
Lastly you could try to post your code and the error you are getting on a website with a community that knows programming and could help you work through the error (read: stackoverflow)
Crashes / seg faults usually happen when you access a memory location that your program is not allowed to access, or you attempt to access a memory location in a way that is not allowed (for example, attempting to write to a read-only location).
There are many memory analyzer tools, for example I use Valgrind which is really great in telling what the issue is (not only the line number, but also what's causing the crash).
There are no simple C++ statements. An if is only as simple as the condition you evaluate. A return is only as simple as the expression you return.
You should use a debugger and/or post some of the crashing code. Can't be of much use with "my app crashed" as information.
I had problems like this before. I was trying to refresh the GUI from different threads.
If the if statements involve dereferencing pointers, you're almost certainly corrupting the stack (this explains why an innocent return 0 would crash...)
This can happen, for instance, by going out of bounds in an array (you should be using std::vector!), trying to strcpy a char[]-based string missing the ending '\0' (you should be using std::string!), passing a bad size to memcpy (you should be using copy-constructors!), etc.
Try to figure out a way to reproduce it reliably, then place a watch on the corrupted pointer. Run through the code line-by-line until you find the very line that corrupts the pointer.
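For illustration, a classic off-by-one overrun of the kind described above (an assumed example, not taken from the asker's code): the loop writes one element past the end of the array, trampling whatever the compiler placed next to it on the stack, and the damage only shows up later on an unrelated line.

void ZeroValues()
{
    int values[4];
    for (int i = 0; i <= 4; ++i)   // bug: should be i < 4
    {
        values[i] = 0;             // values[4] tramples a neighbouring stack slot
    }
}
// Safer alternatives mentioned above: std::vector<int> or std::array<int, 4>.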
Look at the disassembly. Almost any C/C++ debugger will be happy to show you the machine code and the registers where the program crashed. The registers include the Instruction Pointer (EIP or RIP on x86/x64) which is where the program was when it stopped. The other registers usually have memory addresses or data. If the memory address is 0 or a bad pointer, there is your problem.
Then you just have to work backward to find out how it got that way. Hardware breakpoints on memory changes are very helpful here.
On a Linux/BSD/Mac, using GDB's scripting features can help a lot here. You can script things so that after the breakpoint is hit 20 times it enables a hardware watch on the address of array element 17. Etc.
You can also write debugging into your program. Use the assert() function. Everywhere!
Use assert to check the arguments to every function. Use assert to check the state of every object before you exit the function. In a game, assert that the player is on the map, that the player has health between 0 and 100, assert everything that you can think of. For complicated objects write verify() or validate() functions into the object itself that checks everything about it and then call those from an assert().
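A small sketch of that advice, with hypothetical Player fields and map bounds; Validate() plays the role of the verify()/validate() function described above:

#include <cassert>

const int kMapWidth  = 100;        // hypothetical map bounds
const int kMapHeight = 100;

struct Player
{
    int x, y;
    int health;

    bool Validate() const          // verify()-style self-check
    {
        return health >= 0 && health <= 100 &&
               x >= 0 && x < kMapWidth &&
               y >= 0 && y < kMapHeight;
    }
};

void MovePlayer(Player& p, int dx, int dy)
{
    assert(p.Validate());          // check state on entry
    p.x += dx;
    p.y += dy;
    assert(p.Validate());          // check state before returning
}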
Another way to write in debugging is to have the program use signal() in Linux or asm int 3 in Windows to break into the debugger from the program. Then you can write temporary code into the program to check if it is on iteration 1117321 of the main loop. That can be useful if the bug always happens at 1117322. The program will execute much faster this way than to use a debugger breakpoint.
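For example, a hedged sketch of that trick on Windows: __debugbreak() is the MSVC intrinsic form of "asm int 3", and the iteration number is the hypothetical one from the text above.

#include <intrin.h>                 // __debugbreak (MSVC)

void MainLoopIteration(long long iteration)
{
    if (iteration == 1117321)
    {
        __debugbreak();             // drops into the attached or just-in-time debugger
    }
    // ... rest of the loop body ...
}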
Some tips:
- Run your application under a debugger, with the symbol files (PDB) alongside.
- How to set Visual Studio as the default post-mortem debugger?
- set default debugger for WinDbg Just-in-time Debugging
- check memory allocations Overriding new and delete, and Overriding malloc and free
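As a rough illustration of that last tip, a minimal, non-thread-safe sketch that overrides global new/delete to count live allocations; real instrumentation would also override operator new[]/delete[] and add its own corruption checks:

#include <cstdlib>
#include <new>

static std::size_t g_liveAllocations = 0;   // crude live-allocation counter

void* operator new(std::size_t size)
{
    void* p = std::malloc(size ? size : 1);
    if (!p) throw std::bad_alloc();
    ++g_liveAllocations;
    return p;
}

void operator delete(void* p) noexcept
{
    if (p)
    {
        --g_liveAllocations;
        std::free(p);
    }
}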
One other trick: turn off code optimization and see if the crash points make more sense. Optimization is allowed to float little bits of your code to surprising places; mapping that back to source code lines can be less than perfect.
Check pointers. At a guess, you're dereferencing a null pointer.
I've found 'random' crashes when there are references to a deleted object. As the memory is not necessarily overwritten, in many cases you don't notice it and the program works correctly, and then crashes after the memory has been updated and is no longer valid.
JUST FOR DEBUGGING PURPOSES, try commenting out some suspicious 'deletes'. Then, if it doesn't crash anymore, there you are.
use the GNU Debugger
Refactoring.
Scan all the code, make it clearer if not clear at first read, try to understand what you wrote and immediately fix what seems incorrect.
You'll certainly discover the problem(s) this way and fix a lot of other problems too.
I've stumbled onto a very interesting issue where a function (has to deal with the Windows clipboard) in my app only works properly when a breakpoint is hit inside the function. This got me wondering, what exactly does the debugger do (VS2008, C++) when it hits a breakpoint?
Without directly answering your question (since I suspect the debugger's internal workings may not really be the problem), I'll offer two possible reasons this might occur that I've seen before:
First, your program does pause when it hits a breakpoint, and often that delay is enough time for something to happen (perhaps in another thread or another process) that has to happen before your function will work. One easy way to verify this is to add a pause for a few seconds beforehand and run the program normally. If that works, you'll have to look for a more reliable way of finding the problem.
Second, Visual Studio has historically (I'm not certain about 2008) over-allocated memory when running in debug mode. So, for example, if you have an array of int[10] allocated, it should, by rights, get 40 bytes of memory, but Visual Studio might give it 44 or more, presumably in case you have an out-of-bounds error. Of course, if you DO have an out-of-bounds error, this over-allocation might make it appear to be working anyway.
Typically, for software breakpoints, the debugger places an interrupt instruction at the location you set the breakpoint at. This transfers control of the program to the debugger's interrupt handler, and from there you're in a world where the debugger can decide what to do (present you with a command prompt, print the stack and continue, what have you.)
On a related note, "This works in the debugger but not when I run without a breakpoint" suggests to me that you have a race condition. So if your app is multithreaded, consider examining your locking discipline.
It might be a timing / thread synchronization issue. Do you do any multimedia or multithreading stuff in your program?
The reason your app only works properly when a breakpoint is hit might be that you have some watches with side effects still in your watch list from previous debugging sessions. When you hit the break point, the watch is executed and your program behaves differently.
http://en.wikipedia.org/wiki/Debugger
A debugger essentially allows you to step through your source code and examine how the code is working. If you set a breakpoint and run in debug mode, your code will pause at that breakpoint and allow you to step into the code. This has some distinct advantages. First, you can see what the status of your variables is in memory. Second, it allows you to make sure your code is doing what you expect it to do without having to do a whole ton of print statements. And, third, it lets you make sure the logic is working the way you expect it to work.
Edit: A debugger is one of the more valuable tools in my development toolbox, and I'd recommend that you learn and understand how to use the tool to improve your development process.
I'd recommend reading the Wikipedia article for more information.
The debugger just halts execution of your program when it hits a breakpoint. If your program is working okay when it hits the breakpoint, but doesn't work without the breakpoint, that would indicate to me that you have a race condition or another threading issue in your code. The breakpoint is stopping the execution of your code, perhaps allowing another process to complete normally?
It stops the program counter for your process (the one you are debugging), shows the current values of your variables, and uses those values at that moment to evaluate expressions.
You must take into account, that if you edit some variable value when you hit a breakpoint, you are altering your process state, so it may behave differently.
Debugging is possible because the compiler inserts debugging information (such as function names, variable names, etc.) into your executable. It's possible not to include this information.
Debuggers sometimes change the way the program behaves in order to work properly.
I'm not sure about Visual Studio, but in Eclipse, for example, Java classes are not loaded the same way when run inside the IDE as when run outside of it.
You may also be having a race condition and the debugger stops one of the threads so when you continue the program flow it's at the right conditions.
More info on the program might help.
On Windows there is another difference caused by the debugger. When your program is launched by the debugger, Windows will use a different memory manager (heap manager to be exact) for your program. Instead of the default heap manager your program will now get the debug heap manager, which differs in the following points:
it initializes allocated memory to a pattern (0xCDCDCDCD comes to mind but I could be wrong)
it fills freed memory with another pattern
it overallocates heap allocations (like a previous answer mentioned)
All in all it changes the memory use patterns of your program, so if you have a memory-trashing bug somewhere, its behavior might change.
Two useful tricks:
Use PageHeap to catch memory accesses beyond the end of allocated blocks
Build using the /RTCsu switch (older Visual C++ compilers: /GZ). This will initialize the memory for all your local variables to a nonzero bit pattern and will also raise a runtime error when an uninitialized local variable is accessed.
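As an illustration of that second trick, this is the kind of bug /RTCsu flags (a hypothetical function, not from the question):

int ComputeTotal(bool addBonus)
{
    int total;              // never initialized on the "false" path
    if (addBonus)
    {
        total = 10;
    }
    return total + 5;       // /RTCsu raises a run-time check failure here when
                            // addBonus is false; without it you just get garbage
}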
So I'm trying to debug this strange problem where a process ends without calling some destructors...
In the VS (2005) debugger, I hit 'Break all' and look around in the call stacks of the threads of the mysteriously disappearing process, when I see this:
Call stack screenshot ("smells like SO"): http://img6.imageshack.us/img6/7628/95434880.jpg
This definitely looks like a SO in the making, which would explain why the process runs to its happy place without packing its suitcase first.
The problem is, the VS debugger's call stack only shows what you can see in the image.
So my question is: how can I find where the infinite recursion call starts?
I read somewhere that in Linux you can attach a callback to the SIGSEGV handler and get more info on what's going on.
Is there anything similar on Windows?
To control what Windows does in case of an access violation (SIGSEGV-equivalent), call SetErrorMode (pass it parameter 0 to force a popup in case of errors, allowing you to attach to it with a debugger.)
However, based on the stack trace you have already obtained, attaching with a debugger on fault may yield no additional information. Either your stack has been corrupted, or the depth of recursion has exceeded the maximum number of frames displayable by VS. In the latter case, you may want to decrease the default stack size of the process (use the /F switch or equivalent option in the Project properties) in order to make the problem manifest itself sooner, and make sure that VS will display all frames. You may, alternatively, want to stick a breakpoint in std::basic_filebuf<>::flush() and walk through it until the destruction phase (or disable it until just prior to the destruction phase.)
Well, you know what thread the problem is on - it might be a simple matter of tracing through it from inception to see where it goes off into the weeds.
Another option is to use one of the debuggers in the Debugging Tools for Windows package - they may be able to show more than the VS debugger (maybe), even if they are generally more complex and difficult to use (actually maybe because of that).
That does look at first glance like an infinite recursion, you could try putting a breakpoint at the line before the one that terminates the process. Does it get there ok? If it does, you've got two fairly easy ways to go.
Either you just step forward and see which destructors get called and where it gets caught up, or you could put a printf/OutputDebugString in every relevant object's destructor (only the global ones should need this). If the message is the first thing the destructor does, then the last message you see is from the destructor which hangs things up.
On the other hand, if it doesn't get to that breakpoint I originally mentioned, then you can do something similar, but it will be more annoying since the program is still "doing stuff".
I wouldn't rule out there being such a handler in Windows, but I've never heard of it.
I think the traceback that you're showing may be bogus. If you broke into the process after some kind of corruption had already occurred, then the traceback isn't necessarily valid. However, if you're lucky the bottom of the stack trace still has some clues about what's going on.
Try putting Sleep() calls into selected functions in your source that might be involved in the recursion. That should give you a better chance of breaking into the process before the stack has completely overflowed.
I agree with Dan Breslau. Your stack is bogus, maybe simply because you don't have the right symbols, though.
If a program simply disappears without the WER handling kicking in, it's usually an out-of-memory condition. Have you investigated that possibility?
I've got a "Schroedinger's Cat" type of problem here -- my program (actually the test suite for my program, but a program nonetheless) is crashing, but only when built in release mode, and only when launched from the command line. Through caveman debugging (ie, nasty printf() messages all over the place), I have determined the test method where the code is crashing, though unfortunately the actual crash seems to happen in some destructor, since the last trace messages I see are in other destructors which execute cleanly.
When I attempt to run this program inside of Visual Studio, it doesn't crash. Same goes when launching from WinDbg.exe. The crash only occurs when launching from the command line. This is happening under Windows Vista, btw, and unfortunately I don't have access to an XP machine right now to test on.
It would be really nice if I could get Windows to print out a stack trace, or something other than simply terminating the program as if it had exited cleanly. Does anyone have any advice as to how I could get some more meaningful information here and hopefully fix this bug?
Edit: The problem was indeed caused by an out-of-bounds array, which I describe more in this post. Thanks everybody for your help in finding this problem!
In 100% of the cases I've seen or heard of, where a C or C++ program runs fine in the debugger but fails when run outside, the cause has been writing past the end of a function local array. (The debugger puts more on the stack, so you're less likely to overwrite something important.)
When I have encountered problems like this before it has generally been due to variable initialization. In debug mode, variables and pointers get initialized to zero automatically but in release mode they do not. Therefore, if you have code like this
int* p;
// ...
if (p != 0)
{
    // do stuff with *p
}
In debug mode the code in the if is not executed but in release mode p contains an undefined value, which is unlikely to be 0, so the code is executed often causing a crash.
I would check your code for uninitialized variables. This can also apply to the contents of arrays.
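A minimal guard against that failure mode is to give every pointer a defined value in all build configurations, for example:

int* p = 0;     // or NULL / nullptr (C++11); never leave it indeterminate
// ...
if (p != 0)
{
    // *p is only touched once p actually points somewhere
}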
No answer so far has tried to give a serious overview about the available techniques for debugging release applications:
Release and Debug builds behave differently for many reasons. Here is an excellent overview. Each of these differences might cause a bug in the Release build that doesn't exist in the Debug build.
The presence of a debugger may change the behavior of a program too, both for release and debug builds. See this answer. In short, at least the Visual Studio debugger uses the Debug Heap automatically when attached to a program. You can turn the debug heap off by using the environment variable _NO_DEBUG_HEAP. You can specify this either in your computer properties or in the Project Settings in Visual Studio. That might make the crash reproducible with the debugger attached.
More on debugging heap corruption here.
If the previous solution doesn't work, you need to catch the unhandled exception and attach a post-mortem debugger the instant the crash occurs. You can use e.g. WinDbg for this; details about the available post-mortem debuggers and their installation are at MSDN.
You can improve your exception handling code and if this is a production application, you should:
a. Install a custom termination handler using std::set_terminate
If you want to debug this problem locally, you could run an endless loop inside the termination handler and output some text to the console to notify you that std::terminate has been called. Then attach the debugger and check the call stack. Or you can print the stack trace as described in this answer.
In a production application you might want to send an error report back home, ideally together with a small memory dump that allows you to analyze the problem as described here.
b. Use Microsoft's structured exception handling mechanism that allows you to catch both hardware and software exceptions. See MSDN. You could guard parts of your code using SEH and use the same approach as in a) to debug the problem. SEH gives more information about the exception that occurred that you could use when sending an error report from a production app.
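A minimal sketch of point (a): the terminate handler announces itself and then parks the process so a debugger can be attached and the call stack inspected. This is illustration only; a production handler would write a minidump or an error report instead of looping forever.

#include <windows.h>
#include <cstdio>
#include <exception>

void OnTerminate()
{
    std::fputs("std::terminate called, attach a debugger now\n", stderr);
    for (;;)
    {
        ::Sleep(1000);      // keep the process alive without burning CPU
    }
}

int main()
{
    std::set_terminate(OnTerminate);
    // ... rest of the application ...
}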
Things to look out for:
Array overruns - the Visual Studio debug runtime inserts padding which may stop crashes from showing up.
Race conditions - if you have multiple threads involved, a race condition may only show up when the application is executed directly.
Linking - is your release build pulling in the correct libraries?
Things to try:
Minidump - really easy to use (just look it up on MSDN); it will give you a full crash dump for each thread. You just load the output into Visual Studio and it is as if you were debugging at the time of the crash.
You can set WinDbg as your postmortem debugger. This will launch the debugger and attach it to the process when the crash occurs. To install WinDbg for postmortem debugging, use the /I option (note it is capitalized):
windbg /I
More details here.
As to the cause, it's most probably an uninitialized variable, as the other answers suggest.
After many hours of debugging, I finally found the cause of the problem, which was indeed a buffer overflow, caused by a single-byte difference:
char *end = static_cast<char*>(attr->data) + attr->dataSize;
This is a fencepost error (off-by-one error) and was fixed by:
char *end = static_cast<char*>(attr->data) + attr->dataSize - 1;
The weird thing was, I put several calls to _CrtCheckMemory() around various parts of my code, and they always returned 1. I was able to find the source of the problem by placing "return false;" calls in the test case, and then eventually determining through trial-and-error where the fault was.
Thanks everybody for your comments -- I learned a lot about windbg.exe today! :)
Even though you have built your exe as a release one, you can still generate PDB (Program database) files that will allow you to stack trace, and do a limited amount of variable inspection.
In your build settings there is an option to create the PDB files. Turn this on and relink. Then try running from the IDE first to see if you get the crash. If so, then great - you're all set to look at things. If not, then when running from the command line you can do one of two things:
Run EXE, and before the crash do an Attach To Process (Tools menu on Visual Studio).
After the crash, select the option to launch debugger.
When asked to point to PDB files, browse to find them. If the PDB's were put in the same output folder as your EXE or DLL's they will probably be picked up automatically.
The PDB's provide a link to the source with enough symbol information to make it possible to see stack traces, variables etc. You can inspect the values as normal, but do be aware that you can get false readings as the optimisation pass may mean things only appear in registers, or things happen in a different order than you expect.
NB: I'm assuming a Windows/Visual Studio environment here.
Crashes like this are almost always caused because an IDE will usually set the contents of uninitialized variables to zero, null or some other such 'sensible' value, whereas when running natively you'll get whatever random rubbish the system picks up.
Your error is therefore almost certainly that you are using something like a pointer before it has been properly initialized, and you're getting away with it in the IDE because it doesn't point anywhere dangerous - or the value is handled by your error checking - but in release mode it does something nasty.
In order to have a crash dump that you can analyze:
Generate pdb files for your code.
Rebase so that your exe and DLLs are loaded at the same addresses every run.
Enable a post-mortem debugger such as Dr. Watson.
Check the crash failure's address using a tool such as Crash Finder.
You should also check out the tools in Debugging tools for windows.
You can monitor the application and see all the first chance exceptions that were prior to your second chance exception.
Hope it helps...
Sometimes this happens because you have wrapped an important operation inside the "assert" macro. As you may know, "assert" evaluates its expression only in debug mode.
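For example (OpenDataFile is a hypothetical function with a side effect): with NDEBUG defined, which is typical for Release builds, assert() expands to nothing, so the wrapped call silently disappears.

#include <cassert>

bool OpenDataFile(const char* path);   // hypothetical function with a side effect

void LoadLevel()
{
    // Buggy: in Release the whole call vanishes along with the assert.
    // assert(OpenDataFile("level1.dat"));

    // Better: do the work unconditionally, assert only on the result.
    bool ok = OpenDataFile("level1.dat");
    assert(ok);
    (void)ok;                          // silence the unused-variable warning in Release
}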
A great way to debug an error like this is to enable optimizations for your debug build.
Once I had a problem where an app behaved similarly to yours. It turned out to be a nasty buffer overrun in sprintf. Naturally, it worked when run with a debugger attached. What I did was to install an unhandled exception filter (SetUnhandledExceptionFilter) in which I simply blocked infinitely (using WaitForSingleObject on an event handle that is never signalled, with a timeout value of INFINITE).
So you could do something along the lines of:
long __stdcall MyFilter(EXCEPTION_POINTERS *)
{
    // Create an event that is never signalled, then wait on it forever so the
    // process stays alive and a debugger can be attached.
    HANDLE hEvt = ::CreateEventW(0, 1, 0, 0);
    if (hEvt)
    {
        if (WAIT_FAILED == ::WaitForSingleObject(hEvt, INFINITE))
        {
            // log failure
        }
    }
    return EXCEPTION_CONTINUE_SEARCH;   // never reached unless the wait fails
}
// somewhere in your wmain/WinMain:
SetUnhandledExceptionFilter(MyFilter);
I then attached the debugger after the bug had manifested itself (gui program stopped responding).
Then you can either take a dump and work with it later:
.dump /ma path_to_dump_file
Or debug it right away. The simplest way is to track where processor context has been saved by the runtime exception handling machinery:
s-d esp Range 1003f
The command will search the stack address space for CONTEXT record(s), given the length of the search. I usually use something like 'l?10000'. Note: do not use unusually large numbers, as the record you're after is usually near the unhandled exception filter frame.
1003f is the combination of flags (I believe it corresponds to CONTEXT_FULL) used to capture the processor state.
Your search would look similar to this:
0:000> s-d esp l1000 1003f
0012c160 0001003f 00000000 00000000 00000000 ?...............
Once you get results back, use the address in the .cxr command:
.cxr 0012c160
This will take you to this new CONTEXT, exactly at the time of crash (you will get exactly the stack trace at the time your app crashed).
Additionally, use:
.exr -1
to find out exactly which exception had occurred.
Hope it helps.
With regard to your problems getting diagnostic information, have you tried using adplus.vbs as an alternative to WinDbg.exe? To attach to a running process, use
adplus.vbs -crash -p <process_id>
Or to start the application in the event that the crash happens quickly:
adplus.vbs -crash -sc your_app.exe
Full info on adplus.vbs can be found at: http://support.microsoft.com/kb/286350
Ntdll.dll with debugger attached
One little-known difference between launching a program from the IDE or WinDbg, as opposed to launching it from the command line / desktop, is that when launching with a debugger attached (i.e. IDE or WinDbg) ntdll.dll uses a different heap implementation which performs some additional validation on memory allocation/freeing.
You may read some relevant information in unexpected user breakpoint in ntdll.dll. One tool which might be able to help you identify the problem is PageHeap.exe.
Crash analysis
You did not write what the "crash" you are experiencing actually is. Once the program crashes and offers to send the error information to Microsoft, you should be able to click on the technical information and check at least the exception code, and with some effort you can even perform post-mortem analysis (see Heisenbug: WinApi program crashes on some computers for instructions).
Vista SP1 actually has a really nice crash dump generator built into the system. Unfortunately, it isn't turned on by default!
See this article:
http://msdn.microsoft.com/en-us/library/bb787181(VS.85).aspx
The benefit of this approach is that no extra software needs to be installed on the affected system. Grip it and rip it, baby!
In my experience, these are mostly memory corruption issues.
For example:
char a[8];
memset(&a[0], 0, 16);
/* use array a to do something */
It may well run normally in debug mode, but in release it would (or might) crash. For me, rummaging around to find where the memory access goes out of bounds is too toilsome. Using a tool like Visual Leak Detector (Windows) or valgrind (Linux) is the wiser choice.
I've seen a lot of right answers here. However, none of them helped me. In my case, the problem was incorrect usage of SSE instructions with unaligned memory. Take a look at your math library (if you use one), and try to disable SIMD support, recompile and reproduce the crash.
Example:
A project includes mathfu and uses its classes in an STL vector: std::vector<mathfu::vec2>. Such usage will probably cause a crash at the time a mathfu::vec2 item is constructed, since the default STL allocator does not guarantee the required 16-byte alignment. In this case, to prove the idea, one can add #define MATHFU_COMPILE_WITHOUT_SIMD_SUPPORT 1 before each include of mathfu, recompile in the Release configuration and check again.
The Debug and RelWithDebInfo configurations worked well for my project, but not the Release one. The reason behind this behavior is probably that the debugger processes allocation/deallocation requests and does some memory bookkeeping to check and verify accesses to memory.
I experienced the situation in Visual Studio 2015 and 2017 environments.
Something similar happened to me once with GCC. It turned out to be a too-aggressive optimization that was enabled only when creating the final release, and not during the development process.
Well, to tell the truth it was my fault, not gcc's, as I didn't notice that my code was relying on that particular optimization not being performed.
It took me a lot of time to trace it and I only came to it because I asked on a newsgroup and somebody made me think about it. So, let me return the favour just in case this is happening to you as well.
I've found this article useful for your scenario. ISTR the compiler options were a little out of date. Look around your Visual Studio project options to see how to generate pdb files for your release build, etc.
It's suspicious that it would happen outside the debugger and not inside; running in the debugger does not normally change the application behavior. I would check the environment differences between the console and the IDE. Also, obviously, compile release without optimizations and with debug information, and see if that affects the behavior. Finally, check out the post-mortem debugging tools other people have suggested here, usually you can get some clue from them.
Debugging release builds can be a pain due to optimizations changing the order in which lines of your code appear to be executed. It can really get confusing!
One technique to at least narrow down the problem is to use MessageBox() to display quick statements stating what part of the program your code has got to ("Starting Foo()", "Starting Foo2()"); start putting them at the top of functions in the area of your code that you suspect (what were you doing at the time when it crashed?). When you can tell which function, change the message boxes to blocks of code or even individual lines within that function until you narrow it down to a few lines. Then you can start printing out the value of variables to see what state they are in at the point of crashing.
Try using _CrtCheckMemory() to see what state the allocated memory is in.
If everything goes well, _CrtCheckMemory returns TRUE, else FALSE.
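A minimal usage sketch (debug CRT only; _CrtCheckMemory is effectively a no-op unless _DEBUG is defined and the debug heap is in use):

#include <crtdbg.h>

void CheckHeapHere()
{
    if (!_CrtCheckMemory())
    {
        // The debug heap found a corrupted block; break so the call stack
        // shows which operation preceded the damage.
        _CrtDbgBreak();
    }
}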
You might run your software with Global Flags enabled (Look in Debugging Tools for Windows). It will very often help to nail the problem.
Make your program generate a mini dump when the exception occurs, then open it up in a debugger (for example, in WinDbg). The key functions to look at: MiniDumpWriteDump, SetUnhandledExceptionFilter
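A hedged sketch of that approach: an unhandled-exception filter that writes a .dmp file which can later be opened in WinDbg or Visual Studio. Link against Dbghelp.lib; the file name and dump type here are illustrative choices.

#include <windows.h>
#include <dbghelp.h>

LONG WINAPI WriteDumpFilter(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei = {0};
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;

        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;   // let the process terminate afterwards
}

// Early in main()/WinMain():
// SetUnhandledExceptionFilter(WriteDumpFilter);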
Here's a case I had that somebody might find instructive. It only crashed in release in Qt Creator, not in debug. I was using .ini files (as I prefer apps that can be copied to other drives, versus ones that lose their settings if the Registry gets corrupted). This applies to any apps that store their settings under the app's directory tree. If the debug and release builds are under different directories, you can have a setting that differs between them, too. I had a preference checked in one that wasn't checked in the other. It turned out to be the source of my crash. Good thing I found it.
I hate to say it, but I only diagnosed the crash in MS Visual Studio Community Edition; after having VS installed, letting my app crash in Qt Creator, and choosing to open it in Visual Studio's debugger. While my Qt app had no symbol info, it turns out that the Qt libraries had some. It led me to the offending line; since I could see what method was being called. (Still, I think Qt is a convenient, powerful, & cross-platform LGPL framework.)
I had this problem too. In my case, the Release configuration had msvcrtd.dll (the debug CRT) in the linker settings. We removed it and the issue was resolved.
Alternatively, adding /NODEFAULTLIB to the linker command line arguments also resolved the issue.
I'll add another possibility for future readers: Check if you're logging to stderr or stdout from an application with no console window (ie you linked with /SUBSYSTEM:WINDOWS). This can crash.
I had a GUI application where I logged to both stderr and a file in both debug and release, so logging was always enabled. I created a console window in debug for easy viewing of the logs, but not in release. However, if the VS debugger is attached to the release build, it'll automatically pipe stderr to the VS output window. So only in release with no debugger did it actually crash when I wrote to stderr.
To make things worse, printf debugging obviously didn't work, which I didn't understand why until I'd tracked down the root cause (by painfully bisecting the codebase by inserting an infinite loop in various spots).
I had this error, and VS crashed even when trying to clean my project. So I deleted the obj files manually from the Release directory, and after that it built just fine.
I agree with Rolf. Because reproducibility is so important, you shouldn't have a non-debug mode. All your builds should be debuggable. Having two targets to debug more than doubles your debugging load. Just ship the "debug mode" version, unless it is unusable. In which case, make it usable.