I've run into a strange problem. The Release version of my application seems to run fine, but recently when I switched to the Debug version, I got an access violation immediately on start-up. The access violation is occurring when a block of allocated memory is freed. All of this occurs in the constructor for a static variable.
I believe the problem doesn't occur in the Release version simply because I have defined NDEBUG there, which I believe disables assertions in the C runtime.
I've been able to narrow things down a bit. If I add the following code to the constructor before the usual calls, then I get the same error:
int *temp = new int[3];
delete[] temp;
This makes me think that something outside of this block of code is causing the problem, e.g., perhaps there is a problem with the way the C runtime is being linked. However, I'm at a loss to say what that problem might be, and after a day of poking at the problem I'm running out of ideas for where to poke next.
Any help would be greatly appreciated. I am using Visual Studio 2010 to compile the application and running Windows 7.
Thanks so much!
In Debug mode, additional checks are added; therefore it's not unusual for a program to run perfectly well in Release mode but to give an access violation in Debug mode. This doesn't mean that the Release version is OK; it only means that an error that goes uncaught when the Release version is running is caught when running in Debug mode.
Debugging a corrupted-memory problem in C/C++ is very hard because the corruption can be caused by any instruction anywhere in the program that touches memory. For example, if two arrays sit next to each other in allocated memory and the first array is overrun, the overrun will corrupt the header placed in front of the second array (every heap allocation is prefixed by a header that the operators delete and delete[] use when deallocating the memory). However, the access violation only occurs when you try to deallocate the second array, even though the error in the code involves the first array.
Of course, you can have other problems with the second array. For example, you might find that some or all of its values have been corrupted when you try to read from it. That is not always the case, though: on many occasions it will behave perfectly well when read from or written to, and the first array can show exactly the same good behavior. The fact that you can read and write an array without apparent problems does not mean that you are not overstepping its boundary and corrupting the memory just above (or below) it. Sometimes the problem only shows up when you try to deallocate the array, and other times it shows up in another way, for example as corrupted values being displayed.
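As a hypothetical illustration of the scenario described above (the function and variable names are invented, not from the original code), an off-by-one write past the end of the first array corrupts the debug heap's bookkeeping/guard bytes, but the assertion only fires at a later delete[], far from the line that did the damage:

void OverrunNeighbour()
{
    int* first  = new int[4];
    int* second = new int[4];

    for (int i = 0; i <= 4; ++i)   // bug: valid indices are 0..3, this writes first[4]
        first[i] = 0;

    delete[] second;               // the debug CRT may assert here...
    delete[] first;                // ...or here, long after the corrupting write
}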
I was able to produce a minimal example by cutting away essentially all of my application code. I reduced the InitInstance function to the following:
BOOL CTempApp::InitInstance()
{
int *temp = new int[3] ;
delete[] temp ;
return FALSE ;
}
Even with all of my application code stripped away, the problem persisted. After this, I created a new project in Visual Studio 2010, once again replaced InitInstance with my minimal version, and then added all of the Additional Dependencies from my original project in the linker options. With this configuration, the problem was reproduced.
Next, I started removing libraries from the list of dependencies. I managed to whittle the list down to a single third-party library which was causing the following linker warning despite being labeled as the debug version:
LINK : warning LNK4098: defaultlib 'msvcrtd.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
I think what's happening here is that the vendor has linked their debug library against the non-debug runtime. Presumably, when my application calls delete[], the two runtimes disagree about the allocation, and the call ends up trying to free a block of memory my code never allocated.
I have tried adding the following to my Ignore Specific Default Libraries list:
libc.lib, libcmt.lib, msvcrt.lib, libcd.lib, libcmtd.lib
as suggested here. However, the problem persists. I think the solution will be to contact the vendor and have them resolve the conflict.
Thanks again for your suggestions. I hope this answer will help someone else with debugging a similar problem.
Related
I am calling into a statically linked .dll, and I see this error:
I wrote both the .dll and the calling code. This error should not be occurring. I am wondering if anyone else has encountered it before? The .dll only contains about 10 lines of code; it's just a test .dll to see how DLLs work in general. It blows up when I pass a std::string back out of the .dll.
I am using Visual Studio 2012 and C++.
What I will try next
From Debug assertion... _pFirstBlock == pHead:
This problem can occur if one uses the single-threading libraries in a
multithreaded module.
Tomorrow, I'll try recompiling the Boost static libraries in multi-threaded mode (my .dll is set to multi-threaded static mode).
What I will try next
See Using strings in an object exported from a DLL causes runtime error:
You need to do one of two things
Make both the DLL and the client that uses it link to the DLL version of the CRT (i.e. not statically).
OR You need to make sure you don't pass dynamically allocated memory (such as is contained in string objects) across DLL boundaries.
In other words, don't have DLL-exported functions that return string
objects.
Joe
This seems to match what's going on: it blows up at the precise point where I pass a string back across a .dll boundary. The problem only occurs when everything is linked in static mode. Now that's fixable.
See Passing reference to STL vector over dll boundary.
What I will try next
See Unable to pass std::wstring across DLL.
Solution
I have a nice solution, see the answer below.
In this case, the problem is that I was passing a std::string back across a .dll boundary.
Runtime Library config
If the MSVC Runtime library is set to Multi-threaded Debug DLL (/MDd), then this is no problem (it works fine).
If the MSVC Runtime library is set to Multi-threaded Debug (/MTd), then it will throw this error, which can be fixed with the following instructions.
Memory allocated in Memory Manager A and freed in Memory Manager B ...
The problem is that memory is allocated on the .dll side, then that same memory is freed on the application side. This means that memory manager A is allocating memory, and memory manager B is releasing that same memory, which generates errors.
The solution is to make sure that all memory passed back is not allocated in the DLL. In other words, the memory is always allocated on the application side, and freed on the application side.
Of course, the DLL can allocate/free memory internally - but it can't allocate memory that is later freed by the application.
Examples
This will not work:
// Memory is allocated on the .dll side, and freed on the app side, which throws error.
DLL std::string GetString();
This will work:
// Memory is allocated/freed on the application side, and never allocated in the .dll.
DLL int GetString(std::string& text);
However, this is not quite enough.
On the application side, the string has to be pre-allocated:
std::string text("");
text.reserve(1024); // Reserves capacity for at least 1024 characters in "text".
On the .dll side, the text must be copied into the original buffer (rather than overwritten with memory that is allocated on the .dll side):
text.assign("hello");
Sometimes, C++ will insist on allocating memory anyway. Double check that the pre-allocation is still the same as it was:
if (text.capacity() < 1024)
{
cout << "Memory was allocated on the .dll side. This will eventually throw an error.";
}
Another way that works is to use std::shared_ptr<std::string>, so even though memory is allocated in the .dll, it is released by the .dll (rather than the application side).
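A minimal sketch of that shared_ptr approach (the function names are invented for illustration; DLL is the same export macro used in the examples above). It works because shared_ptr captures its deleter at the point of construction inside the .dll, so the .dll's own heap does the freeing; both modules must still agree on the layout of std::string and shared_ptr (same compiler and settings):

#include <memory>
#include <string>
#include <iostream>

// DLL side (hypothetical export):
DLL std::shared_ptr<std::string> GetSharedString()
{
    // The deleter is captured here, inside the .dll, so the .dll's heap
    // releases the string when the last reference goes away.
    return std::make_shared<std::string>("hello");
}

// Application side:
void UseSharedString()
{
    std::shared_ptr<std::string> s = GetSharedString();
    std::cout << *s << '\n';   // freed by the deleter captured in the .dll, not by the app's CRT
}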
Yet another way is to accept a char * and a length which indicates the amount of pre-allocated memory. If the text that we want to pass back is longer than the length of pre-allocated memory, return an error.
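A hedged sketch of that char* + length variant (the function name and error codes are invented): the application owns the buffer, and the .dll only copies into it, so no heap memory ever crosses the boundary.

#include <cstring>

// Returns 0 on success, 1 if the pre-allocated buffer is too small.
DLL int GetText(char* buffer, std::size_t bufferLen)
{
    const char* text = "hello";
    const std::size_t needed = std::strlen(text) + 1;  // include the terminating '\0'
    if (buffer == 0 || needed > bufferLen)
        return 1;                                      // caller's buffer is too small
    std::memcpy(buffer, text, needed);
    return 0;
}

// Application side:
void UseGetText()
{
    char buf[1024];                                    // allocated on the application side
    if (GetText(buf, sizeof(buf)) == 0)
    {
        // use buf; nothing here was allocated by the .dll
    }
}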
This is what assert() looks like when its expression argument evaluates to false. This assert exists in the Debug build of the C runtime library and is designed to check for allocation problems, in your case in the free() function. The Debug build adds extra checks to make sure you are writing your code correctly, and it tells you when it detects a problem: calling free() on an allocation that was already freed (the simple case), calling free() with the wrong pointer value (the trickier case), or calling free() when the heap was corrupted by earlier code (the much harder case).
This is only as far as the checks can take it; they don't actually know why your code got it wrong. There is no way they can put a big red arrow on the code that corrupted the heap, for example. The easy case is covered by the Debug + Windows + Call Stack debugger window: it takes you to the code in your program that called free(), or std::operator delete for a C++ program. The harder case is very, very hard indeed; heap corruption is often a Heisenbug. Getting the assert to be repeatable so you can set a data breakpoint on the reported address is the core strategy. Crossing fingers for the easy case, good luck with it!
After edit: yes, having cross-module problems with a C++ class like std::string is certainly one of the problems it can catch. Not a Heisenbug, good kind of problem to have. Two basic issues with that:
The modules might each have their own copy of the CRT, objects allocated by one copy of the CRT cannot be released by another copy of the CRT. They each have their own heap they allocate from. A problem that got addressed in VS2012, the CRT now allocates from a process-global heap.
The modules might not use the same implementation of std::string. With an object layout that does not match. Easily induced by having the modules compiled with different C++ library versions, particularly an issue with C++11 changes. Or different build settings, the _HAS_ITERATOR_DEBUGGING macro is quite notorious.
The only cure for that problem is to make sure that you build all of the modules in your program with the exact same compiler version using the exact same build settings. Using /MD is mandatory; it ensures that the CRT is shared so there's only one in the program.
The likely cause: binding to wrong version of the Qt DLLs, especially when moving a project from VS2010 to VS2012.
This is due to different versions of the standard library and associated dynamic allocation issues.
I had the same problem after a Windows reinstallation. My Runtime library build was Multi-threaded Debug DLL (/MDd).
My solution was to remove the *.user file of the Visual Studio project, delete the Debug folder, and rebuild the project.
I'm currently facing an issue with VS08. I have the following (simplified) class structure:
class CBase
{
public:
virtual void Func() = 0;
};
class CDerived : public CBase
{
public:
void Func();
};
This code is working fine in Release Mode, but when I try to run a Debug Build it's instantly crashing at new CDerived.
Further analysis brought me to the point where I was able to locate the crash. It's crashing in CBase::CBase (the compiler-generated constructor); more precisely, at 04AE46C6 mov dword ptr [eax],offset CBase::`vftable' (505C2CCh).
Any clues? Release Mode is fine, but I can't properly Debug with it.
Release Mode is fine
Nope, it only appears to be fine. My guess is that in Debug the memory is overwritten somehow. Since there's no way to tell just from the code you posted, here's what you can do.
I assume you create the object somewhere with:
CBase* p = new CDerived;
or similar. In Debug mode, set a memory breakpoint at p's location; you can set it to monitor 4 bytes. Visual C++ (like most compilers) keeps the vfptr as the first thing in the class, so this breakpoint will track whether that location gets overwritten. If the breakpoint is hit before you call the function where it crashes, there's your problem (and the call stack will show you why it was overwritten).
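If it helps to pin down the exact address to watch, here is a hedged fragment (assuming 32-bit MSVC, where the vfptr occupies the first 4 bytes of the object; the printf is only there to surface the address):

#include <cstdio>

void WhereToWatch()
{
    CBase* p = new CDerived;
    // The vfptr sits at the very start of the object, so set a data breakpoint
    // (Debug > New Breakpoint > New Data Breakpoint) on this printed address,
    // size 4 bytes on x86 (8 on x64), and run until it fires.
    std::printf("watch this address: %p\n", static_cast<void*>(p));
}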
There could be a lot of reasons - you could be overrunning some memory and overwriting the object (as Erik suggested) - the Release version might resolve the call directly to avoid the overhead of the dynamic dispatch, and that would explain why it's not crashing.
It could also be that you call delete on the object and the debug version actually zeroes out the memory, whereas the release version doesn't. No way to tell just from that.
Necro-posting a bit here, but there's a point I want to make for future visitors...
As others have said, this was probably a memory corruption or free+reuse issue. You should not assume that it was a compiler bug just because you were able to eliminate the crash by changing compiler settings or rearranging code. If this is a corruption bug, what you probably did was to move the corruption to some memory that does not cause your program to crash--not in your current build, on your current OS & architecture, anyway.
Simply getting to the point of not crashing may have been sufficient for your immediate needs, but meanwhile you did not learn to avoid whatever practice led you to write the bug in the first place. There's a long-standing proverb amongst engineers and probably a fair number of other disciplines:
"What goes away by itself can come back by itself."
It may be one of the most true and important proverbs in any form of engineering. If you did not see the bug die, by your own hand, you should always feel anxious about it. It should bother you, deeply. The bug is probably still there, waiting for the night of your next milestone before it rears its head again.
Luchian Grigore gave good advice on finding the real problem with a memory breakpoint.
I get a segmentation fault when attempting to delete this.
I know what you think about delete this, but it has been left over by my predecessor. I am aware of some precautions I should take, which have been validated and taken care of.
I don't get what kind of conditions might lead to this crash, which happens only once in a while. About 95% of the time the code runs perfectly fine, but sometimes this seems to be corrupted somehow and it crashes.
The destructor of the class doesn't do anything btw.
Should I assume that something is corrupting my heap somewhere else and that the this pointer is messed up somehow?
Edit : As requested, the crashing code:
long CImageBuffer::Release()
{
long nRefCount = InterlockedDecrement(&m_nRefCount);
if(nRefCount == 0)
{
delete this;
}
return nRefCount;
}
The object has been created with new; it is not in any kind of array.
The most obvious answer is: don't delete this.
If you insist on doing that, then use the common ways of finding bugs:
1. use valgrind (or similar tool) to find memory access problems
2. write unit tests
3. use a debugger (prepare for loooong stretches of staring at the screen - it depends on how big your project is)
It seems like you've mismatched new and delete. Note that delete this; can only be used on an object which was allocated using new (and, in the case of an overridden operator new or multiple copies of the C++ runtime, with the particular new that matches the delete found in the current scope).
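Purely as a hypothetical illustration (the stack object, the default constructor, and the assumption that the constructor starts the reference count at 1 are mine, not from the question), the kind of mismatch being described looks like this:

void IllustrateMismatch()
{
    // Fine: the object really came from new, so the delete this inside Release() is legal.
    CImageBuffer* fromHeap = new CImageBuffer();
    fromHeap->Release();   // refcount drops to 0, delete this matches the new above

    // Undefined behavior: delete this on an object that was never allocated with new.
    CImageBuffer onStack;
    onStack.Release();     // delete this on a stack object - crash or heap corruption
}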
Crashes upon deallocation can be a pain: It is not supposed to happen, and when it happens, the code is too complicated to easily find a solution.
Note: the use of InterlockedDecrement makes me assume you are working on Windows.
Log everything
My own solution was to massively log the construction/destruction, as the crash could well never happen while debugging:
Log the construction, including the this pointer value, and other relevant data
Log the destruction, including the this pointer value, and other relevant data
This way, you'll be able to see if the this was deallocated twice, or even allocated at all.
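A minimal sketch of that kind of logging (it assumes the reference count starts at 1 in the constructor, and that CImageBuffer has no other initialization to worry about; the exact format is up to you):

#include <cstdio>

CImageBuffer::CImageBuffer()
    : m_nRefCount(1)
{
    // Log the this pointer at construction so double-deletes and frees of
    // never-constructed pointers stand out when the logs are compared.
    std::fprintf(stderr, "CTOR %p refcount=%ld\n",
                 static_cast<void*>(this), static_cast<long>(m_nRefCount));
}

CImageBuffer::~CImageBuffer()
{
    std::fprintf(stderr, "DTOR %p\n", static_cast<void*>(this));
}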
... everything, including the stack
My problem happened in Managed C++/.NET code, meaning that I had easy access to the stack, which was a blessing. You seem to be working in plain C++, so retrieving the stack could be a chore, but it remains very, very useful.
You should try to find code on the internet to print out the current stack for each log entry. I remember playing with http://www.codeproject.com/KB/threads/StackWalker.aspx for that.
Note that you'll need to either be in a debug build, or have the PDB file alongside the executable, to make sure the stack will be fully printed.
... everything, including multiple crashes
I believe you are on Windows: you could try to catch the SEH exception. This way, if multiple crashes are happening, you'll see them all instead of only the first, and each time you'll be able to mark "OK" or "CRASHED" in your logs. I even went as far as using maps to remember the addresses of allocations/deallocations, organizing the logs to show them together (instead of sequentially).
I'm at home, so I can't provide you with the exact code, but Google is your friend here. The thing to remember is that you can't put a __try/__except handler everywhere (C++ unwinding and C++ exception handlers are not compatible with SEH), so you'll have to write an intermediary function to catch the SEH exception.
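A rough sketch of that intermediary function, assuming Windows and MSVC (the function names and logging are only illustrative): because a function that uses __try cannot also contain C++ objects that need unwinding (compiler error C2712), the SEH guard has to live in its own small function:

#include <windows.h>
#include <cstdio>

static int LogSehException(unsigned int code)
{
    std::fprintf(stderr, "CRASHED: SEH exception 0x%08X\n", code);
    return EXCEPTION_EXECUTE_HANDLER;
}

// Keep this function free of C++ objects with destructors: __try cannot be
// mixed with C++ unwinding in the same function.
static bool GuardedRelease(CImageBuffer* p)
{
    __try
    {
        p->Release();
        return true;                        // mark "OK" in the logs at the call site
    }
    __except (LogSehException(GetExceptionCode()))
    {
        return false;                       // mark "CRASHED" and keep running
    }
}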
Is your crash thread-related?
Last, but not least, the "it happens only 5% of the time" symptom could be caused by different code-path executions, or by the fact that you have multiple threads playing with the same data.
The InterlockedDecrement part bothers me: is your object living in multiple threads? And is m_nRefCount correctly aligned and a volatile LONG?
The correctly-aligned and LONG parts are important here.
If your variable is not a LONG (for example, it could be a size_t, which is not a LONG on a 64-bit Windows), then the function could well work the wrong way.
The same can be said for a variable not aligned on a 32-bit boundary. Are there #pragma pack() instructions in your code? Does your project file change the default alignment (I assume you're working on Visual Studio)?
For the volatile part, InterlockedDecrement seems to generate a read/write memory barrier, so volatile should not be mandatory (see http://msdn.microsoft.com/en-us/library/f20w0x5e.aspx).
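As a concrete illustration of those checks (everything in the class body beyond Release() and m_nRefCount is guessed, not taken from the question):

#include <windows.h>

class CImageBuffer
{
public:
    long Release();             // the method shown in the question
    // ...
private:
    // InterlockedDecrement expects a LONG (32 bits even on 64-bit Windows)
    // that is aligned on a 32-bit boundary; a size_t, or a member pushed off
    // that alignment by #pragma pack, would not qualify.
    volatile LONG m_nRefCount;
};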
I'm changing over from Visual Studio 2008 -> 2010 and I've come across a weird bug in my code when evaluating a find on a std::set of pointers.
I know that this version brings about a change where set::iterator has the same type as set::const_iterator, for compatibility with the standard. But I can't figure out why this section of code, which previously worked, now causes a crash.
void checkStop(Stop* stop)
{
set<Stop*> m_mustFindStops;
if (m_mustFindStops.find(stop) != m_mustFindStops.end()) // this line crashes for some reason??
{
// do some stuff
}
}
PS m_mustFindStops is empty when it crashes.
EDIT: Thanks for the quick replies... I can't get it to reproduce with a simple case either - it's probably not a problem with the set itself. I think that heap corruption may be the culprit - I just wish I knew why changing compilers would suddenly cause corruption for the same code and same input data.
The only thing I can think of is that you have multiple threads, and m_mustFindStops is in fact a member or global variable and not local to this function. There is no way the code above can cause problems if it is correct and taken in isolation.
If you have multiple threads, then read access concurrent with write access will cause random errors - even if the container looks empty, it might not have been when the find call started.
Another possibility is that some other code has corrupted the heap, in which case any of your code that uses heap memory could malfunction. With that in mind, if it's always this logic that breaks, my bet would be on a threading issue.
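If the set really is shared between threads, here is a hedged sketch of the usual fix. It assumes a C++11 toolset for std::mutex (on VS2010 a CRITICAL_SECTION or boost::mutex plays the same role), and it assumes m_mustFindStops is actually a member or global rather than the local shown in the question:

#include <mutex>
#include <set>

std::set<Stop*> m_mustFindStops;   // shared between threads (assumption)
std::mutex      m_stopsMutex;      // guards every read and write of the set

void checkStop(Stop* stop)
{
    std::lock_guard<std::mutex> lock(m_stopsMutex);
    if (m_mustFindStops.find(stop) != m_mustFindStops.end())
    {
        // do some stuff
    }
}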
btw - there is absolutely nothing wrong with std::set in Visual C++ v10 - your code must have a bug.
I have a high memory requirement in my code and this statement is repeated a lot of times:
Node** x;
x = new Node*[11];
It fails at this allocation. I figured out this line by throwing output to the console!
I am building my code on Visual Studio. It works fine in Debug mode (both in VS2005 and VS2008)
However it throws the error in VS2005 Release mode.
A direct exe generated from cl Program.cpp works if cl is from VS2010 but fails when it's from VS2005.
Any clues?
PS: Linux gives me a Bus Error (core dumped) for the same code.
Thanks
UPDATE:
And I guess it could be due to an 'unaligned' thing, as I understand it. I just changed 11 to 12 (or any even number) and it works!!! I don't know why. It doesn't work with odd numbers!
Update 2 : http://www.devx.com/tips/Tip/13265 ?
I think you've done something somewhere else which corrupted the program heap: for example, writing past the end of an allocated chunk of memory, or writing to a chunk of memory after it's been freed.
The easiest way to diagnose the problem is probably to run the software under a tool that's designed to detect this kind of problem, for example valgrind.
I have a high memory requirement in my code
Are you actually running out of memory?
x = new Node*[11];
Are you deleting x like so:
delete [] x; // the correct way
or:
delete x; // incorrect
Or there could simply be something else corrupting the heap, though I would have expected that running in debug mode would make it more obvious, not less so. But with heap corruption there are rarely any guarantees that it'll fail in a nice, easy-to-debug way.
There is nothing wrong with this code.
Node **x;
x = new Node*[11];
You are allocating an array of 11 pointers to Node and storing it in the double pointer x. This is fine.
The fact that your program is crashing here is probably due to some memory error that is occurring elsewhere in your program. Perhaps you're writing past array bounds somewhere. If you load this array using a for loop, double-check your indexing.
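For instance (purely illustrative, not taken from the asker's code), the classic off-by-one looks like this, and the crash usually shows up somewhere else entirely:

void FillPointers()
{
    Node** x = new Node*[11];
    for (int i = 0; i <= 11; ++i)   // bug: valid indices are 0..10, this writes x[11]
        x[i] = NULL;                // tramples heap bookkeeping past the allocation
    delete[] x;                     // the crash often only appears here, or even later
}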
If you have access to a memory profiler, I'd recommend using it. These bugs can be difficult to track down in large programs.
A valid C++98 implementation will throw an exception (std::bad_alloc) if allocation fails, not just crash. I'd agree with previous answers and suggest running your program in valgrind as this reeks of memory corruption. Valgrind should be available in your Linux distribution of choice.