Memory leak detection in C/C++ compiler

Can heap memory leak detection be built into a C/C++ compiler? For example, in its simplest form, during semantic analysis it would simply count allocated memory segments (with new/malloc or whatever) and delete/free calls for each one, then give a compile-time warning about any mismatch.

See the C++ Core Guidelines, which come with a checker tool that parses the code to find deviations from the guidelines and their support library (the GSL). The guidelines include statically enforceable coding rules that preclude the possibility of memory leaks.
https://isocpp.org/blog/2015/09/bjarne-stroustrup-announces-cpp-core-guidelines
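To give a flavour of the statically enforceable style, here is a minimal sketch (assuming the GSL headers are available; the functions are illustrative, not from the guidelines themselves):
#include <gsl/gsl> // Guidelines Support Library (assumed installed)
#include <memory>

// gsl::owner<T*> marks a raw pointer that owns its memory; a guidelines
// checker can then flag any path where the owner is dropped without delete.
gsl::owner<int*> makeRaw() { return new int{42}; }

// The preferred rule: express ownership in the type and leaks become
// impossible by construction.
std::unique_ptr<int> makeSafe() { return std::make_unique<int>(42); }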

Well, you can probably try to make it, but you should take the following things into account:
You would have to build your application in whole-program mode (otherwise you can catch only very trivial things).
You would need a very complex data-flow analysis for pointers - just imagine that you allocate memory for some object, then assign this pointer to another one, and free only one of them somewhere in the program. Or you put the pointer into some container.
There can be hundreds of thousands of pointers in a big application, so the analysis needs a lot of memory and time - much more than a user wants to spend.
If you make the analysis cheap and quick, it will be inaccurate, so you will have to print either a lot of false-positive warnings, or only exact warnings for the most trivial cases.
So as academic research this would probably be very interesting, but in real life I doubt such an analysis can be made with satisfying quality.
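To make the data-flow point concrete, here is a small illustrative snippet (the names are made up) where a cheap per-function analysis cannot decide whether the allocation leaks:
#include <vector>

void example(std::vector<char*>& container, bool flag)
{
    char* p = new char[64]; // allocation site
    char* q = p;            // alias: two names for one block
    if (flag)
        container.push_back(q); // ownership escapes into a container
    else
        delete[] p;             // freed on only one path
    // Whether this leaks depends on 'flag' and on who owns 'container'.
}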

Well, there are the clang (LLVM) sanitizers. They're not compile-time, but add runtime instrumentation to the emitted program.
In general, it's not possible to do a good job entirely statically, for code whose behaviour depends on inputs. Standalone static analysis tools have some heuristics for tracing various code paths, which is already sort-of dynamic-ish.
Just instrumenting the code and running it (or emulating like valgrind) is conceptually simpler, although of course it does slow down execution.
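For example, a minimal sketch of the LeakSanitizer workflow (LeakSanitizer ships with AddressSanitizer in clang and gcc; the file name and program are illustrative):
// leak.cpp - deliberately leaks; the sanitizer reports it when the process exits.
// Build and run:
//   clang++ -g -fsanitize=address leak.cpp -o leak && ./leak
int main()
{
    int* p = new int[100]; // never deleted: reported as a leak at exit
    (void)p;
    return 0;
}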

Our CheckPointer tool instruments MS or GCC C programs to find a wide variety of pointer management errors that occur at runtime.
It detects dangling pointers, pointers that step outside the language-defined memory region for which they originated, including pointers to areas inside the stack (these are issues that Valgrind cannot find because it believes the entire stack is a valid place for a pointer to reference). It also handles thread-local storage.
It reports an error at the earliest time that such an error could be reported, which makes finding the cause considerably simpler.
At the end of a run, it produces a list of allocated but not freed blocks of storage to help track down memory leaks.

Related

Testing C++ code and IsBadWritePtr

I am currently writing some basic tests for some functions of my C++ code (it is a game engine that I am writing mostly for educational purposes). One of the features I want to test is the memory allocation code. The tests currently consist of a function that runs every startup if the code is in debug mode. This forces me to always test the code when I am debugging.
To test my memory allocation code my instinct is to do something like this:
int* test = MemoryManager::AllocateMemory<int>();
assert(!IsBadWritePtr(test, sizeof(int)), "Memory allocation test failed: allocate");
MemoryManager::FreeMemory(test);
assert(IsBadWritePtr(test, sizeof(int)), "Memory free test failed: free");
This code is working fine, but all the resources I can find say not to use the IsBadWritePtr function (this is a WinAPI function, for those unfamiliar). Is the use of this function OK in this case? The three main warnings against using it that I found were:
This might cause issues with guard pages
This isn't an issue as the memory allocation code is right there and I know I am not allocating a guard page.
It's better to fail earlier
This isn't a consideration as I am literally using it to fail as early as possible.
It isn't thread safe
The test code is executed right at the beginning of execution, long before any other threads exist. It also acts on memory to which no other pointers are created, and therefore could not exist in other threads.
So basically I want to know if the use of this function is a good idea in this case, and if there is anything I am missing about the function. I am also aware that something that points to the wrong place will still pass this test, but it at least detects most memory allocation errors, right? (What are the chances I get a pointer to valid memory if the allocation fails?)
I was going to write this as a comment but it's too long.
I'll bring up the elephant in the room: why don't you just test for failure in the traditional way (by returning null or throwing an exception)?
Never mind the fact that IsBadWritePtr is so frowned upon (even its documentation says that it's obsolete and you shouldn't use it) - your use case doesn't even seem appropriate. From the MSDN documentation:
This function is typically used when working with pointers returned from third-party libraries, where you cannot determine the memory management behavior in the third-party DLL.
But you are not using it to test anything passed/returned from a DLL, you just seem to be using it to test for allocation success, which is not only unnecessary (because you already know that from the return value of HeapAlloc, GlobalAlloc, etc.), but it's not what IsBadWritePtr is intended for.
Also, testing for allocation success is not something you should do only in debug mode, or with asserts, as it's obviously out of your control and you can't "fix" it by debugging.
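For concreteness, a sketch of testing for failure the traditional way, written against the asker's MemoryManager interface (whether it throws or returns null is an assumption - use whichever convention it actually follows):
#include <cstdio>
#include <new>

int* allocateChecked()
{
    try {
        // Assumption: AllocateMemory throws std::bad_alloc on failure.
        return MemoryManager::AllocateMemory<int>();
    } catch (const std::bad_alloc&) {
        std::fprintf(stderr, "Memory allocation test failed: allocate\n");
        return nullptr;
    }
    // If it returns nullptr on failure instead, just test the return value.
}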
Building on user1610015's answer, there is one reason why IsBadReadPtr should NOT work in your scenario.
Basically, IsBadReadPtr works at whole-page granularity. This means that for the above code to be correct, each and every allocation you make would have to consume a whole page (min 4KB).
Modern allocators use a variety of tricks to pack lots of allocations into pages (low-fragmentation bucket heaps, linked lists of allocations, etc.). If you don't pack small allocations like this, then things like STL maps and other libraries which use lots of small allocations will absolutely kill your game (both in memory use and because cache coherency will be killed by so much unused padding).
As a side note, your last comment about thread safety is dangerous. Lots of apps and libraries you link to can spawn threads from global object constructors (which run before main is called) and other tricks for inserting code into your process. So I would definitely check that this holds for your code right now - and, more importantly, check it again later as you add third-party libraries.

How to make a simple tool to detect double free or memory overflow in Linux for a large project?

I have a large embedded project that runs Linux, with various processes and threads. I can't log all the malloc and new calls, as that would make the box - an embedded set-top box - sluggish. The sluggishness might even cause a crash because of a mutex timeout or other things. Thus, I want to make a tool that can help debug memory-related issues such as memory overflow.
For example, suppose you malloc 4 bytes but write 8. This may corrupt the neighbouring allocated chunk: its header can be tampered with, and then free() will fail or crash. How can I make a tool to detect such issues, and also one to track down memory leaks? Is there a way to do so? I can't use Valgrind as it slows down my STB, so I want to develop my own tool that can check for memory header corruption or memory leaks - depending on my choice, it should do either memory corruption checking or memory leak detection. Also, it should be lightweight.
Firstly there is probably no way to call this "simple".
Secondly if you are using C++ I highly suggest not using malloc/free but rather new/delete. The options for overriding those operators are much more flexible.
C++ really does provide a number of tools to improve memory safety:
Smart pointers (the performance cost really is worth the safety improvement).
Encapsulating things in classes. For example, if you use std::array::at(i) it will throw an exception if your access is out of bounds.
Lastly, proper use of asserts in your code can go a long way towards catching errors.
My point is merely that you should not depend on your debugging tools to negate the necessity of using good C++ programming methods.
OK, so next you need to override new and delete.
A google search will provide many ways to do this.
For your problem it probably makes more sense to overload new/delete globally.
Buffer overflow detection
This is the first part of your problem.
What you need to do is allocate additional memory in your overloaded new so that there are buffer regions before and after the user's memory, and then return only the centre part.
How big a buffer is your choice.
pseudo code:
static const std::size_t BUFFER = 16; // guard size on each side - your choice

void* operator new(std::size_t s)
{
    char* mem = static_cast<char*>(std::malloc(s + 2 * BUFFER));
    std::memset(mem, 0x5A, s + 2 * BUFFER); // fill guards (and block) with a known pattern
    return mem + BUFFER;                    // hand out only the centre part
}
At some stage in the future you need to check that the BUFFER regions still hold the value 0x5A. You should probably do this in the call to free(), but you can also have your own function to do this which you call periodically. To speed this check up, use a function like memcmp.
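A matching sketch for the checking side (BUFFER and the 0x5A pattern as above; a real tool would also need to remember each block's size):
#include <cassert>
#include <cstddef>
#include <cstring>

static const std::size_t BUFFER = 16;

// Verify both guard regions around a block handed out by the overloaded new.
void checkGuards(void* userPtr, std::size_t userSize)
{
    unsigned char expected[BUFFER];
    std::memset(expected, 0x5A, BUFFER);
    unsigned char* mem = static_cast<unsigned char*>(userPtr) - BUFFER;
    assert(std::memcmp(mem, expected, BUFFER) == 0);                     // front guard intact
    assert(std::memcmp(mem + BUFFER + userSize, expected, BUFFER) == 0); // rear guard intact
}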
Memory leak detection
Detecting memory leaks is not trivial.
Firstly, I suggest using stack-based objects whenever possible, to avoid allocating on the heap altogether when it is not needed.
The main question regarding memory leaks is how to know whether a certain memory block should have been deleted or not.
99% of your memory leak problems can probably be solved just by using smart pointers.
However, one of the most difficult kinds of memory leak to catch is a growing data structure (say, for example, a linked list that grows slowly over time).
Firstly, in your overloaded new/malloc functions, keep a list of all memory currently allocated, and also a counter of the total amount of memory allocated.
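A minimal sketch of that bookkeeping (names are illustrative; a real implementation must take care not to re-enter itself when the tracking map allocates):
#include <cstddef>
#include <map>
#include <mutex>

struct AllocTracker {
    std::map<void*, std::size_t> live; // every block currently allocated
    std::size_t totalBytes = 0;        // running total for threshold checks
    std::mutex lock;

    void onAlloc(void* p, std::size_t s) {
        std::lock_guard<std::mutex> g(lock);
        live[p] = s;
        totalBytes += s;
    }
    void onFree(void* p) {
        std::lock_guard<std::mutex> g(lock);
        auto it = live.find(p);
        if (it != live.end()) { totalBytes -= it->second; live.erase(it); }
    }
};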
Method 1: threshold detection:
Essentially, every time your program's memory usage exceeds a threshold amount, you report this and increase the threshold. If your program keeps exceeding thresholds as it runs, something is wrong.
Method 2: Comparative analysis:
In pseudo code:
size_t value1 = currentAmountOfMemoryUsed;
runSomeCode();
if (currentAmountOfMemoryUsed != value1) reportProblem();
Whether this is possible depends a lot on what happens in runSomeCode(), as some code can legitimately "save up" some memory for when it runs again later.
Method 3: Leak detection on program exit:
The premise is that if your code is written 100% correctly, every bit of memory allocated should have been freed by the time your program exits.
This method is, once again, not always possible, because perhaps your program needs to run indefinitely, and it might segfault because of your errors long before the leaks can be detected.
Compiler support
On a lower level, most compilers have some support for hooking into the whole memory-management system, but the way to do this is 100% compiler/platform specific, e.g. Visual Studio C++.
This is another reason why I highly suggest not using malloc/free directly: it is problematic for this style of debugging, and it also breaks the constructor/destructor design of C++.
overriding malloc/free
There is however a more hands-on approach to overriding malloc/free.
That is by defining your own malloc/free functions.
Typically, under debugging, this uses macros to include __FILE__ and __LINE__ in the call:
#ifndef NDEBUG
#define myMalloc(s) myMallocImplementation((s), __FILE__, __LINE__)
#else
#define myMalloc(s) malloc(s)
#endif
What this allows is that your malloc implementation can then save the source location where the memory was allocated. This approach will however not catch malloc/free usage within libraries you are using.
This is a bit harder to do with new/delete calls, as it would normally require some amount of digging into the call stack at run time to find out who called your operator new, and that again is fairly compiler specific.
Also see: MSDN blog article
Memory freezing
Given all of the above, I'd also like to mention something that is very common in safety-critical code (as used in motor vehicles and/or airplanes etc.).
Outside of initialization, a safety-critical program is usually not allowed to use malloc/free/new/delete. So all memory allocation must happen during initialization; once the program is up and running, malloc/free is frozen in some way, and any call to them after that will cause an assert.
This can be quite a heavy limitation to work with in a C++ environment but it does make for very robust code.
Note this does nothing for buffer overflows or invalid pointer accesses.
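A minimal sketch of such freezing, assuming a single global flag flipped at the end of initialization:
#include <cassert>
#include <cstdlib>
#include <new>

static bool g_allocationsFrozen = false; // flipped once initialization is done

void* operator new(std::size_t s)
{
    assert(!g_allocationsFrozen && "allocation after initialization is forbidden");
    if (void* p = std::malloc(s ? s : 1))
        return p;
    throw std::bad_alloc();
}

// At the end of initialization:
// g_allocationsFrozen = true;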

Should a warning or perhaps even an assertion failure be produced if delete is used to free memory obtained using malloc()?

In C++ using delete to free memory obtained with malloc() doesn't necessarily cause a program to blow up.
Should a warning, or perhaps even an assertion failure, be produced if delete is used to free memory obtained using malloc()?
Why did Stroustrup not include this feature in C++?
In C++ using delete to free memory obtained with malloc() doesn't necessarily cause a program to blow up.
No, but it does necessarily result in undefined behavior, which means that anything can happen, including the program blowing up or the program continuing to run in what appears to be a correct manner.
Do you guys think a warning or perhaps even an assertion failure should be produced if delete is used to free memory obtained using malloc()?
No. This is difficult, if not impossible, to check at compile time. Runtime checks are expensive, and in C++, you don't get what you don't pay for.
It might be a useful option to turn on while debugging, but really, the correct answer is just don't mix and match them. I've not once had trouble with this.
delete has a special property that free() does not: it calls destructors, which in turn may call more deletes as that object may have allocated other things on the heap.
That said, new also calls the object's constructor, while malloc() does not. Don't mix these things unless you're absolutely sure you know what you're doing (and if you did, you wouldn't mix these things anyway).
In C++ using delete to free memory obtained with malloc() doesn't necessarily cause a program to blow up.
You should never do that. Although some compilers allocate memory for new using malloc, free() doesn't call destructors, so if you mix new with free you'll get memory leaks and a lot of hard-to-deal-with problems. Just forget it; it isn't worth the effort.
Do you guys think a warning or perhaps even an assertion failure should be produced if delete is used to free memory obtained using malloc()??
As far as I know, on MSVC trying to delete memory allocated with malloc generates a debug error (something like "pCrtBlock is not valid", though I don't remember the exact message) - that is, if the project was built with the debug CRT libraries. This happens because the debug operator new allocates extra bookkeeping memory for every block, and that bookkeeping doesn't exist in memory allocated with malloc.
Why do you think that Stroustrup did not have this feature in C++?
In my opinion, because a programmer should be allowed to shoot himself in the foot with a rocket launcher if he really wants to. delete isn't even supposed to be compatible with malloc, so adding protection against total stupidity isn't worth the effort of building the feature.
A warning would be nice, but this probably does not happen because C++ was originally built upon C, so a runtime error could not be generated because malloc is valid C.
It's still extremely bad practice to do this, even if your program does not crash...
Personally the undefined behaviour of mixing and matching memory allocations schemes makes it undesirable to even mix them within the same program/library.
There are some static analysis tools that can identify this kind of memory allocation / deallocation imbalance and generate warnings for you. However static analysis is never 100% accurate or fool-proof, as there will always be limitations in the tools ability to track variables and memory locations in the program.
Full disclosure: I work at Red Lizard Software writing the Goanna static analysis tool for C/C++, and it is capable of detecting this kind of mismatch in some, though not all, instances.
I don't know why compilers don't seem to do this, it seems to be useful.
The compiler could certainly associate a bit of data with each pointer saying how it was allocated (with new, with malloc, or unknown), and when it found a free or a delete that didn't match the allocation, it could issue a warning. There would be no run-time overhead in doing this. It wouldn't catch all mistakes, but it would catch some, so it would be useful.
At run time, the C++ runtime on some compilers already adds lots of extra information in debug builds to catch buffer overruns and so on, so I don't see why it couldn't add a flag to a memory block saying how it was allocated and somehow report an error on a mismatched free.
I suppose the real answer is that it wouldn't actually catch many bugs at all. There seem to be an awful lot of questions on here - a new one every day - about mixing new and malloc, and yet in 10 years of programming I've never seen a program that does it. C programs use malloc, C++ programs use new, and I've almost never seen them mixed. If you are using C++ you just use new for everything. I guess there is legacy code out there that was originally C and has been reused in C++, but I doubt there is enough of it for this to be a big issue in real life.
Has anyone ever had a problem with this in a real program anyway?
No.
When programming a compiler, warnings are the second hardest thing to develop.
Therefore, you usually get warnings against important things - "something that could really happen to a professional" - not against the vast variety of completely dumb misuses of the APIs. Besides, malloc-allocated and new-allocated pointers may be impossible to distinguish at compile time.
No. A warning cannot be produced, because it is not possible to determine at compile time whether a pointer was assigned a piece of memory from malloc or from new, or is just pointing somewhere.
An assertion would be a nice feature, and so would array bounds checking, NULL pointer checks, etc., but C++ is an unmanaged programming language and assumes you know what you are doing.
No. Can't be done. I can (legitimately and in compliance with the spec) overload operator new and operator delete to use malloc and free (as I have done before), at which point, well, it's not really clear what your assertion would do to me.
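For concreteness, such a conforming overload looks roughly like this - after which memory obtained via malloc and via new comes from the very same allocator, so no assertion could tell them apart:
#include <cstdlib>
#include <new>

void* operator new(std::size_t s)
{
    if (void* p = std::malloc(s ? s : 1)) // malloc(0) may return null; new must not
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    std::free(p);
}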

How to find memory leaks in source code

If it is known that an application leaks memory (when executed), what are the various ways to locate such memory leak bugs in the source code of the application.
I know of certain parsers/tools (which probably do static analysis of the code) which can be used here but are there any other ways/techniques to do that, specific to the language (C/C++)/platform?
Compile your code with the -g flag.
Download Valgrind (if you work on Linux) and run it with the --leak-check=yes option.
I think that Valgrind is the best tool for this task.
For Windows, see this topic: Is there a good Valgrind substitute for Windows?
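A minimal sketch of that workflow (the file name and program are illustrative):
// leaky.cpp - deliberately leaks, to demonstrate the Valgrind workflow:
//   g++ -g leaky.cpp -o leaky
//   valgrind --leak-check=yes ./leaky
// Valgrind prints a loss record for the block, and the -g debug info lets it
// resolve the allocation stack to file and line.
int main()
{
    new int[10]; // never freed
    return 0;
}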
There's valgrind and probably other great tools out there.
But I'll tell you what I do, and it works very well for me, given that many times I code in environments where you can't run Valgrind:
Pair each allocation with a deallocation. I always count the news or mallocs and search for the matching delete or free.
If in C++ and using exceptions, try to pair them in constructors/destructors. If you like risk, or can't put them in a ctor/dtor, be sure no exception can make the program flow skip the deallocation.
Use smart pointers and pointer containers.
One can monitor allocations/deallocations by rewriting new or installing a malloc handler. If the code runs continuously, at some point it becomes obvious whether memory usage stays stationary or grows without bound - the latter being the worst case of leak.
Be careful with containers that never shrink, such as vectors. There are tricks to shrink them by swapping them with an empty container (sketched below).
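The swap trick in question, as a small sketch:
#include <vector>

void shrink(std::vector<int>& v)
{
    // The temporary copy is sized to v's contents only; swapping hands that
    // tight capacity back to v, and the old oversized buffer dies with the
    // temporary. Since C++11, v.shrink_to_fit() makes the same request.
    std::vector<int>(v).swap(v);
}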
There are two general techniques for memory leak detection, dynamic and static analysis.
In dynamic analysis, you run the code, and a tool analyzes the run to see what memory has leaked at the end. Dynamic analysis tends to be highly accurate, but it will only correctly analyze the specific executions you perform within your tool. So if some leak happens only for certain inputs, and you don't have a test that uses those inputs, dynamic analysis will not detect it.
Static analysis analyzes the source code, considering all possible code paths, to see whether a leak can occur in any of them. While static analysis is pretty good right now, it's not perfect: you can get not only false negatives (the analysis misses leaks) but also false positives (the tool claims you have a leak when there actually isn't one).
There are many dynamic analysis tools, including such well-known tools as Valgrind (open source, but limited to x86 Linux and Mac) and Purify (commercial, but also available for Windows, Solaris and AIX). Wikipedia has a decent list of some other dynamic analysis tools as well.
On the static analysis side, the only tool I've thought worthwhile is Coverity (commercial). Once again, Wikipedia has a list of many other static analysis tools.
Purify will do a seemingly miraculous job of doing this
Not only memory leaks, but many other kinds of memory errors.
It works by instrumenting your machine code in real time, so you don't need source or need to compile with any particular options.
Just instrument your code with Purify (simplest way to do this: CC="purify cc" make), run your program, and get a nice gui that will show your leaks and other errors.
Available for Windows, Linux, and various flavors of Unix. There's a free trial download available.
http://www.ibm.com/software/awdtools/purify
If you utilize smart pointers and keep a table of them, then you can analyze it to tell what memory you are still using. Either provide a window to view it, or more commonly, stream to a log before the program terminates.
As far as doing it manually is concerned, I don't think there are any established practices. To go over the code with a fine-toothed comb, looking for news (allocs) without corresponding deletes (frees), is all there is to it.
You can also use Purify for memory leak detection.
There aren't very many general purpose guidelines for finding memory leaks. Fortunately, there's one simple guideline for preventing most leaks, both of memory and of other resources: use RAII (Resource Acquisition Is Initialization), and they just won't happen to start with. The name is a lousy description, but if you Google on it, you should get quite a few useful hits.
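A minimal RAII sketch - the destructor releases the resource, so early returns and exceptions cannot leak it:
#include <cstdio>

class File {
    std::FILE* f_;
public:
    explicit File(const char* name) : f_(std::fopen(name, "r")) {}
    ~File() { if (f_) std::fclose(f_); }       // released on every exit path
    File(const File&) = delete;                // forbid accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};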
Personally, I would recommend that you wrap every variable for which you need to allocate/deallocate memory in the clone_ptr class, which performs all the deallocation for you when the memory is no longer needed; thus, you do not have to use delete. It is quite similar to auto_ptr. The major difference is that you do not have to deal with the tricky ownership-transfer part. More information and code on clone_ptr can be found here.

Heap corruption under Win32; how to locate?

I'm working on a multithreaded C++ application that is corrupting the heap. The usual tools to locate this corruption seem to be inapplicable. Old builds (18 months old) of the source code exhibit the same behaviour as the most recent release, so this has been around for a long time and just wasn't noticed; on the downside, source deltas can't be used to identify when the bug was introduced - there are a lot of code changes in the repository.
The prompt for the crashing behaviour is to generate throughput in this system - socket transfer of data which is munged into an internal representation. I have a set of test data that will periodically cause the app to throw an exception (in various places, for various causes - including heap alloc failing, thus: heap corruption).
The behaviour seems related to CPU power or memory bandwidth; the more of each the machine has, the easier it is to crash. Disabling a hyper-threading core or a dual-core core reduces the rate of (but does not eliminate) corruption. This suggests a timing related issue.
Now here's the rub:
When it's run under a lightweight debug environment (say Visual Studio 98 / AKA MSVC6) the heap corruption is reasonably easy to reproduce - ten or fifteen minutes pass before something fails horrendously and throws an exception, like a failed alloc; when running under a sophisticated debug environment (Rational Purify, VS2008/MSVC9 or even Microsoft Application Verifier) the system becomes memory-speed bound and doesn't crash (memory-bound: CPU is not getting above 50%, disk light is not on, the program's going as fast as it can, box consuming 1.3G of 2G of RAM). So, I've got a choice between being able to reproduce the problem (but not identify the cause) or being able to identify the cause of a problem I can't reproduce.
My current best guesses as to where to next is:
Get an insanely grunty box (to replace the current dev box: 2Gb RAM in an E6550 Core2 Duo); this will make it possible to reproduce the crash-causing misbehaviour when running under a powerful debug environment; or
Rewrite operators new and delete to use VirtualAlloc and VirtualProtect to mark memory as read-only as soon as it's done with. Run under MSVC6 and have the OS catch the bad-guy who's writing to freed memory. Yes, this is a sign of desperation: who the hell rewrites new and delete?! I wonder if this is going to make it as slow as under Purify et al.
And, no: Shipping with Purify instrumentation built in is not an option.
A colleague just walked past and asked "Stack Overflow? Are we getting stack overflows now?!?"
And now, the question: How do I locate the heap corruptor?
Update: balancing new[] and delete[] seems to have gotten a long way towards solving the problem. Instead of 15mins, the app now goes about two hours before crashing. Not there yet. Any further suggestions? The heap corruption persists.
Update: a release build under Visual Studio 2008 seems dramatically better; current suspicion rests on the STL implementation that ships with VS98.
Reproduce the problem. Dr Watson will produce a dump that might be helpful in further analysis.
I'll take a note of that, but I'm concerned that Dr Watson will only be tripped up after the fact, not when the heap is getting stomped on.
Another try might be using WinDebug as a debugging tool which is quite powerful being at the same time also lightweight.
Got that going at the moment, again: not much help until something goes wrong. I want to catch the vandal in the act.
Maybe these tools will allow you at least to narrow the problem to certain component.
I don't hold much hope, but desperate times call for...
And are you sure that all the components of the project have correct runtime library settings (C/C++ tab, Code Generation category in VS 6.0 project settings)?
No I'm not, and I'll spend a couple of hours tomorrow going through the workspace (58 projects in it) and checking they're all compiling and linking with the appropriate flags.
Update: This took 30 seconds. Select all projects in the Settings dialog, unselect until you find the project(s) that don't have the right settings (they all had the right settings).
My first choice would be a dedicated heap tool such as pageheap.exe.
Rewriting new and delete might be useful, but that doesn't catch the allocs committed by lower-level code. If this is what you want, better to Detour the low-level alloc APIs using Microsoft Detours.
Also sanity checks such as: verify your run-time libraries match (release vs. debug, multi-threaded vs. single-threaded, dll vs. static lib), look for bad deletes (eg, delete where delete [] should have been used), make sure you're not mixing and matching your allocs.
Also try selectively turning off threads and see when/if the problem goes away.
What does the call stack etc look like at the time of the first exception?
I have the same problems in my work (we also use VC6 sometimes), and there is no easy solution for it. I have only some hints:
Try automatic crash dumps on the production machine (see Process Dumper). My experience says Dr. Watson is not perfect for dumping.
Remove all catch(...) from your code. They often hide serious memory exceptions.
Check Advanced Windows Debugging - there are lots of great tips for problems like yours. I recommend it with all my heart.
If you use STL, try STLport and checked builds. Invalid iterators are hell.
Good luck. Problems like yours take us months to solve. Be ready for that...
We've had pretty good luck by writing our own malloc and free functions. In production, they just call the standard malloc and free, but in debug, they can do whatever you want. We also have a simple base class that does nothing but override the new and delete operators to use these functions; any class you write can then simply inherit from it. If you have a ton of code, it may be a big job to replace calls to malloc and free with the new versions (don't forget realloc!), but in the long run it's very helpful.
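A minimal sketch of such a base class (debug_malloc/debug_free stand in for the custom functions described above; their definitions live elsewhere and are assumptions here):
#include <cstddef>

void* debug_malloc(std::size_t s); // forwards to malloc, plus bookkeeping
void  debug_free(void* p);         // forwards to free, plus bookkeeping

struct DebugAllocated {
    void* operator new(std::size_t s)       { return debug_malloc(s); }
    void  operator delete(void* p) noexcept { debug_free(p); }
};

// Any class inheriting from DebugAllocated is now tracked automatically:
class Widget : public DebugAllocated { /* ... */ };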
In Steve Maguire's book Writing Solid Code (highly recommended), there are examples of debug stuff that you can do in these routines, like:
Keep track of allocations to find leaks
Allocate more memory than necessary and put markers at the beginning and end of memory -- during the free routine, you can ensure these markers are still there
memset the memory with a marker on allocation (to find usage of uninitialized memory) and on free (to find usage of free'd memory)
Another good idea is to never use things like strcpy, strcat, or sprintf -- always use strncpy, strncat, and snprintf. We've written our own versions of these as well, to make sure we don't write off the end of a buffer, and these have caught lots of problems too.
Run the original application with ADPlus -crash -pn appname.exe
When the memory issue pops up, you will get a nice big dump.
You can analyze the dump to figure out which memory location was corrupted.
If you are lucky, the overwriting memory is a unique string and you can figure out where it came from. If you are not lucky, you will need to dig into the Win32 heap and figure out what the original memory characteristics were (!heap -x might help).
After you know what was messed up, you can narrow Application Verifier usage with special heap settings, i.e. you can specify which DLL you monitor, or which allocation size to monitor.
Hopefully this will speed up the monitoring enough to catch the culprit.
In my experience, I never needed full heap-verifier mode, but I spent a lot of time analyzing the crash dump(s) and browsing the sources.
P.S:
You can use DebugDiag to analyze the dumps.
It can point out the DLL owning the corrupted heap, and give you other useful details.
You should attack this problem with both runtime and static analysis.
For static analysis, consider compiling with PREfast (cl.exe /analyze). It detects mismatched delete and delete[], buffer overruns and a host of other problems. Be prepared, though, to wade through many kilobytes of L6 warnings, especially if your project still has L4 warnings unfixed.
PREfast is available with Visual Studio Team System and, apparently, as part of Windows SDK.
Is this in low memory conditions? If so it might be that new is returning NULL rather than throwing std::bad_alloc. Older VC++ compilers didn't properly implement this. There is an article about Legacy memory allocation failures crashing STL apps built with VC6.
The apparent randomness of the memory corruption sounds very much like a thread-synchronization issue - the bug reproduces depending on machine speed. If objects (chunks of memory) are shared among threads and the synchronization primitives (critical section, mutex, semaphore, other) are not on a per-object basis, then it is possible to get into a situation where a chunk of memory is deleted/freed while in use, or used after being deleted/freed.
As a test for that, you could add synchronization primitives to each class and method. This will make your code slower because many objects will have to wait for each other, but if this eliminates the heap corruption, your heap-corruption problem will become a code optimization one.
You tried old builds, but is there a reason you can't keep going further back in the repository history and seeing exactly when the bug was introduced?
Otherwise, I would suggest adding simple logging of some kind to help track down the problem, though I am at a loss as to what specifically you might want to log.
If you can find out what exactly CAN cause this problem, via google and documentation of the exceptions you are getting, maybe that will give further insight on what to look for in the code.
My first action would be as follows:
Build the binaries in "Release" version but creating debug info file (you will find this possibility in project settings).
Use Dr Watson as the default debugger (DrWtsn32 -I) on a machine on which you want to reproduce the problem.
Reproduce the problem. Dr Watson will produce a dump that might be helpful in further analysis.
Another try might be using WinDebug as a debugging tool which is quite powerful being at the same time also lightweight.
Maybe these tools will allow you at least to narrow the problem to certain component.
And are you sure that all the components of the project have correct runtime library settings (C/C++ tab, Code Generation category in VS 6.0 project settings)?
So from the limited information you have, this can be a combination of one or more things:
Bad heap usage, i.e., double frees, read after free, write after free, setting the HEAP_NO_SERIALIZE flag with allocs and frees from multiple threads on the same heap
Out of memory
Bad code (i.e., buffer overflows, buffer underflows, etc.)
"Timing" issues
If it's either of the first two but not the last, you should have caught it by now with pageheap.exe.
Which most likely means it is due to how the code is accessing shared memory. Unfortunately, tracking that down is going to be rather painful. Unsynchronized access to shared memory often manifests as weird "timing" issues. Things like not using acquire/release semantics for synchronizing access to shared memory with a flag, not using locks appropriately, etc.
At the very least, it would help to be able to track allocations somehow, as was suggested earlier. At least then you can view what actually happened up until the heap corruption and attempt to diagnose from that.
Also, if you can easily redirect allocations to multiple heaps, you might want to try that to see whether it either fixes the problem or results in more reproducible buggy behavior.
When you were testing with VS2008, did you run with HeapVerifier with Conserve Memory set to Yes? That might reduce the performance impact of the heap allocator. (Plus, you have to run with it Debug->Start with Application Verifier, but you may already know that.)
You can also try debugging with Windbg and various uses of the !heap command.
MSN
Graeme's suggestion of custom malloc/free is a good idea. See if you can characterize some pattern about the corruption to give you a handle to leverage.
For example, if it is always in a block of the same size (say 64 bytes), then change your malloc/free pair to always allocate 64-byte chunks in their own page. When you free a 64-byte chunk, set the memory protection bits on that page to prevent reads and writes (using VirtualProtect). Then anyone attempting to access this memory will generate an exception rather than corrupting the heap.
This does assume that the number of outstanding 64 byte chunks is only moderate or you have a lot of memory to burn in the box!
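A minimal Win32 sketch of that idea (the 64-byte size follows the example above):
#include <windows.h>

void* alloc_chunk()
{
    // One committed page per chunk - wasteful on purpose.
    return VirtualAlloc(nullptr, 64, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
}

void free_chunk(void* p)
{
    DWORD oldProtect;
    // Revoke all access: any later read or write faults instead of silently
    // corrupting the heap. The page is deliberately never recycled.
    VirtualProtect(p, 64, PAGE_NOACCESS, &oldProtect);
}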
If you choose to rewrite new/delete, I have done this and have simple source code at:
http://gandolf.homelinux.org/~smhanov/blog/?id=10
This catches memory leaks and also inserts guard data before and after the memory block to capture heap corruption. You can integrate it by putting #include "debug.h" at the top of every CPP file, and defining DEBUG and DEBUG_MEM.
Some time ago I had to solve a similar problem. If the problem still exists, I suggest you do this:
Monitor all calls to new/delete and malloc/calloc/realloc/free.
I made a single DLL exporting a function for registering all the calls. This function receives parameters identifying your source location, a pointer to the allocated area and the type of call, and saves this information in a table.
Every matched allocate/free pair is eliminated. At the end (or whenever you need it) you call another function to create a report of the data left over.
With this you can identify wrong pairings (new/free or malloc/delete) or missing deallocations.
If any buffer is overwritten in your code, the saved information can be corrupted, but each test may still detect/discover a failure. Many runs help identify the errors.
Good luck.
Do you think this is a race condition? Are multiple threads sharing one heap? Can you give each thread a private heap with HeapCreate? Then they can run fast with HEAP_NO_SERIALIZE. Otherwise, a heap should be thread-safe if you're using the multi-threaded version of the system libraries.
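A minimal sketch of the private-heap idea (each thread would create and use only its own heap):
#include <windows.h>

void per_thread_heap_demo()
{
    // HEAP_NO_SERIALIZE skips the heap's internal lock - safe only because
    // no other thread ever touches this heap.
    HANDLE heap = HeapCreate(HEAP_NO_SERIALIZE, 0, 0);
    void* p = HeapAlloc(heap, 0, 128);
    // ... use p from this thread only ...
    HeapFree(heap, 0, p);
    HeapDestroy(heap);
}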
A couple of suggestions. You mention the copious warnings at W4 - I would suggest taking the time to fix your code to compile cleanly at warning level 4 - this will go a long way to preventing subtle hard to find bugs.
Second - for the /analyze switch - it does indeed generate copious warnings. To use this switch in my own project, what I did was to create a new header file that used #pragma warning to turn off all the additional warnings generated by /analyze. Then, further down in the file, I turn on only those warnings I care about. Then use the /FI compiler switch to force this header file to be included first in all your compilation units. This should allow you to use the /analyze switch while controlling the output.
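A minimal sketch of such a header (the specific warning numbers are placeholders; I believe MSVC defines _PREFAST_ when /analyze is active, but verify against your toolchain):
// analyze_warnings.h - force-included everywhere via /FIanalyze_warnings.h
#pragma once
#ifdef _PREFAST_                      // only while /analyze is running
#pragma warning(disable: 6001 6011)  // mute the noisy analysis warnings...
#pragma warning(default: 6386)       // ...but keep the ones you care about
#endif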