Memory leaks not detected by CRT memory leak detection - C++

I have the problem that my application has an ever-growing memory leak which is not detected. What I do, very simplified, is create an object, run a method on it and then delete the object. Each time I do this, the memory usage shown in the Task Manager grows by around 50-100 MB, which exhausts my whole memory after some runs. I do this with multithreading, but there are no static variables, so there is no collision between the different objects in my threads. They only use static methods of other objects that don't modify any memory other than what is passed in the parameters, so it's thread-safe.
What I have tried to find the reason:
Using crtdbg.h (CRT memory leak detection), but it only reports leaks that have existed since the start of my application; they get deleted at shutdown and they are not that big.
I checked for virtual destructors in all the classes I inherit from, but they are all OK.
What else can I try to find out where my application leaks? I can't find any leaks on the heap, and I don't know of any reason other than the missing-virtual-destructor problem that could cause "stack leaks" (by which I mean that an object fails to destroy a local std::string member which has allocated space on the heap). I don't know if there are other causes for such leaks, but I do know that in the parts of my method where the memory grows the most there are no heap allocations.

You probably want to use a nicer, more robust leak detector. You may also need to use a leak detector that can output a heap report at different times while your program is running. Finally, you should consider that your problem might be due to heap fragmentation rather than just a leak.
You can try Visual Leak Detector, which is free.
This question contains a list of other memory check products, from the basic to the quite advanced/expensive. CRTDBG is the lowest-common-denominator solution; I've had good luck with BoundsChecker, although it is not free.

Not sure how you have used the CRTDBG library, but it provides lots of goodies:
http://msdn.microsoft.com/en-us/library/x98tx3cf.aspx
You can use _CrtMemCheckpoint in a divide-and-conquer manner. It allows you to measure the difference in memory use between two points in your code. With multithreading this can be difficult.
Another is _CrtDumpMemoryLeaks (which I suppose is executed anyway at app exit) with _CRTDBG_MAP_ALLOC enabled; this should show the exact location of the memory allocations.
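A minimal sketch of combining these, assuming a Visual C++ debug build (runSuspectCode is a hypothetical stand-in for the code you want to measure):

#define _CRTDBG_MAP_ALLOC   // must be defined before the includes so allocations record file/line
#include <stdlib.h>
#include <crtdbg.h>

void checkSuspectCode()
{
    _CrtMemState before, after, diff;
    _CrtMemCheckpoint(&before);

    runSuspectCode();                         // the code under suspicion

    _CrtMemCheckpoint(&after);
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);         // dumps what was allocated between the two checkpoints
}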
Another hint: maybe you have CRTDBG over-configured; with lots of small allocations it can create huge internal memory structures.
Try switching off parts of your code and check whether the problem persists.
If you build your app on a daily basis, try running previous versions to spot where the problem appeared, then compare the changes in your source code repository.
...

Related

How to find "weak" memory leaks

In C++ programs, I have sometimes had problems with "weak" memory leaks. By that, I mean that some objects accumulate resources, but, eventually, these objects are destroyed properly and their memory is released, so that these leaks do not show up using the traditional memory debugging tools like valgrind or address sanitizers.
A typical example would be a poorly-written cache that keeps all the cached results from the beginning of the program. It grows forever, but its memory is reclaimed at the end of the program, when the cache is destroyed.
How can one debug this? Are there tools available to see where the largest objects allocated by the program are? To dump the current state of allocated memory (including the call stack)? To see which objects are growing? I'm using Linux, but I am interested in other platforms as well.
If other platforms are an option, I would recommend Visual Studio on Windows.
It has powerful profiling options, including one for memory usage.
https://learn.microsoft.com/en-us/visualstudio/profiling/memory-usage
While debugging you can take a snapshot to see where memory is being used.
You can also take memory usage snapshots at different times and compare them.
You can use a profiler like e.g. Intel VTune (this is available for Linux as well) to trace memory consumption of your application. In VTune, you can see the memory consumption over time, and select a time window to see where memory was allocated during that window.
It still will be difficult to detect such problems if your application allocates and deallocates a lot of memory correctly, and only a small fraction is deallocated too late. In that case you need to check a lot of allocations/deallocations before you can find the bad one(s).

How to make a simple tool to detect double free or memory overflow in Linux for a large project?

I have a large embedded project running Linux, with various processes and threads. I can't log all the malloc and new calls, as that would make the box (an embedded set-top box) sluggish. Sluggishness might also cause a crash because of mutex timeouts or other things. So I want to make a tool that can help debug memory issues such as memory overflow.
For example, you malloc 4 bytes but write 8 bytes. This may corrupt the neighbouring chunk of allocated data: its header can be tampered with, so free() will fail or crash. How can I make a tool to detect such issues, and also a tool to track down memory leaks? Is there a way to do so? I can't use Valgrind, as it slows down my STB too much. So I want to develop my own lightweight tool that can check for memory header corruption or memory leaks; based on my choice it would do either corruption checking or leak detection.
Firstly there is probably no way to call this "simple".
Secondly if you are using C++ I highly suggest not using malloc/free but rather new/delete. The options for overriding those operators are much more flexible.
C++ really does provide a number of tools to improve memory safety:
smart pointers (the small performance cost is well worth the safety improvement)
encapsulating things in classes; for example, std::array::at(i) throws an exception if the access is out of bounds
lastly, proper use of asserts in your code can go a long way towards catching errors.
My point is merely that you should not depend on your debugging tools to negate the necessity of using good C++ programming methods.
OK, so next you need to override new and delete.
A Google search will provide many ways to do this.
For your problem it probably makes more sense to overload delete/new globally.
Buffer overflow detection
This is the first part of your problem.
What you need to do is allocate additional memory in your overloaded operator new so that there are guard regions before and after the user's memory, and then return only the centre part.
How big the guard buffer is, is your choice.
pseudo code:
inline void* operator new(size_t s)
{
    // allocate guard space before and after the payload
    char* mem = static_cast<char*>(malloc(s + 2 * BUFFER));
    memset(mem, 0x5A, s + 2 * BUFFER);   // fill the whole block, guards included, with a known pattern
    return mem + BUFFER;                 // hand back only the middle region
}
At some stage in the future you need to check that the BUFFER regions kept the values of 0x5A. You should probably do this in the call to free() but you can also have your own function to do this which you call periodically. In order to speed up this process use a function like memcmp perhaps.
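A minimal sketch of such a check (guardsIntact is a hypothetical helper; it assumes BUFFER is the guard size used in the overloaded new above and that the payload size s is known, for example from a side table filled in at allocation time):

#include <cstddef>

bool guardsIntact(const void* userPtr, std::size_t s)
{
    const unsigned char* p = static_cast<const unsigned char*>(userPtr);
    const unsigned char* front = p - BUFFER;  // guard bytes before the payload
    const unsigned char* back  = p + s;       // guard bytes after the payload
    for (std::size_t i = 0; i < BUFFER; ++i)
        if (front[i] != 0x5A || back[i] != 0x5A)
            return false;                     // something wrote outside the payload
    return true;
}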
Memory leak detection
Detecting memory leaks is not trivial.
Firstly, I suggest using stack-based objects whenever possible, to avoid allocating memory on the heap altogether when it is not needed.
The main question regarding memory leaks is knowing whether a certain memory block should have been deleted by now or not.
99% of your memory leak problems can probably be solved just by using smart pointers.
However, one of the most difficult memory leaks to catch is a growing data structure (say, for example, a linked list that grows slowly over time).
First, in your overloaded new/malloc functions, keep a list of all memory currently allocated, and also a counter of the total amount of memory allocated.
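As a minimal sketch, the bookkeeping could look roughly like this (the names are hypothetical; note that using std::map inside a globally overloaded operator new would recurse into that same operator, so a real implementation needs its own non-tracking container and a mutex for multithreaded use):

#include <cstddef>
#include <map>

std::map<void*, std::size_t> g_liveAllocations;  // pointer -> size of each live block
std::size_t g_totalAllocated = 0;                // running total of allocated bytes

void recordAlloc(void* p, std::size_t s)
{
    g_liveAllocations[p] = s;
    g_totalAllocated += s;
}

void recordFree(void* p)
{
    auto it = g_liveAllocations.find(p);
    if (it != g_liveAllocations.end()) {
        g_totalAllocated -= it->second;
        g_liveAllocations.erase(it);
    }
}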
Method 1: threshold detection:
Essentially, every time your program's memory usage exceeds a threshold amount, you report this and increase the threshold. If your program continues to exceed thresholds as it keeps running, something is wrong.
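A rough sketch, reusing the hypothetical g_totalAllocated counter from the bookkeeping above (the starting threshold and growth factor are arbitrary choices):

#include <cstddef>
#include <cstdio>

void checkMemoryThreshold()
{
    static std::size_t threshold = 100 * 1024 * 1024;  // arbitrary 100 MB starting point
    if (g_totalAllocated > threshold) {
        std::fprintf(stderr, "memory use passed %zu bytes\n", threshold);
        threshold *= 2;  // raise the bar; repeated reports mean something keeps growing
    }
}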
Method 2: Comparative analysis:
In pseudo-code:
Value1 = currentAmountOfMemoryUsed;
runSomeCode();
if (currentAmountOfMemoryUsed != Value1) reportProblem();
Whether this is possible depends a lot on what happens in runSomeCode(), as some code can legitimately "save up" some memory for when it runs again later.
Method 3: Leak detection on program exit:
The premise is that if your code is 100% correctly written, every bit of memory allocated should have been freed by the time your program exits.
This method once again is not always possible because perhaps your program needs to run indefinitely and also your program might segfault because of your errors long before this can be detected.
Compiler support
At a lower level, most compilers have some support for hooking into the whole memory management system, but the way to handle this is 100% compiler/platform specific (e.g. Visual C++).
This is why I highly suggest not using malloc/free directly, as it is problematic to debug this way, and it also breaks the constructor/destructor design patterns of C++.
overriding malloc/free
There is however a more hands-on approach to overriding malloc/free.
That is by defining your own malloc/free functions.
Typically under debugging this will then use macros to include __FILE__ and __LINE__ in the call:
#ifndef NDEBUG
#define myMalloc(s) myMallocImplementation((s), __FILE__, __LINE__)
#else
#define myMalloc(s) malloc(s)
#endif
What this allows is that your malloc implementation can then save the source location where the memory was allocated. This approach will however not catch malloc/free usage within libraries you are using.
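A minimal single-threaded sketch of what such an implementation might look like (the names match the macro above; myFreeImplementation is an assumed counterpart, and a real version would need locking and should avoid recording its own internal allocations):

#include <cstdlib>
#include <map>

struct AllocInfo { std::size_t size; const char* file; int line; };
static std::map<void*, AllocInfo> g_allocSites;   // pointer -> where it was allocated

void* myMallocImplementation(std::size_t s, const char* file, int line)
{
    void* p = std::malloc(s);
    if (p)
        g_allocSites[p] = AllocInfo{ s, file, line };
    return p;
}

void myFreeImplementation(void* p)
{
    g_allocSites.erase(p);   // whatever is left in the map at shutdown is a leak candidate
    std::free(p);
}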
This is a bit harder to do with new/delete calls as it would normally require some amount of digging into the call-stack at run-time to find out who called your new() function and that again is fairly compiler specific.
Also see: MSDN blog article
Memory freezing
Given everything, I would also like to mention something that is very common in safety-critical code (as used in motor vehicles and/or airplanes etc.).
Outside of initialization, a safety-critical program is usually not allowed to use malloc/free/new/delete. So all memory allocations must happen during initialization, and then, once the program is running, malloc/free is usually frozen in some way. Any call to malloc/free after that will cause an assert.
This can be quite a heavy limitation to work with in a C++ environment but it does make for very robust code.
Note this does nothing for buffer overflow access or invalid pointer access problems.
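A minimal sketch of such a freeze, assuming allocations go through an overloaded operator new as discussed earlier (the function and flag names are hypothetical):

#include <cassert>
#include <cstdlib>
#include <new>

static bool g_allocationsFrozen = false;

void freezeAllocations() { g_allocationsFrozen = true; }  // call once initialization is done

void* operator new(std::size_t s)
{
    assert(!g_allocationsFrozen && "allocation attempted after the initialization phase");
    void* p = std::malloc(s);
    if (!p) throw std::bad_alloc();
    return p;
}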

C++ program dies with std::bad_alloc, BUT valgrind reports no memory leaks

My program fails with a 'std::bad_alloc' error message. The program is scalable, so I've tested a smaller version with Valgrind and there are no memory leaks.
This is a statistical mechanics application, so I am basically creating hundreds of objects, changing their internal data (in this case STL vectors of doubles), and writing to a data file. The creation of the objects lies inside a loop, so when it ends the memory is freed. Something like:
for (cont = 0; cont < MAX; cont++) {
    classSection seccion;
    seccion.GenerateObjects(...);
    while (somecondition) {
        seccion.evolve();
        seccion.writedatatofile();
    }
}
So there are two variables which set the computing time of the program: the size of the system and the number of runs. It only crashes for big systems with many runs. Any ideas on how to catch this memory problem?
Thanks,
Run the program under a debugger so that it stops once that exception is thrown and you can observe the call stack.
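For example, with gdb on Linux you can break at the point the exception is thrown and then print the backtrace (./yourprogram stands in for your executable):

gdb ./yourprogram
(gdb) catch throw
(gdb) run
(gdb) bt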
Three most probable problems are:
heap fragmentation
too many objects created on heap (but still pointed to from the program)
a request for an unreasonably large block of memory
valgrind would not show a memory leak because you may well not have one that valgrind would find.
You can actually have memory leaks in garbage-collected languages like Java. Although the memory is cleaned up there, it does not mean a bad programmer cannot hold on indefinitely to data they no longer need (eg building up a hash-map indefinitely). The garbage collector cannot determine that the user does not really need that data anymore.
You may be doing something like that here but we would need to see more of your code.
By the way, if you have a collection that really does have masses of data you are often better off using std::deque rather than std::vector unless you really really need it all to be contiguous.

Increase in memory footprint. False alarm or a memory leak?

I have a graphics program where I am creating and destroying the same objects over and over again. All in all, there are 140 objects. They get deleted and newed such that the number never exceeds 140. This is a requirement, as it is a stress test; that is, I cannot have a memory pool or dummy objects. Now I am fairly certain there aren't any memory leaks. I am also using a memory leak detector, which is not reporting any leaks.
The problem is that the memory footprint of the program keeps increasing (albeit quite slowly, slower than the rate at which the objects are being destroyed/created). So my question then is whether an increasing memory footprint is a solid sign for memory leaks or can it sometimes be deceiving?
EDIT: I am using new/delete to create/destroy the objects
It does seem possible that this behavior could come from a situation in which there is no leak.
Is there any chance that your heap is getting fragmented?
Say you make lots of allocations of size n. You free them all, which makes your C library insert those buffers into a free list. Some other code path then makes allocations smaller than n, so those blocks in the free list get chunked up into smaller units. Then the next iteration of the loop does another batch of allocations of size n, and the free list no longer contains contiguous memory at that size, and malloc has to ask the kernel for more memory. Eventually those "smaller-than-n" allocations get freed as would your "n-sized" ones, but if you run enough iterations where the fragmentation exists, I could see the process gradually increasing its memory footprint.
One way to avoid this might be to allocate all your objects once, and not keep allocating/freeing them. Since you're using C++ this might necessitate placement new or something similar. Since you are using Windows, I might also mention that Win32 supports having multiple heaps in a process, so if your objects come from a different heap than other allocations you may avoid this.
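As a rough illustration of the placement-new approach (MyObject and doWork are hypothetical stand-ins for one of your object types and its use), the storage is obtained once and the object is constructed and destroyed in place instead of going back to the heap each iteration:

#include <new>

void stressIteration()
{
    // storage obtained once, reused on every call, instead of new/delete on the heap
    alignas(MyObject) static unsigned char storage[sizeof(MyObject)];

    MyObject* obj = new (storage) MyObject();  // construct in place
    obj->doWork();                             // hypothetical use of the object
    obj->~MyObject();                          // destroy explicitly; the storage is reused next call
}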
It depends on whether you're running under a CLR (or a virtual machine with a garbage collector) or still in native mode (like C++, MFC etc.).
When you have a GC around you can't really tell, only if you test it long enough; the GC can decide not to clean up your objects for now (there is a way to force it).
In native applications, yes, a footprint increase might mean a leak.
There are some tools (very good tools) for C++ that find these leaks (google DevPartner or BoundsChecker).
I guess there are some tools for C# and Java as well.
If your application's process footprint increases beyond a reasonable limit (which depends on your application and what it does) and continues to increase until you eventually run out of virtual memory, you definitely have a memory leak.
Try memory allocation tests included in CRT: http://msdn.microsoft.com/en-us/library/e5ewb1h3%28VS.80%29.aspx
They help A LOT.
But I've noticed that apps do tend to vary their memory consumption a little if you look at some factors. Windows 7 might also create extra padding in memory allocation to fix bugs: http://msdn.microsoft.com/en-us/library/dd744764%28VS.85%29.aspx
I strongly suggest trying Visual Studio 2015 (the Community Edition is free). It comes with Diagnostic Tools that help you analyze memory usage; it allows you to take snapshots and view the heap.

Memory leak in C++

I am running my C++ application on an Intel XScale device. The problem is that when I run my application off-target (on Ubuntu) with Valgrind, it does not show any memory leaks.
But when I run it on the target system, it starts with 50K of free memory, which reduces to 2K overnight. How can I catch this kind of leakage, which is not being shown by Valgrind?
A common culprit on these small embedded devices is memory fragmentation. You might have free memory in your application, but it sits between two objects. A common solution to this is a dedicated allocator (operator new in C++) for the most common classes. Memory pools used purely for objects of size N don't fragment; the space between two objects will always be a multiple of N.
It might not be an actual memory leak, but rather a case of ever-increasing memory usage. For example, it could be building a continually growing string:
std::string s;
for (int i = 0; i < n; i++)
    s += "a";   // the string keeps growing and its capacity is never released
50k isn't that much, maybe you should go over your source by hand and see what might be causing the issue.
This may not be a leak, but just the runtime heap not releasing memory to the operating system. It can also be fragmentation.
Possible ways to overcome this:
Split it into two applications. The master application will have the simple logic with little or no dynamic memory usage. It will start a worker application to actually do the work, in chunks small enough that the worker will not run out of memory, and it will restart that worker periodically. This way memory is periodically returned to the operating system.
Write your own memory allocator. For example you can allocate a dedicated heap and only allocate memory from there, then free the dedicated heap entirely. This requires the operating system to support multiple heaps.
Also note that it's possible that your program runs differently on Ubuntu and on the target system and therefore different execution paths are taken and the code resulting in memory leaks is executed on the target system, but not on Ubuntu.
This does sound like fragmentation. Fragmentation is caused by allocating objects on the heap and then freeing them in a mixed order, say:
object1
object2
object3
object4
And then deleting some objects
object1
object3
object4
You now have a hole in the memory that is unused. If you allocate another object that's too big for the hole, the hole will remain wasted. Eventually, with enough memory churn, you can end up with so many holes that they waste your memory.
The way around this is to try and decide your memory requirements up front. If you've got particular objects that you know you are creating many of, try and ensure they're the same size.
You can use a pool to make the allocations more efficient for a particular class... or at least let you track it better so you can understand what's going on and come up with a good solution.
One way of doing this is to create a single static:
struct Slot
{
    Slot() : free(true) {}
    bool free;
    BYTE data[20]; // you'll need to tune the value 20 to what your program needs
};

Slot pool[500]; // you'll need to pick a good pool size too.
Create the pool up front when your program starts, and pre-allocate it so that it is as big as the maximum requirement of your program. You may want to HeapAlloc it (or the equivalent in your OS) so that you can control when it appears, from somewhere in your application startup.
Then override the new and delete operators for a suspect class so that they return slots from this pool; your objects will then be stored in the pool (a sketch follows below).
You can override new and delete for several classes of the same size so that they share the same pool.
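A rough sketch of what such a per-class overload could look like, reusing the Slot pool above (SuspectClass is a hypothetical stand-in; this is not thread-safe and the error handling is minimal):

#include <new>

class SuspectClass
{
public:
    void* operator new(size_t s)
    {
        for (Slot& slot : pool)
            if (slot.free && s <= sizeof(slot.data)) {
                slot.free = false;
                return slot.data;       // hand out the slot's storage
            }
        throw std::bad_alloc();         // pool exhausted or the object is too big for a slot
    }

    void operator delete(void* p)
    {
        for (Slot& slot : pool)
            if (static_cast<void*>(slot.data) == p) {
                slot.free = true;       // mark the slot as reusable
                return;
            }
    }

    // ... the rest of the class ...
};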
Create pools of different sizes for different objects.
Just go for the worst offenders at first.
I've done something like this before and it solved my problem on an embedded device. I also was using a lot of STL, so I created a custom allocator (google for stl custom allocator - there are loads of links). This was useful for records stored in a mini-database my program used.
If your memory usage goes down, I don't think it can be defined as a memory leak.
Where are you getting the reports of memory usage? The system might just have put most of your program's memory in virtual memory.
All I can add is that Valgrind is known to be pretty efficient at finding memory leaks!
Also, are you sure that when you profiled your code, the code coverage was enough to cover all the code paths which might be executed on the target platform?
Valgrind for sure does not lie. As has been pointed out, this might indeed be the runtime heap not releasing the memory, but I would think otherwise.
Are you using any sophisticated technique to track the scope of objects? If yes, then Valgrind may not be smart enough, though you can try setting the XScale-related options with Valgrind.
Most applications show a pattern of memory use like this:
they use very little when they start
as they create data structures they use more and more
as they start deleting old data structures or reusing existing ones, they reach a steady state where memory use stays roughly constant
If your app is continuously increasing in size, you may have a leak. If it increases in size over a period and then reaches a relatively steady state, you probably don't.
You can use the massif tool from Valgrind, which will show you where the most memory is allocated and how it evolves over time.
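For example, on the command line (the output file name contains the process id):

valgrind --tool=massif ./yourprogram
ms_print massif.out.<pid>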