Why this code is not causing memory leak? - c++

I wanted to simulate a memory leak in my application. I wrote the following code and tried to watch the result in perfmon.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main()
{
    int *i;
    while (1)
    {
        i = (int *) malloc(1000);
        if (i == NULL)
        {
            printf("Memory Not Allocated\n");
        }
        else
        {
            // write to the block just to avoid lazy allocation
            *i = 100;
        }
        Sleep(1000);
    }
}
When I watch used memory in Task Manager, it fluctuates between 52K and 136K but never goes beyond that. Sometimes it shows 52K and sometimes 136K; I do not understand how this code can reach 136K, drop back to 52K, and never go any higher.
I tried to use perfmon as well, but I am not sure exactly which counters to watch or what a snapshot of them should tell me.
Please suggest how to simulate a memory leak and how to detect it.

While an OS may defer the actual allocation of dynamically allocated memory until it is used, the compiler's optimizer may also eliminate allocations that are only written to and never read from. Because your writes have no observable behaviour (you never read the values back), the compiler may well optimize the whole allocation away. I would suggest examining the generated assembly code to see what the compiler is actually producing. Really, this ought to be one of the first steps in answering questions like "why doesn't this code behave the way I think it should?".

Strictly, a memory leak is a bit context dependent: something in your program keeps allocating memory over time and not freeing it, when it should have been freed.
Your code produces a "leak" on each pass through the while loop, because the program loses track of the previously allocated pointer at that point. In this case that is only visible by inspection, however; from the code posted it looks more like what you are actually doing, albeit very slowly, is trying to create a memory stress situation.
To find a leak without inspection you need to run a tool like valgrind (Unix/Linux/OS X) or, in Visual Studio, enable allocation tracing with the DEBUG_NEW macro and view the output in the debugger.
If you really want to stress memory in a hurry, allocate 1024 x 1024 x 1024 bytes at a time...
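For reference, a minimal sketch of a leak that actually shows up in Task Manager (Windows assumed, since the original code uses Sleep; the 1 MB block size is arbitrary): allocate a sizeable block each iteration, touch every byte so the pages are really committed, and never free it.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <windows.h>

int main()
{
    for (;;)
    {
        char *block = static_cast<char *>(std::malloc(1024 * 1024));
        if (block == NULL)
        {
            std::printf("Memory Not Allocated\n");
            break;
        }
        // touch every page so the OS commits the memory instead of lazily reserving it
        std::memset(block, 0xAB, 1024 * 1024);
        // deliberately never free(block): this is the leak
        Sleep(1000);
    }
    return 0;
}
The working set should now grow by roughly 1 MB per second. On Linux the same idea works with usleep, and running the program under valgrind --leak-check=full reports the blocks as "definitely lost".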

Related

Debugging Memory Leak on C/C++ Application running on Embedded Linux Device

I have an application running on an ARM Cortex-A9. When I enter a certain portion of the code, I can see in the Linux tasks view 'top' that the application grows in memory usage until it gets Killed due to running out of physical memory.
Now, I have done some research on this and tried to implement mtrace, but it didn't give me very concise results. Basically I get something like this
Memory not freed:
-----------------
Address Size Caller
0x03aafe18 0x38 at 0x76e73c18
0x53a004a8 0x38 at 0x76e73c18
And I do not even think this is the big problem (maybe another smaller issue).
I also cannot use Valgrind (which would probably work great) because there is not enough space on the device to install it and a compiler...
So I fear that I just have to go through the code and look for something that could be causing growing memory usage. Is there a guide for this somewhere? In the code, "malloc" or "new" is almost never used.
I do have access to use gdb, if that can help.
One thing I am not clear on is if the following is a problem:
while (someloop) {
    ...
    double *someptr;
    ...
}

or like

while (someloop) {
    ...
    int32 someArray[100] = {0};
    ...
}
There is a lot of this in the code. When the loop comes around and instantiates those variables or pointers, does it keep consuming new space, or does it reuse the space from the previous iteration?
If it is allocated on the stack, the memory is reused. However, anything you allocate on the heap you need to delete.
Also, if you allocate with double *ptr; ... ptr = new double[5];, you need to free it with delete [] ptr.
In C++ you can overload the new and delete operators to print a message for debugging.
Best would be to debug using gdb and see which object is created and never deleted.
It is also possible that you use a class in your code that does not delete something internal.
Tip: for small objects, allocating on the stack is both faster and safer.
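A minimal sketch of the overloading idea mentioned above: global operator new/delete replacements that log every allocation. Logging to stderr with fprintf is just one choice, and a complete version would also overload the array forms (operator new[]/delete[]).
#include <cstdio>
#include <cstdlib>
#include <new>

void *operator new(std::size_t size)
{
    void *p = std::malloc(size);
    if (p == NULL)
        throw std::bad_alloc();
    std::fprintf(stderr, "new    %zu bytes -> %p\n", size, p);
    return p;
}

void operator delete(void *p) noexcept
{
    std::fprintf(stderr, "delete %p\n", p);
    std::free(p);
}
Pointers that appear in a "new" line but never in a matching "delete" line are the leak candidates; on a device too small for Valgrind, redirecting stderr to a file and post-processing it on the host is often enough.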

Unexpected Behaviour from tcmalloc

I have been using tcmalloc for a few months in a large project, and so far I must say that I am pretty happy about it, most of all for its HeapProfiling features which allowed to track memory leaks and remove them.
In the past couple of weeks, though, we have experienced random crashes in our application, and we could not find their source. In one very particular situation, when the application crashed, we found ourselves with a completely corrupted stack for one of the application threads. Several times I instead found threads stuck in tcmalloc::PageHeap::AllocLarge(), but since I don't have tcmalloc's debug symbols linked in, I could not understand what the issue was.
After nearly a week of investigation, today I tried the simplest of things: I removed tcmalloc from the link line to stop using it, just to see what happened. Well... I finally found out what the problem was, and the offending code looks very much like this:
void AllocatingFunction()
{
    Object object_on_stack;
    ProcessObject(&object_on_stack);
}

void ProcessObject(Object* object)
{
    ...
    // Do Whatever
    ...
    delete object;
}
Using libc the application still crashed, but I finally saw that I was calling delete on an object that was allocated on the stack.
What I still can't figure out is why tcmalloc kept the application running despite this very risky (if not utterly wrong) deallocation, and despite the second deallocation when object_on_stack goes out of scope at the end of AllocatingFunction. The fact is that the offending code could be called repeatedly without any hint of the underlying abomination.
I know that improper memory deallocation is one of those "undefined behaviour" areas, but I am surprised by such a different behaviour between "standard" libc and tcmalloc.
Does anyone have some sort of explanation or insight on why tcmalloc keeps the application running?
Thanks in advance :)
Have a Nice Day
very risky (if not utterly wrong) object deallocation
Well, I disagree here: it is utterly wrong, and since you invoke UB, anything can happen.
It very much depends on what the tcmalloc code actually does on deallocation, and on how it uses the (possibly garbage) data around the stack at that location.
I have seen tcmalloc crash on such occasions too, as well as glibc going into an infinite loop. What you see is just coincidence.
Firstly, there is no double free in your case. When object_on_stack goes out of scope there is no free call; the stack pointer is simply decreased (or rather increased, as the stack grows downwards...).
Secondly, during delete tcmalloc should be able to recognize that an address on the stack does not belong to the program heap. Here is part of the free(ptr) implementation:
const PageID p = reinterpret_cast<uintptr_t>(ptr) >> kPageShift;
Span* span = NULL;
size_t cl = Static::pageheap()->GetSizeClassIfCached(p);

if (cl == 0) {
  span = Static::pageheap()->GetDescriptor(p);
  if (!span) {
    // span can be NULL because the pointer passed in is invalid
    // (not something returned by malloc or friends), or because the
    // pointer was allocated with some other allocator besides
    // tcmalloc. The latter can happen if tcmalloc is linked in via
    // a dynamic library, but is not listed last on the link line.
    // In that case, libraries after it on the link line will
    // allocate with libc malloc, but free with tcmalloc's free.
    (*invalid_free_fn)(ptr); // Decide how to handle the bad free request
    return;
  }
}
The call to invalid_free_fn is what crashes.

How to debug double deletes in C++?

I'm maintaining a legacy application written in C++. It crashes every now and then, and Valgrind tells me it's a double delete of some object.
What are the best ways to find the bug that is causing a double delete in an application you don't fully understand and which is too large to be rewritten ?
Please share your best tips and tricks!
Here are some general suggestions that have helped me in that situation:
Turn your logging level up to full debug, if you are using a logger. Look for suspicious stuff in the output. If your app doesn't log pointer allocations and deletes of the object/class under suspicion, it's time to insert some cout << "class Foo constructed, ptr= " << this << endl; statements in your code (and corresponding delete/destructor prints).
Run valgrind with --db-attach=yes. I've found this very handy, if a bit tedious. Valgrind will show you a stack trace every time it detects a significant memory error or event and then ask you if you want to debug it. You may find yourself repeatedly pressing 'n' many many times if your app is large, but keep looking for the line of code where the object in question is first (and secondly) deleted.
Just scour the code. Look for construction/deletion of the object in question. Sadly, sometimes it winds up being in a 3rd party library :-(.
Update: I just found this out recently: apparently GCC 4.8 and later (if you can use GCC on your system) has a new built-in feature for detecting memory errors, the "address sanitizer". It is also available in the LLVM/Clang compilers.
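A minimal sketch of what that looks like (the file name is hypothetical; -fsanitize=address and -g are the standard flags for GCC 4.8+ and Clang):
// double_delete.cpp -- deliberately broken repro
int main()
{
    int *p = new int(42);
    delete p;
    delete p;   // AddressSanitizer aborts here and reports a double-free
    return 0;
}
Compile and run with:
g++ -g -fsanitize=address double_delete.cpp -o double_delete && ./double_delete
The report includes the stack trace of the second delete, of the first free, and of the original allocation, which is usually enough to pinpoint the offending code path.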
Yep. What @OliCharlesworth said. There's no surefire way of testing a pointer to see whether it points to allocated memory, since the pointer really is just the memory address itself.
The biggest problem your question implies is the lack of reproducibility. With that in mind, you're stuck with changing simple delete statements into delete foo; foo = NULL;.
Even then the best-case scenario is "it seems to occur less often" until you've really stamped it out.
I'd also ask what evidence makes Valgrind call it a double-delete problem; there might be a better clue lingering in that output.
It's one of the simpler truly nasty problems.
This may or may not work for you.
A long time ago I was working on a 1M+ line program that was 15 years old at the time. I faced exactly the same problem: a double delete with a huge data set. With data like that, any out-of-the-box "memory profiler" was a no-go.
Things that were on my side:
It was very reproducible: we had a macro language, and running the same script exactly the same way reproduced it every time.
Sometime during the history of the project someone had decided that #define malloc my_malloc and #define free my_free had some use. These didn't do much more than call the built-in malloc() and free(), but the project already compiled and worked this way.
Now the trick/idea:
void* my_malloc(size_t size)
{
    static int allocation_num = 0;   // it was single threaded
    char* p = (char*)builtin_malloc(size + 16);
    if (p == NULL)
        return NULL;
    *(int*)p = ++allocation_num;     // header: allocation number
    *(p + sizeof(int)) = 0;          // header: "not freed" flag
    return p + 16;                   // user data starts after the 16-byte header
}

void my_free(void* ptr)
{
    char* p = (char*)ptr - 16;       // step back to the header
    if (*(p + sizeof(int)))
    {
        // this is a double free: note the allocation number (*(int*)p),
        // then rerun the app with this added to my_malloc:
        // if (allocation_num == XXX) debug_break();
    }
    *(p + sizeof(int)) = 1;          // mark as freed
    // builtin_free(p);              // do not actually free until the problem is figured out
}
With new/delete it might be trickier, but still with LD_PRELOAD you might be able to replace malloc/free without even recompiling your app.
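A minimal sketch of the LD_PRELOAD idea on Linux/glibc (only free is interposed here, which sidesteps most of the bootstrapping problems; the library and log file names are arbitrary):
// freecheck.cpp -- build as a shared library and preload it
#include <dlfcn.h>
#include <cstdio>

extern "C" void free(void *ptr)
{
    typedef void (*free_fn)(void *);
    // look up the real free the first time through
    static free_fn real_free = (free_fn) dlsym(RTLD_NEXT, "free");

    // log every free; a pointer that shows up twice without an intervening
    // allocation at the same address is your double-free suspect
    std::fprintf(stderr, "free(%p)\n", ptr);
    real_free(ptr);
}
Build and run with something like:
g++ -shared -fPIC -o libfreecheck.so freecheck.cpp -ldl
LD_PRELOAD=./libfreecheck.so ./your_app 2> free.log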
You are probably upgrading from a version that treated delete differently than the new version does.
What the previous version probably did when delete was called was a check along the lines of if (X != NULL) { delete X; X = NULL; }, whereas the new version just performs the delete.
You might need to go through and check the pointer assignments, tracking references to each object from construction to deletion.
I've found this useful: backtrace() on Linux (you have to compile with -rdynamic). It lets you find out where that double free is coming from: put a try/catch block around the memory operations (new/delete) and, in the catch block, print out your stack trace.
This way you can narrow down the suspects much faster than by running valgrind.
I wrapped backtrace() in a handy little class so that I can just write:
try {
    ...
} catch (...) {
    StackTrace trace;
    std::cerr << "Double free!!!\n" << trace << std::endl;
    throw;
}
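A minimal sketch of what such a StackTrace class might look like (the class name matches the usage above; the fixed frame limit and the formatting are assumptions), built on glibc's backtrace()/backtrace_symbols():
#include <execinfo.h>
#include <cstdlib>
#include <ostream>

class StackTrace {
public:
    StackTrace() : count_(backtrace(frames_, kMaxFrames)) {}

    friend std::ostream &operator<<(std::ostream &os, const StackTrace &t)
    {
        // symbol names are only resolvable if the binary is linked with -rdynamic
        char **symbols = backtrace_symbols(t.frames_, t.count_);
        for (int i = 0; i < t.count_; ++i)
            os << symbols[i] << '\n';
        std::free(symbols);
        return os;
    }

private:
    static const int kMaxFrames = 64;
    void *frames_[kMaxFrames];
    int count_;
};
Keep in mind that deleting an already-freed pointer is undefined behaviour and is not guaranteed to throw anything, so the try/catch approach only helps when the runtime happens to raise an exception; printing the trace from an overloaded operator delete is a more reliable variation of the same idea.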
On Windows, assuming the app is built with MSVC++, you can take advantage of the extensive heap debugging tools built into the debug version of the standard library.
Also on Windows, you can use Application Verifier. If I recall correctly, it has a mode that forces each allocation onto a separate page with protected guard pages in between. It's very effective at finding buffer overruns, but I suspect it would also be useful for a double-free situation.
Another thing you could do (on any platform) would be to make a copy of the sources that are transformed (perhaps with macros) so that every instance of:
delete foo;
is replaced with:
{ delete foo; foo = nullptr; }
(The braces help in many cases, though it's not perfect.) That will turn many instances of double-free into a null pointer reference, making it much easier to detect. It doesn't catch everything; you might have a copy of a stale pointer, but it can help squash a lot of the common use-after-delete scenarios.

Does dereferencing deleted pointers always result in an Access Violation?

I have a very simple C++ code here:
char *s = new char[100];
strcpy(s, "HELLO");
delete [] s;
int n = strlen(s);
If I run this code in Visual C++ 2008 by pressing F5 (Start Debugging), it always results in a crash (Access Violation). However, starting the executable outside the IDE, or using the IDE's Ctrl+F5 (Start without Debugging), doesn't result in any crash. What could be the difference?
I also want to know whether it's possible to reliably reproduce the Access Violation crash caused by accessing a deleted area. Is this kind of crash rare in real life?
Accessing memory through a deleted pointer is undefined behavior. You can't expect any reliable/repeatable behavior.
Most likely it "works" in the one case because the string is still "sitting there" in the now-available memory, but you cannot rely on that. VS fills memory with debug values to help force crashes and flush out exactly these errors.
The difference is that a debugger, debug libraries, and code built in "debug" mode like to break stuff that should break. Your code should break (because it accesses memory it no longer technically owns), so it breaks more readily when compiled for debugging and run under the debugger.
In real life, you don't generally get such unsubtle notice. All the machinery that makes things break when they should in the debugger is expensive, so it isn't checked as strictly in release builds. You might get away with freeing some memory and accessing it right afterwards 99 times out of 100, because the runtime libraries don't always hand the memory back to the OS right away. But that 100th time, either the memory is gone, or another thread owns it now, and you're getting the length of a string that's no longer a string but a 252462649-byte array of junk that runs headlong into unallocated (and thus non-existent, as far as you or the runtime should care) memory. And there's next to nothing to tell you what just happened.
So don't do that. Once you've deleted something, consider it dead and gone. Or you'll be wasting half your life tracking down heisenbugs.
Dereferencing a pointer after delete is undefined behavior - anything can happen, including but not limited to:
data corruption
access violation
no visible effects
The exact results will depend on multiple factors, most of which are out of your control. You'll be much better off not triggering undefined behavior in the first place.
Usually, there is no difference between allocated and freed memory from the process's perspective; the process simply has one large memory map that grows on demand.
An access violation is caused by reading or writing memory that is not available, usually memory not paged into the process. Various run-time memory debugging utilities use the paging mechanism to track invalid memory accesses without the severe run-time penalty that software memory checking would incur.
Anyway, your example only proves that the error is sometimes detected when the program runs in one environment but not in another; it is still an error, and the behaviour of the code above is undefined.
The executable with debug symbols is able to detect some cases of access violations. The code to detect this is contained in the executable, but will not be triggered by default.
Here you'll find an explanation of how you can control behaviour outside of a debugger: http://msdn.microsoft.com/en-us/library/w500y392%28v=VS.80%29.aspx
I also want to know if it's possible to stably reproduce the Access Violation crash caused from accessing deleted area?
Instead of a plain delete you could consider using an inline function that also sets the deleted pointer to 0/NULL. This will typically crash if you dereference it afterwards. However, it won't complain if you delete it a second time.
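A minimal sketch of such a helper (the function name is arbitrary; an array overload using delete [] would be needed for memory that came from new []):
#include <cstddef>

template <typename T>
inline void delete_and_null(T *&p)
{
    delete p;
    p = NULL;   // a later dereference now faults reliably; a second delete is a harmless no-op
}
Replacing the delete [] s; in the question with an array version of this helper would make the subsequent strlen(s) dereference a null pointer and fail immediately, instead of sometimes appearing to work.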
Is this kind of crash rare in real-life?
No, this kind of crash is probably behind the majority of the crashes you and I see in software.

How can I get the size of a memory block allocated using malloc()? [duplicate]

Possible Duplicates:
How can I get the size of an array from a pointer in C?
Is there any way to determine the size of a C++ array programmatically? And if not, why?
I get a pointer to a chunk of allocated memory out of a C-style function. Now, it would be really interesting for debugging purposes to know how big the allocated memory block that this pointer points to is.
Is there anything more elegant than provoking an exception by blindly running over its boundaries?
Thanks in advance,
Andreas
EDIT:
I use VC++2005 on Windows, and GCC 4.3 on Linux
EDIT2:
I have _msize under VC++2005
Unfortunately it results in an exception in debug mode....
EDIT3:
Well, I have tried the way I described above, with the exception, and it works, at least while I am debugging and making sure that I run over the buffer boundaries immediately after the call to the library returns. Works like a charm.
It just isn't elegant and is in no way usable in production code.
It's not standard, but if your C library has a function like _msize() (MSVC) or malloc_usable_size() (glibc), that will give you the size.
A common solution is to wrap malloc with your own function that logs each request along with the size and resulting memory range; in the release build you can switch back to the 'real' malloc.
If you don't mind sleazy violence for the sake of debugging, you can #define macros to hook calls to malloc and free and stash the requested size in a small header at the start of each block.
Something to the tune of:
void *malloc_hook(size_t size)
{
    size_t *ptr = (size_t *) malloc(size + sizeof(size_t));   // room for the header
    if (ptr == NULL)
        return NULL;
    *ptr = size;                 // remember the requested size
    return ptr + 1;              // hand out the memory just past the header
}

void free_hook(void *ptr)
{
    if (ptr != NULL)
        free(((size_t *) ptr) - 1);   // step back to the real start of the block
}

size_t report_size(void *ptr)
{
    return *(((size_t *) ptr) - 1);
}
then
#define malloc(x) malloc_hook(x)
and so on
The C runtime library does not provide such a function. Furthermore, deliberately provoking an exception will not tell you how big the block is either.
Usually the way this problem is solved in C is to maintain a separate variable which keeps track of the size of the allocated block. Of course, this is sometimes inconvenient but there's generally no other way to know.
Your C runtime library may provide some heap debug functions that can query allocated blocks (after all, free() needs to know how big the block is), but any of this sort of thing will be nonportable.
With gcc and the GNU linker, you can easily wrap malloc
#include <stdlib.h>
#include <stdio.h>

void* __real_malloc(size_t sz);

void* __wrap_malloc(size_t sz)
{
    void *ptr;

    ptr = __real_malloc(sz);
    fprintf(stderr, "malloc of size %zu yields pointer %p\n", sz, ptr);
    /* if you wish to save the pointer and the size to a data structure,
       then remember to add wrap code for calloc, realloc and free */
    return ptr;
}

int main()
{
    char *x;

    x = malloc(103);
    return 0;
}
and compile with
gcc a.c -o a -Wall -Werror -Wl,--wrap=malloc
(Of course, this will also work with C++ code compiled with g++, and with the new operator (through its mangled name) if you wish.)
In effect, the statically/dynamically loaded library will also use your __wrap_malloc.
No, and you can't rely on an exception when overrunning its boundaries, unless it's in your implementation's documentation. It's part of the stuff you really don't need to know about to write programs. Dig into your compiler's documentation or source code if you really want to know.
There is no standard C function to do this. Depending on your platform, there may be a non-portable method - what OS and C library are you using?
Note that provoking an exception is unreliable - there may be other allocations immediately after the chunk you have, and so you might not get an exception until long after you exceed the limits of your current chunk.
Memory checkers like Valgrind's memcheck and Google's TCMalloc (the heap checker part) keep track of this sort of thing.
You can use TCMalloc to dump a heap profile that shows where things got allocated, or you can just have it check to make sure your heap is the same at two points in program execution using SameHeap().
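A minimal sketch of the heap-checker part of gperftools/TCMalloc (the header path and class names below are from gperftools; the scope label "scan" and the DoWork function are made up, and the program has to be linked against tcmalloc with heap checking enabled, e.g. by running it with HEAPCHECK=local):
#include <gperftools/heap-checker.h>
#include <cassert>

void DoWork()
{
    new int[100];   // deliberately leaked so the checker has something to report
}

int main()
{
    HeapLeakChecker checker("scan");
    DoWork();
    // NoLeaks() compares what was allocated inside the checked scope against
    // what was freed; SameHeap() is the stricter "heap is exactly the same" variant
    assert(checker.NoLeaks());
    return 0;
}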
Partial solution: on Windows you can use the PageHeap to catch a memory access outside the allocated block.
PageHeap is an alternate memory manager built into Windows (the NT family, which is every version anyone should be using nowadays). It takes every allocation in a process and returns a memory block whose end is aligned with the end of a memory page, then makes the following page inaccessible (no read, no write access). If the program tries to read or write past the end of the block, you get an access violation you can catch with your favorite debugger.
How to get it: Download and install the package Debugging Tools for Windows from Microsoft: http://www.microsoft.com/whdc/devtools/debugging/default.mspx
then launch the GFlags utility, go to the third tab (Image File), enter the name of your executable and press the Tab key. Check the PageHeap checkbox, click OK, and you're good to go.
One last thing: when you're done debugging, don't forget to launch GFlags again and disable PageHeap for the application. GFlags writes this setting into the registry (under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\), so it is persistent, even across reboots.
Also, be aware that using PageHeap can increase the memory needs of your application tremendously.
The way to do what you want is to BE the allocator. If you filter all requests and record them for debugging purposes, you can find out what you want when the memory is freed.
Additionally, you can check at the end of the program whether all allocated blocks were freed, and if not, list them. An ambitious library of this sort could even take __FUNCTION__ and __LINE__ parameters via a macro to tell you exactly where you are leaking memory.
Finally, Microsoft's MSVCRT provides a debug heap with many useful tools that you can use in your debug build to find memory problems: http://msdn.microsoft.com/en-us/library/bebs9zyz.aspx
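A minimal sketch of that CRT debug heap in action (MSVC, debug build only; _CRTDBG_MAP_ALLOC is what makes the report include file and line numbers):
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // ask the debug CRT to dump all unfreed blocks automatically at program exit
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    int *leak = (int *) malloc(100 * sizeof(int));   // never freed, on purpose
    (void) leak;
    return 0;
}
When the debug build exits, the leaked block is listed in the debugger's Output window with its size and allocation site; in a release build the debug heap machinery compiles away.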
On Linux, you can use valgrind to find many errors. http://valgrind.org/