When can a memory leak occur? - c++

I don't know what to think here...
We have a component that runs as a service. It runs perfectly well on my local machine, but on some other machine (both machines have 2 GB of RAM) it starts to throw bad_alloc exceptions on the second and subsequent days. The strange thing is that the memory usage of the process stays at approximately the 50 MB level. The other weird thing is that, by means of tracing messages, we have localized the exception to be thrown from a stringstream object that does nothing but insert no more than 1-2 KB of data into the stream. We're using STLport, if that matters.
Now, when you get a bad_alloc exception, you think it's a memory leak. But all our manual allocations are wrapped in smart pointers. Also, I can't understand how a stringstream object can run out of memory when the whole process uses only ~50 MB (the memory usage stays approximately constant, and certainly doesn't rise, from day to day).
I can't provide you with code, because the project is really big, and the part which throws the exception really does nothing else but create a stringstream and << some data and then log it.
So, my question is: how can a memory leak/bad_alloc occur when the process uses only 50 MB of memory out of 2 GB? What other wild guesses do you have as to what could possibly be wrong?
Thanks in advance, I know the question is vague etc., I'm just sort of desperate and I tried my best to explain the problem.

One likely reason, given your description, is that you try to allocate a block of some unreasonably big size because of an error in your code. Something like this:
size_t numberOfElements; // uninitialized
if( .... ) {
    numberOfElements = obtain();
}
elements = new Element[numberOfElements];
Now, if numberOfElements is left uninitialized, it can contain some unreasonably big number, so you effectively try to allocate a block of, say, 3 GB, which the memory manager refuses to do.
So it may not be that your program is short on memory, but that it tries to allocate more memory than it could possibly be allowed to even under the best conditions.
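For what it's worth, here is a minimal, self-contained sketch of that failure mode (the names are mine, not from the question): even a process using almost no memory gets a bad_alloc the moment it asks for an absurd amount.

#include <cstddef>
#include <iostream>
#include <new>

int main() {
    // Pretend this came from an uninitialized variable: a garbage value
    // that happens to be enormous.
    std::size_t numberOfElements = static_cast<std::size_t>(-1) / sizeof(double) / 2;

    try {
        double* elements = new double[numberOfElements]; // absurd request
        delete[] elements;
    } catch (const std::bad_alloc& e) {
        // Thrown even though the process itself is using almost no memory.
        std::cerr << "bad_alloc: " << e.what() << '\n';
    }
    return 0;
}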

bad_alloc doesn't necessarily mean there is not enough memory. The allocation functions might also fail because the heap is corrupted. You might have some buffer overrun or code writing into deleted memory, etc.
You could also use Valgrind or one of its Windows equivalents to find the leak/overrun.

Just a hunch, but I have had trouble in the past when allocating arrays like so:
int array1[SIZE]; // SIZE limited by COMPILER to the size of the stack frame
when SIZE is a large number.
The solution was to allocate with the new operator
int* array2 = new int[SIZE]; // SIZE limited only by OS/Hardware
I found this very confusing; the reason turned out to be the stack frame, as discussed in the solution by Martin York here:
Is there a max array length limit in C++?
All the best,
Tom

Check the profile of other processes on the machine using Process Explorer from Sysinternals: you will get bad_alloc if memory is short, even if it's not your process that's causing the memory pressure.
Check your own memory usage using UMDH to take snapshots and compare the usage profile over time. You'll have to start this early in the cycle to avoid blowing up the tool, but if your process's behaviour is not degrading over time (i.e. no sudden pathological behaviour), you should get accurate info on its memory usage at time T vs. time T+t.

Another long shot: you don't say in which of the three operations the error occurs (construction, << or logging), but the problem may be memory fragmentation rather than memory consumption. Maybe stringstream can't find a contiguous memory block big enough to hold a couple of KB.
If this is the case, and if you exercise that function on the first day (without mishap), then you could make the stringstream a static variable and reuse it. As far as I know, stringstream does not deallocate its buffer space during its lifetime, so if it establishes a big buffer on the first day it will keep it from then on (for added safety you could run a 5 KB dummy string through it when it is first constructed).
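If you want to try that, a rough sketch of the idea follows; the function name is mine, and whether the buffer is really retained across str("") calls depends on your STLport implementation, so treat it as an experiment rather than a guaranteed fix.

#include <sstream>
#include <string>

// Hypothetical logging helper: one stream object is reused so that, once its
// internal buffer has grown, it stays allocated for the life of the process.
// Note: not thread-safe as written.
void logLine(const std::string& msg) {
    static std::ostringstream stream(std::string(5 * 1024, ' ')); // prime ~5 KB
    stream.str("");   // reset the contents; capacity is usually kept, but that
                      // is implementation-dependent, so verify with STLport
    stream.clear();   // reset any error flags
    stream << msg;
    // ... hand stream.str() to the real logging facility here ...
}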

I fail to see why a stream would throw. Don't you have a dump of the failed process? Or perhaps attach a debugger to it to see what the allocator is trying to allocate?
But if you did overload the operator <<, then perhaps your code does have a bug.
Just my 2 (euro) cts...
1. Fragmentation?
The memory could be fragmented.
At some point you try to allocate SIZE bytes, but the allocator finds no contiguous chunk of SIZE bytes in memory, and then throws a bad_alloc.
Note: this answer was written before I read that this possibility had been ruled out.
2. Signed vs. unsigned?
Another possibility would be the use of a signed value for the size to be allocated:
char * p = new char[i];
If the value of i is negative (e.g. -1), the conversion to the unsigned integral type size_t makes it go far beyond what is available to the memory allocator.
As signed integral types are quite common in user code, if only so that a negative value can mark an invalid result (e.g. -1 for a failed search), this is a real possibility; see the sketch below.
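A hedged sketch of that scenario, with an invented findIndex() standing in for whatever produces the size:

#include <iostream>
#include <new>

// Hypothetical search that reports "not found" as -1.
int findIndex() { return -1; }

int main() {
    int i = findIndex();        // -1 on failure, never checked
    try {
        char* p = new char[i];  // pre-C++11: -1 becomes a huge size_t request;
                                // C++11: throws std::bad_array_new_length,
                                // which still derives from std::bad_alloc
        delete[] p;
    } catch (const std::bad_alloc& e) {
        std::cerr << "bad_alloc: " << e.what() << '\n';
    }
    return 0;
}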

~className() {
    // delete stuff in here
}

By way of example, memory leaks can occur when you use the new operator in C++ and forget to use the delete operator. Or, in other words, when you allocate a block of memory and forget to deallocate it.
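To make that concrete, a minimal sketch (the function names are mine); the second version shows the usual fix, assuming a C++11 compiler for std::unique_ptr:

#include <memory>

void leaky() {
    int* data = new int[100];  // allocated...
    data[0] = 42;              // ...used...
}                              // ...but never delete[]d: leaked on every call

void fixed() {
    // Pair every new[] with a delete[], or let a smart pointer do it:
    std::unique_ptr<int[]> data(new int[100]);
    data[0] = 42;
}                              // freed automatically when data goes out of scope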

Related

Memory use keeps increasing in for loop while using dynamic array (C++)

The following is my C++ code.
I found that the memory use keeps increasing if I try to use the test1 array to calculate anything.
double **test1;
test1 = new double *[1000];
for (int i = 0; i < 1000; i++) {
    test1[i] = new double[100000000];
    test1[i][0] = rand() / (double)RAND_MAX * 100;
}
for (int j = 1; j < 100000000; j++) {
    for (int i = 0; i < 1000; i++) {
        test1[i][j] = test1[i][j-1]; // this causes memory use to increase
    }
}
If I delete the line
test1[i][j]=test1[i][j-1];
the memory use becomes a small constant value.
I thought that, since I had already allocated the dynamic arrays in the first part, the memory use should stay constant if I didn't new any more arrays.
What causes the memory use to increase? And how do I avoid it?
(I use the Linux command "top" to monitor the memory use.)
In the first loop you allocate 1000 arrays of 100,000,000 doubles each, i.e. 800 MB per allocation (800 GB in total), but you write only to the first element of each.
Later you write to the rest. When you do this, the operating system needs to actually give you the memory to write into, whereas initially it just gave you a mapping which would page fault later (when you write to it).
So basically, since each allocation is so large, the memory required to back it is not physically allocated until it is used.
The code is nonsensical, because eventually you try to write to 800 GB of memory. I doubt that will ever complete on a typical computer.
On a virtual memory system, the Linux kernel will (by default) not actually allocate any physical memory when your program does an allocation. Instead it will just adjust your virtual address space size.
Think of it like the kernel going "hmm, yeah, you say you want this much memory. I'll remember I promised you that, but let's see if you are really going to use it before I go fetch it for you".
If you then actually go and write to the memory, the kernel gets a page fault for a virtual address that is not actually backed by real memory, and at that point it goes and allocates some real memory to back the page you wrote to.
Many programs never write to all the memory they allocate, so by only fulfilling the promise when it really has to, the kernel saves huge amounts of memory.
You can see the difference between the amount you have allocated and the amount that is actually occupying real memory by looking at the VSZ (virtual size) and RSS (resident set size) columns in the output of ps aux; a sketch of checking this from inside the process follows below.
If you want all allocations to be backed by physical memory all the time (you probably do not), then you can change the kernel's overcommit policy via the vm.overcommit_memory sysctl setting.
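If it helps, here is a rough, Linux-only sketch of watching this from inside the process instead of via ps; reading /proc/self/status is my suggestion, not something from the question:

#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>

// Print the VmSize (virtual) and VmRSS (resident) lines for this process.
// Linux-specific: relies on the /proc filesystem.
void printMemoryUsage(const char* label) {
    std::ifstream status("/proc/self/status");
    std::string line;
    std::cout << label << '\n';
    while (std::getline(status, line)) {
        if (line.compare(0, 7, "VmSize:") == 0 || line.compare(0, 6, "VmRSS:") == 0)
            std::cout << "  " << line << '\n';
    }
}

int main() {
    const std::size_t bytes = 500u * 1024 * 1024;    // 500 MB
    printMemoryUsage("before allocation");
    char* p = new char[bytes];
    printMemoryUsage("after new[]");                 // VmSize jumps, VmRSS barely moves
    for (std::size_t i = 0; i < bytes; i += 4096)
        p[i] = 1;                                    // touch every page
    printMemoryUsage("after touching the pages");    // now VmRSS catches up
    delete[] p;
    return 0;
}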

Find huge blocks of allocated memory

I have a program (a daemon) written in C/C++. It runs flawlessly, but after some period of time (it can be 5 days, a week, 2 weeks) it starts to allocate a lot of megabytes of memory. I can't figure out which parts of the code do not free allocated memory. At startup, memory usage is about 20-30 megabytes. Then, after some period or maybe some event, it grows slowly, about 1 MB per hour, and if not terminated it can crash because no memory is available.
I've tried Valgrind and shut down the daemon in the usual way when it had already allocated about 500 MB of memory. The shutdown process was really long, but when it finished, Valgrind said no memory leaks were found, except for the mysql_init/mysql_close procedures (about 504 bytes are definitely lost). Google says not to worry about this MySQL leak and gives some reasons why memory diagnostic tools like Valgrind think it is a leak.
I don't really know which parts of the code allocate memory but free it only at program shutdown. Help me find them.
Valgrind only detects pointers that aren't deleted, more or less. Keeping allocations around when you no longer need them is a different problem.
Firstly, all objects and memory are freed at shutdown. If there's a leak, Valgrind will detect it as memory not referenced by any object, etc. Any leaks are, however, reclaimed by the operating system in the end.
If you're catching all exceptions (...) and not doing anything with them, well, don't do that. It's a common cause.
Secondly, a logfile of destructors that are called during shutdown might be helpful. Perhaps at the end of main(), set a global flag; any destructors called while that flag is set can output that they exist. See if there are lots of objects that shouldn't be there.
A bit easier: you can use a global counter that each constructor increments by 1 and each destructor decrements by 1 (see the sketch below). If you find that the number of live objects isn't staying relatively constant, you can investigate which ones are causing the problem using similar techniques.
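A crude sketch of that counting idea (the class name and counter are invented; std::atomic assumes C++11, otherwise a plain long is fine for a single-threaded daemon):

#include <atomic>
#include <cstdio>

// One counter per class you suspect; incremented in every constructor,
// decremented in the destructor.
static std::atomic<long> g_sessionCount(0);

class Session {
public:
    Session()               { ++g_sessionCount; }
    Session(const Session&) { ++g_sessionCount; }
    ~Session()              { --g_sessionCount; }
};

// Call this periodically or on an admin command: a count that only ever
// grows points at the objects that are piling up.
void reportLiveObjects() {
    std::printf("live Session objects: %ld\n", g_sessionCount.load());
}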
Thirdly, use Boost and its scoped smart pointers to help, but do not rely on smart pointers as the holy grail.
There is a possible underlying issue that I have come across. For long-running programs, memory fragmentation can lead to large memory usage. You may delete a 1 MB object, then try to create a 2 MB object; the new object goes into fresh space because that 1 MB 'free chunk' is not big enough. Then when you create a 512 KB object it may go into that 1 MB object's old space, using only half of the available space, but making it so that your next 1 MB object again has to be allocated in fresh space.
Unfortunately this problem can become bad, due to small objects being allocated in persistent places. There may be, say, a hundred 50-byte objects scattered 300 KB apart in memory, but no 512 KB object can be allocated in any of those gaps, so the allocator grabs an additional 512 KB for each new object, effectively wasting 90% of the actual 'free' space even though your program already owns more than enough.
This problem is hard to pin down as the definite cause, but if you examine your program's flow, look for small allocations. Remember that std::list/vector/etc. can all cause this; if you're looking to make a daemon that does lots of memory operations run for weeks, it's a good idea to pre-allocate memory using reserve() (see the sketch below). Memory pools are even better.
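For the reserve() suggestion, a minimal sketch (the element type and count are invented):

#include <cstddef>
#include <vector>

struct Record { char payload[48]; };   // invented element type

void processBatch(std::size_t expectedCount) {
    std::vector<Record> records;
    records.reserve(expectedCount);    // one up-front allocation instead of
                                       // repeated grow-and-copy cycles that
                                       // scatter freed blocks around the heap
    for (std::size_t i = 0; i < expectedCount; ++i)
        records.push_back(Record());
}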
Depending on the time you want to put in, you can also either make (or find) a custom memory allocator that will report on objects when it shuts down, too.
Try the Valgrind Massif tool. From the Massif manual:
Also, there are certain space leaks that aren't detected by traditional leak-checkers, such as Memcheck's. That's because the memory isn't ever actually lost -- a pointer remains to it -- but it's not in use. Programs that have leaks like this can unnecessarily increase the amount of memory they are using over time. Massif can help identify these leaks.
Massif should show you what's happening with memory: where it is allocated and what is not freed until shutdown.
Since you are sure there's no memory leak, your program might be allocating memory and storing data without leaking.
For example, let's say your program uses a linked list...
struct list {
    DATA_ARRAY arr;     // some data
    struct list *next;
};

while (true) // infinite loop
{
    // Add new nodes to the list
    // Store some data in the node
}
There's no leak here. But the loop adds new nodes forever and stores data, and everything is perfectly valid, yet memory usage increases all the time. Since you are running for 2-5 days, something like this is certainly possible.
You may have to inspect the code and free memory that is no longer needed; one common fix is sketched below.
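One common fix, if that unbounded growth is really what is happening, is to cap the container and evict the oldest entries; a rough sketch (the limit and element type are invented):

#include <cstddef>
#include <list>
#include <string>

// Keep at most maxEntries items: push new data at the back,
// evict the oldest from the front once the limit is reached.
void addBounded(std::list<std::string>& entries,
                const std::string& data,
                std::size_t maxEntries = 10000) {
    entries.push_back(data);
    while (entries.size() > maxEntries)
        entries.pop_front();
}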

How to allocate long array of struct?

I have this struct:
struct Heap {
    int size;
    int *heap_array;
};
And I need to create an array:
Heap *rooms = new Heap[k];
k may even be as large as 1000000. For k around 1000 it works; with k around 10000 I get:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
Edit:
I forgot to add: I can't use vector, it is a school assignment... Only <cstdio> and <math> are allowed.
Are you using 32 or 64 bit?
Depending on this, your process can only consume memory up to a maximum size. I am guessing you are on 32 bit. Maybe you don't even have that much memory to begin with.
Also take a look here :
http://www.cplusplus.com/reference/std/new/bad_alloc/
Updated
You should ensure you are not leaking anything and that your heap allocations do not live longer than needed. Try to reduce the allocation requirements per Heap. Also, if you are allocating that many Heaps: where is the storage for heap_array? Are those all new[]ed as well? (A rough accounting sketch follows below.)
If you exceed the amount of addressable memory for your system, you may need to run your program as a 64-bit executable.
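To get a feel for where the memory would actually go, here is a back-of-the-envelope sketch; the 1000 ints per Heap is an invented figure, since the question doesn't say how big each heap_array is:

#include <cstddef>
#include <cstdio>

struct Heap {
    int  size;
    int *heap_array;
};

int main() {
    const std::size_t k = 1000000;
    const std::size_t perHeap = 1000;   // invented: elements in each heap_array

    std::size_t structBytes = k * sizeof(Heap);          // the Heap array itself
    std::size_t arrayBytes  = k * perHeap * sizeof(int); // the per-heap arrays

    // On a typical 64-bit system this prints roughly 16 MB for the structs and
    // 4 GB for the inner arrays, so the inner allocations dominate by far.
    std::printf("structs: %zu bytes, arrays: %zu bytes\n", structBytes, arrayBytes);
    return 0;
}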
bad_alloc basically means that new is unable to allocate the requested space. It surprises me that you see it already when attempting to allocate 10,000. How much memory are you using besides that?
You might want to check what your alignment is set to (how you do this is compiler-specific). Using a vector shouldn't really help avoid the bad_alloc exception, especially not if you know the number of elements needed from the start.
You might be banging your head against the wall here if you are trying to allocate more memory than you have (2 GB on 32-bit Windows); if this is the case, take a look at this answer:
C++ memory allocation: 'new' throws bad_alloc?
You also risk running into fragmentation issues: there might be enough space counting the total number of free bytes, but not enough space in a single contiguous chunk. The link above has some suggestions for that as well; in his post, the user Crashworks suggests using the (OS-specific) functions HeapAlloc and VirtualAlloc, but then again that would conflict with your school assignment.
Instead, try investigating whether you get the same problem on a different computer.
If it is truly necessary to allocate and process enough structs to cause a bad_alloc exception, you could consider processing only a few at a time, preferably reusing already allocated structs. This would improve your memory usage numbers, and might even prove to be faster.

new[] doesn't decrease available memory until populated

This is in C++ on CentOS 64bit using G++ 4.1.2.
We're writing a test application to load up the memory usage on a system by n gigabytes. The idea is that the overall system load gets monitored through SNMP etc., so this is just a way of exercising the monitoring.
What we've seen however is that simply doing:
char* p = new char[1000000000];
doesn't affect the memory used as shown in either top or free -m
The memory allocation only seems to become "real" once the memory is written to:
memset(p, 'a', 1000000000); // shows an increase in mem usage of 1 GB
But we have to write to all of the memory; simply writing to the first element does not show an increase in the used memory:
p[0] = 'a'; //does not show an increase of 1GB.
Is this normal? Has the memory actually been allocated fully? I'm not sure whether it's the tools we are using (top and free -m) that are displaying incorrect values, or whether there is something clever going on in the compiler, the runtime and/or the kernel.
This behavior is seen even in a debug build with optimizations turned off.
It was my understanding that new[] allocated the memory immediately. Does the C++ runtime delay this actual allocation until later, when the memory is accessed? In that case, can an out-of-memory exception be deferred until well after the actual allocation, when the memory is eventually accessed?
As it is, this is not a problem for us, but it would be nice to know why it is happening this way!
Cheers!
Edit:
I don't want to know how we should be using vectors; this isn't about OO / C++ / the current way of doing things, etc. I just want to know why this is happening the way it is, rather than get suggestions for alternative ways of trying it.
When your library allocates memory from the OS, the OS will just reserve an address range in the process's virtual address space. There's no reason for the OS to actually provide this memory until you use it - as you demonstrated.
If you look at e.g. /proc/self/maps you'll see the address range. If you look at top's memory use you won't see it - you're not using it yet.
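If the goal is just to make the allocation look "used" to top/free for the monitoring test, writing one byte per page is enough to fault everything in; a rough POSIX-only sketch (sysconf/pause are assumptions about your platform, which CentOS satisfies):

#include <unistd.h>   // sysconf, pause (POSIX)
#include <cstddef>

// Force the kernel to back an allocation with physical pages by
// dirtying one byte in every page.
void touchPages(char* p, std::size_t bytes) {
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    for (std::size_t i = 0; i < bytes; i += page)
        p[i] = 1;     // one write per page is enough to fault it in
}

int main() {
    const std::size_t gig = 1000000000;
    char* p = new char[gig];
    touchPages(p, gig);   // top / free -m should now show the extra ~1 GB resident
    pause();              // keep the process alive so the usage can be observed
    delete[] p;
    return 0;
}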
Please look up overcommit. Linux by default doesn't reserve memory until it is accessed. And if you end up needing more memory than is available, you don't get an error; instead a random process is killed. You can control this behavior with /proc/sys/vm/*.
IMO, overcommit should be a per process setting, not a global one. And the default should be no overcommit.
About the second half of your question:
The language standard doesn't allow any delay in throwing a bad_alloc. It must happen as the alternative to new[] returning a pointer; it cannot happen later!
Some OSs might try to overcommit memory allocations, and fail later. That is not conforming to the C++ language standard.

C++ memory leaks: are dynamically created arrays removed on leaving a function call?

So I have a function that creates a dynamic array; I then delete the array before leaving the function (as I thought I was supposed to). However, I am getting a 'Heap Corruption Detected' warning in VS2008. If I remove the line that deallocates the memory, everything works fine:
void myFunc()
{
    char* c = new char[length];
    memset(c, 0, length);
    // .. do something with array
    delete[] c; // this line throws an error??
}
Thanks for any advice
Most likely you are doing something else bad (like under- or overflowing your buffer) and corrupting the heap at that point, but it isn't detected until you call delete[] and the runtime tries to interpret the now-corrupted heap structures.
Post the 'do something' section if you need more assistance.
I think you have a problem with your //.. do something with array code (or even some other code), since the rest of what you have posted is okay.
Often, memory arena corruption is only detected when freeing the memory, which is probably why removing that line seems to fix it.
But rest assured, the arena is still corrupted whether or not you're detecting it. You need to fix it.
One way this might happen is if your memory allocation routines actually allocate extra bits before and possibly after what you're given. For example, if you ask for 1024 bytes, what might actually be allocated from the heap is a 1040-byte block, of which you're given the address of the 16th byte. This gives the arena manager 16 bytes at the start to store housekeeping information (including a sentinel value at the very start of it).
Then, when the block is deleted, the arena manager knows that its housekeeping is in the 16 bytes before the address you're trying to free and can check the sentinel value (or all the sentinels in the arena, or just the ones on either side of the one you're freeing; this is all implementation detail) to make sure they're unchanged. If they have been changed, that's detected as corruption.
As I said earlier, the corruption could be caused by your //.. do something with array code or it could be somewhere totally different; all that matters is that the arena is being trashed.
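To make that concrete, here is the classic shape of such a bug: a one-byte overrun that silently damages the allocator's bookkeeping and only surfaces at delete[] (the sizes and string are invented):

#include <cstring>

void corruptTheHeap() {
    char* buf = new char[16];
    // strcpy writes 17 bytes ("0123456789abcdef" plus the terminating '\0')
    // into a 16-byte buffer: the extra byte lands in the allocator's
    // bookkeeping area right next to the block.
    std::strcpy(buf, "0123456789abcdef");
    // Nothing visibly wrong yet...
    delete[] buf;   // ...but the damaged metadata may only be detected here
}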
You're probably underflowing the buffer, actually: the VC heap (and most heap implementations) keeps book-keeping information immediately before the allocation it hands out. That includes some data validation (sentinel bytes, etc.); if it doesn't pass, this error is thrown.
Every time you allocate memory using new, you will need to free that memory using a matching delete. The code you quote should work.
C++ memory manager implementations typically interleave their control data structures with areas of memory you allocate. C++ does not bounds-check arrays for you. If your code writes data off the end or before the start of an array, this will corrupt the heap. It is very likely that this is what is happening here. Carefully examine the code that performs work on the array.
First, to answer the title: no, dynamically allocated memory (with new, malloc, etc.) is not freed when the function exits. You are responsible for freeing it.
Second, just some advice that might help you debug your problem.
One great option is to use a free tool from Microsoft called Application Verifier. It's a great tool to have in your toolbox; it's really good at helping you find bugs in your applications.
Another option, not involving other tools, would be to use std::vector instead of your manually allocated array; it might help detect your heap corruption in debug mode. It has a huge number of checks in debug mode, which would likely cause it to break into the debugger at the right time. Here's what you could try:
{
    const size_t size_of_array = 64;
    // use the constructor taking size and value
    // do _not_ use memset on this object
    std::vector<char> your_array(size_of_array, 0);
    // do something here with it, e.g.:
    snprintf(&your_array[0], your_array.size(), "hello");
    // do whatever you do with your array
    // use a debug build and run it under the debugger;
    // likely you will spot your problem pretty soon
    // no need to delete anything here
}
This warning means you probably wrote to memory you don't own, perhaps by overrunning a buffer, freeing memory more than once, or forgetting to initialize a pointer before using it.
Good luck
delete does not throw; this is guaranteed. If you allocate space for some "length" and then use the entire char array without leaving room for the terminating '\0', you can get this error. E.g.:
char* arr = new char[5];
strcpy(arr, "Jagan"); // writes 6 bytes ("Jagan" plus '\0') into a 5-byte buffer
delete[] arr;
Instead, allocate arr with length 6 in this case.