I have the following struct:
struct Heap {
    int size;
    int *heap_array;
};
And I need to create an array:
Heap *rooms = new Heap[k];
k may be as large as 1000000. For k around 1000 it works, but with k around 10000 I get:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
Edit:
I forgot to add: I can't use vector, it is a task for my school... Only <cstdio> and <math> are allowed.
Are you using 32 or 64 bit?
Depending on this, your process can only consume memory up to a certain maximum size. I am guessing you are on 32 bits. Maybe you don't even have that much memory to begin with.
Also take a look here:
http://www.cplusplus.com/reference/std/new/bad_alloc/
Updated
You should ensure you are not leaking anything and that your heap allocations do not live longer than needed. Try to reduce the allocation requirements per Heap. Also, if you are allocating that many Heaps: where is the storage for heap_array? Are those all new[]ed as well?
If you exceed the amount of addressable memory for your system, you may need to run your program as a 64 bit executable.
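If each Heap also new[]s its own heap_array, those many small allocations (rather than the Heap[k] array itself, which is tiny) are the more likely source of the bad_alloc. A minimal sketch of one way around that, assuming the total capacity is known up front (k and per_heap below are made-up placeholders): carve every heap_array out of one contiguous buffer, so there is a single big allocation instead of k small ones. This also stays within the <cstdio>-only constraint.
#include <cstdio>

struct Heap {
    int size;
    int *heap_array;
};

int main() {
    const int k = 10000;        // number of heaps (placeholder)
    const int per_heap = 100;   // assumed capacity per heap (placeholder)

    Heap *rooms = new Heap[k];
    int *storage = new int[(long long)k * per_heap];   // one big block for all heaps

    for (int i = 0; i < k; ++i) {
        rooms[i].size = 0;
        rooms[i].heap_array = storage + (long long)i * per_heap;
    }

    std::printf("allocated %d heaps sharing one buffer\n", k);

    delete[] storage;   // one delete[] instead of k of them
    delete[] rooms;
    return 0;
}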
bad_alloc basically means that new is unable to allocate the requested space. It surprises me that you see it already when attempting to allocate 10,000 elements. How much memory are you using besides that?
You might want to check what your alignment is set to (how you do this is compiler specific). Using a vector shouldn't really help in avoiding the bad_alloc exception, especially not if you know the number of elements needed from the start.
You might be running your head against the wall here if you are trying to allocate more memory than you have (2 GB on 32-bit Windows); if this is the case, try looking at this answer:
C++ memory allocation: 'new' throws bad_alloc?
You also risk running into fragmentation issues: there might be enough space when counting free bytes, but not enough space in a single contiguous block. The link above offers some suggestions for that as well; in his post, the user Crashworks suggests using the (OS-specific) functions HeapAlloc and VirtualAlloc. Then again, this would conflict with your school assignment.
Instead, try investigating whether you get the same problem on a different computer.
Perhaps, if it is truly necessary to allocate and process enough structs to cause a bad_alloc exception, you could consider processing only a few at a time, preferably reusing already allocated structs. This would improve your memory usage and might even prove to be faster.
Related
The following is my C++ code.
I found that the memory use keeps increasing if I try to use the test1 array to calculate anything.
double **test1;
test1 = new double *[1000];
for (int i = 0; i < 1000; i++) {
    test1[i] = new double[100000000];
    test1[i][0] = rand() / (double)RAND_MAX * 100;
}
for (int j = 1; j < 100000000; j++) {
    for (int i = 0; i < 1000; i++) {
        test1[i][j] = test1[i][j - 1]; // this causes memory use to increase
    }
}
If I delete the line
test1[i][j]=test1[i][j-1];
the memory use becomes a small, constant value.
I thought I had already allocated the dynamic arrays in the first part, so the memory use should stay constant if I don't new any more arrays.
What causes the memory use to increase, and how do I avoid it?
(I use the Linux command "top" to monitor memory use.)
In the first loop you allocate, for each of the 1000 rows, 100,000,000 doubles - 800 MB per row. You write to only the first element of each row.
Later you write to the rest. When you do this, the operating system needs to actually give you the memory to write into, whereas initially it just gave you a mapping which would page fault later (when you write to it).
So basically, since each allocation is so large, the memory required to back it is not physically allocated until it is used.
The code is nonsensical, because eventually you try to write to 800 GB of memory. I doubt that will ever complete on a typical computer.
On a virtual memory system, the Linux kernel will (by default) not actually allocate any physical memory when your program does an allocation. Instead it will just adjust your virtual address space size.
Think of it like the kernel going "hmm, yeah, you say you want this much memory. I'll remember I promised you that, but let's see if you are really going to use it before I go fetch it for you".
If you then actually go and write to the memory, the kernel will get a page fault for the virtual address that is not actually backed by real memory, and at that point it will go and allocate some real memory to back the page you wrote to.
Many programs never write to all the memory they allocate, so by only fulfilling the promise when it really has to, the kernel saves huge amounts of memory.
You can see the difference between the amount you have allocated and the amount that is actually occupying real memory by looking at the VSZ (virtual size) and RSS (resident set size) columns in the output of ps aux.
If you want all allocations to be backed by physical memory all the time (you probably do not), then you can change the kernel's overcommit policy via the vm.overcommit_memory sysctl switch.
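A minimal sketch of the effect described above, assuming a Linux system with the default overcommit policy: the big allocation below normally succeeds immediately, and resident memory (RSS) only grows once the pages are actually written.
#include <cstddef>
#include <cstdio>
#include <new>

int main() {
    const std::size_t n = std::size_t(1) << 31;   // ask for 2 GiB of address space
    char *block = new (std::nothrow) char[n];     // usually succeeds instantly; little RSS growth
    if (!block) {
        std::puts("allocation failed up front");
        return 1;
    }
    std::puts("allocated: compare VSZ vs RSS now (e.g. ps aux), then press Enter");
    std::getchar();

    for (std::size_t i = 0; i < n; i += 4096)     // touch one byte per page
        block[i] = 1;                             // page faults here make the pages resident

    std::puts("touched: RSS should now be about 2 GiB, press Enter to exit");
    std::getchar();

    delete[] block;
    return 0;
}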
While investigating a memory leak in one of our projects, I ran into a strange issue. Somehow, the memory allocated for objects (a vector of shared_ptr to objects, see below) is not fully reclaimed when the parent container goes out of scope, and cannot be reused except for small objects.
The minimal example: when the program starts, I can allocate a single contiguous block of 1.5 GB without problem. After I use the memory somewhat (by creating and destroying a number of small objects), I can no longer do the big block allocation.
Test program:
#include <iostream>
#include <memory>
#include <vector>

using namespace std;

class BigClass
{
private:
    double a[10000];
};

void TestMemory() {
    cout << "Performing TestMemory" << endl;
    vector<shared_ptr<BigClass>> list;
    for (int i = 0; i < 10000; i++) {
        shared_ptr<BigClass> p(new BigClass());
        list.push_back(p);
    }
}

void TestBigBlock() {
    cout << "Performing TestBigBlock" << endl;
    char* bigBlock = new char[1024 * 1024 * 1536];
    delete[] bigBlock;
}

int main() {
    TestBigBlock();
    TestMemory();
    TestBigBlock();
}
The problem also occurs if I use plain pointers with new/delete or malloc/free in a loop instead of shared_ptr.
The culprit seems to be that after TestMemory(), the application's virtual memory stays at 827125760 (regardless of the number of times I call it). As a consequence, there's no free VM region big enough to hold 1.5 GB. But I'm not sure why, since I'm definitely freeing the memory I used. Is it some "performance optimization" the CRT does to minimize OS calls?
Environment is Windows 7 x64 + VS2012 + 32-bit app without LAA
Sorry for posting yet another answer since I am unable to comment; I believe many of the others are quite close to the answer really :-)
Anyway, the culprit is most likely address space fragmentation. I gather you are using Visual C++ on Windows.
The C / C++ runtime memory allocator (invoked by malloc or new) uses the Windows heap to allocate memory. The Windows heap manager has an optimization in which it will hold on to blocks under a certain size limit, in order to be able to reuse them if the application requests a block of similar size later. For larger blocks (I can't remember the exact value, but I guess it's around a megabyte) it will use VirtualAlloc outright.
Other long-running 32-bit applications with a pattern of many small allocations have this problem too; the one that made me aware of the issue is MATLAB - I was using the 'cell array' feature to basically allocate millions of 300-400 byte blocks, causing exactly this issue of address space fragmentation even after freeing them.
A workaround is to use the Windows heap functions (HeapCreate() etc.) to create a private heap, allocate your memory through that (passing a custom C++ allocator to your container classes as needed), and then destroy that heap when you want the memory back. This also has the happy side effect of being very fast compared to deleting a zillion blocks in a loop.
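For illustration only, a bare-bones sketch of the private-heap idea (Windows-specific, error handling trimmed, and the custom-allocator wiring for containers left out):
#include <windows.h>
#include <cstdio>

int main() {
    // Create a growable private heap (1 MiB initial size, no maximum).
    HANDLE heap = HeapCreate(0, 1 << 20, 0);
    if (!heap) return 1;

    // Serve the many small allocations from the private heap instead of the CRT heap.
    for (int i = 0; i < 1000000; ++i) {
        void *p = HeapAlloc(heap, 0, 350);   // ~350-byte blocks, as in the MATLAB anecdote
        if (!p) break;
        // ... use p ...
    }

    // Throw the whole heap away in one call: no per-block frees, and the
    // address space comes back in one piece.
    HeapDestroy(heap);

    std::puts("private heap destroyed");
    return 0;
}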
Re. "what is remaining in memory" to cause the issue in the first place: Nothing is remaining 'in memory' per se, it's more a case of the freed blocks being marked as free but not coalesced. The heap manager has a table/map of the address space, and it won't allow you to allocate anything which would force it to consolidate the free space into one contiguous block (presumably a performance heuristic).
There is absolutely no memory leak in your C++ program. The real culprit is memory fragmentation.
Just to be sure (regarding the memory-leak point), I ran this program under Valgrind, and it did not report any memory leaks.
//Valgrind Report
mantosh#mantosh4u:~/practice$ valgrind ./basic
==3227== HEAP SUMMARY:
==3227== in use at exit: 0 bytes in 0 blocks
==3227== total heap usage: 20,017 allocs, 20,017 frees, 4,021,989,744 bytes allocated
==3227==
==3227== All heap blocks were freed -- no leaks are possible
Please find my responses to the doubts raised in the original question below.
The culprit seems to be that after TestMemory(), the application's
virtual memory stays at 827125760 (regardless of number of times I
call it).
Yes, the real culprit is the hidden fragmentation caused during the TestMemory() function. Just to explain fragmentation, I have taken this snippet from Wikipedia:
"
when free memory is separated into small blocks and is interspersed by allocated memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application.
For example, consider a situation wherein a program allocates 3 continuous blocks of memory and then frees the middle block. The memory allocator can use this free block of memory for future allocations. However, it cannot use this block if the memory to be allocated is larger in size than this free block."
The above paragraph explains memory fragmentation very nicely. Some allocation patterns (such as frequent allocation and deallocation) lead to memory fragmentation, but the end impact (i.e. the 1.5 GB allocation failing) varies greatly between systems, as different OSes and heap managers have different strategies and implementations.
As an example, your program ran perfectly fine on my machine (Linux), yet you encountered a memory allocation failure.
Regarding your observation that the VM size remains constant: the VM size seen in Task Manager is not directly proportional to our memory allocation calls. It mainly depends on how many bytes are in the committed state. When you allocate dynamic memory (using new/malloc) and do not write to or initialize those memory regions, they do not enter the committed state, so the VM size is not affected. The VM size depends on many other factors and is a bit complicated, so we should not rely on it completely when reasoning about the dynamic memory allocation of our program.
As a consequence, there's no free VM regrion big enough to hold 1.5
GB.
Yes, due to fragmentation, there is no contiguous 1.5 GB region. Note that the total remaining (free) memory may well be more than 1.5 GB, but it is fragmented, so there is no single large contiguous block.
But I'm not sure why - since I'm definitely freeing the memory I used.
Is it some "performance optimization" CRT does to minimize OS calls?
I have explained above why this may happen even though you have freed all your memory. To fulfil the program's request, the OS calls into its virtual memory manager and tries to obtain memory that the heap manager can then use. But whether that additional memory can be obtained depends on many other complex factors which are not easy to summarize.
Possible Resolution of Memory Fragmentation
We should try to reuse allocations rather than frequently allocating and freeing memory. Certain patterns (such as requests of particular sizes in a particular order) can drive the overall memory into a fragmented state. Substantial design changes to your program may be needed to reduce memory fragmentation. This is a complex topic, and understanding the complete root cause of such behaviour requires internal knowledge of the memory manager. A minimal sketch of the reuse idea follows.
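As a rough illustration of "reuse rather than reallocate" (a sketch only; the sizes and the process function are made up for the example), keep one buffer alive and let it grow to the high-water mark instead of allocating and freeing on every iteration:
#include <cstddef>
#include <vector>

// Hypothetical processing step that needs scratch space of varying size.
void process(std::vector<char> &scratch, std::size_t needed) {
    if (scratch.size() < needed)
        scratch.resize(needed);    // grows occasionally, never shrinks
    // ... work with scratch.data(), up to 'needed' bytes ...
}

int main() {
    std::vector<char> scratch;     // allocated once, reused for every iteration
    for (int i = 0; i < 100000; ++i)
        process(scratch, 300 + (i % 200));   // no per-iteration new/delete churn
    return 0;
}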
However, there are tools on Windows-based systems that I am not very familiar with. I found one excellent SO post about which tool (on Windows) can be useful for checking the fragmentation status of your program yourself:
https://stackoverflow.com/a/1684521/2724703
This is not a memory leak. The memory you used was allocated by the C/C++ runtime. The runtime obtains a large chunk of memory from the OS up front, and each new you call is then served from that chunk. When you delete an object, the runtime does not return the memory to the OS immediately; it may hold on to it for performance reasons.
There is nothing here which indicates a genuine "leak". The pattern of memory you describe is not unexpected. Here are a few points which might help to understand. What happens is highly OS dependent.
A program often has a single heap which can be extended or shrunk in length. It is, however, one contiguous memory area, so changing the size just means changing where the end of the heap is. This makes it very difficult to ever "return" memory to the OS, since even one tiny object in that space will prevent it from shrinking. On Linux you can look up the function 'brk' (I know you're on Windows, but I presume it does something similar).
Large allocations are often done with a different strategy. Rather than putting them in the general-purpose heap, an extra block of memory is created. When it is deleted, this memory can actually be "returned" to the OS, since it's guaranteed that nothing else is using it.
Large blocks of unused memory don't tend to consume a lot of resources. If you generally aren't using the memory any more they might just get paged to disk. Don't presume that because some API function says you're using memory that you are actually consuming significant resources.
APIs don't always report what you think. Due to a variety of optimizations and strategies it may not actually be possible to determine how much memory is in use and/or available on a system at a particular moment. Unless you have intimate details of the OS you won't know for sure what those values mean.
The first two points can explain why a bunch of small blocks and one large block result in different memory patterns. The latter points indicate why this approach to detecting leaks is not useful. To detect genuine object-based "leaks" you generally need a dedicated profiling tool which tracks allocations.
For example, in the code provided:
1. TestBigBlock allocates and deletes an array; assume this uses a special memory block, so the memory is returned to the OS.
2. TestMemory extends the heap for all the small objects and never returns any of it to the OS. Here the heap is entirely available from the application's point of view, but from the OS's point of view it is still assigned to the application.
3. TestBigBlock now fails: although it would use a special memory block, it shares the overall address space with the heap, and there just isn't enough left after step 2 completes.
This is a question I asked myself when I was a student, but failing to get a satisfying answer, I gradually put it out of my mind... until today.
I know I can deal with a memory allocation error either by checking whether the returned pointer is NULL or by handling the bad_alloc exception.
OK, but I wonder: how and why can a call to new fail? To my knowledge, a memory allocation can fail if there is not enough space in the free store. But does this situation really occur nowadays, with several GB of RAM (at least on a regular computer; I am not talking about embedded systems)? Can there be other situations where a memory allocation failure may occur?
Although you've gotten a number of answers about why/how memory could fail, most of them are sort of ignoring reality.
In reality, on real systems, most of these arguments don't describe how things really work. Although they're right from the viewpoint that these are reasons an attempted memory allocation could fail, they're mostly wrong from the viewpoint of describing how things are typically going to work in reality.
Just for example, on Linux, if you try to allocate more memory than the system has available, your allocation will not fail (i.e., you won't get a null pointer or a std::bad_alloc exception). Instead, the system will "overcommit", so you get what appears to be a valid pointer -- but when/if you attempt to use all that memory, you'll get an exception, and/or the OOM killer will run, trying to free memory by killing processes that use a lot of memory. Unfortunately, this may just as easily kill the program making the request as other programs (in fact, many of the examples given that attempt to cause allocation failure by repeatedly allocating big chunks of memory should probably be among the first to be killed).
Windows works a little closer to how the C and C++ standards envision things (but only a little). Windows is typically configured to expand the swap file if necessary to meet a memory allocation request. This means that as you allocate more memory, the system will go semi-crazy with swapping memory around, creating bigger and bigger swap files to meet your request.
That will eventually fail, but on a system with lots of drive space, it might run for hours (most of it madly shuffling data around on the disk) before that happens. At least on a typical client machine where the user is actually...well, using the computer, he'll notice that everything has dragged to a grinding halt, and do something to stop it well before the allocation fails.
So, to get a memory allocation that truly fails, you're typically looking for something other than a typical desktop machine. A few examples include a server that runs unattended for weeks at a time, and is so lightly loaded that nobody notices that it's thrashing the disk for, say, 12 hours straight, or a machine running MS-DOS or some RTOS that doesn't supply virtual memory.
Bottom line: you're basically right, and they're basically wrong. While it's certainly true that if you allocate more memory than the machine supports, that something's got to give, it's generally not true that the failure will necessarily happen in the way prescribed by the C++ standard -- and, in fact, for typical desktop machines that's more the exception (pardon the pun) than the rule.
Apart from the obvious "out of memory", memory fragmentation can also cause this. Imagine a program that does the following:
until main memory is almost full:
    allocate 1020 bytes
    allocate 4 bytes
free all the 1020 byte blocks
If the memory manager puts all of these sequentially in memory in the order they are allocated, we now have plenty of free memory, but any allocation larger than 1020 bytes will not be able to find a contiguous space to put it in, and will fail.
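The same pattern sketched concretely in C++ (the counts are scaled so it runs quickly, and whether the final allocation actually fails depends on the allocator; the point is the shape of the holes):
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> big, small;

    // Interleave 1020-byte and 4-byte allocations, as in the pattern above.
    for (int i = 0; i < 100000; ++i) {
        big.push_back(new char[1020]);
        small.push_back(new char[4]);
    }

    // Free only the 1020-byte blocks: plenty of free bytes overall, but every
    // hole is at most ~1020 bytes wide and fenced in by the surviving 4-byte blocks.
    for (char *p : big) delete[] p;
    big.clear();

    // A request larger than any single hole may now fail (or force the heap to
    // grow) even though the total free space far exceeds the request.
    char *wide = new (std::nothrow) char[16 * 1024 * 1024];
    std::printf("16 MB allocation %s\n",
                wide ? "succeeded (the heap grew or holes were coalesced)" : "failed");
    delete[] wide;

    for (char *p : small) delete[] p;
    return 0;
}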
Usually on modern machines it will fail due to scarcity of virtual address space; if you have a 32-bit process that tries to allocate more than 2/3 GB of memory [1], even if there is physical RAM (or paging file) to satisfy the allocation, there simply won't be space in the virtual address space to map the newly allocated memory.
Another (similar) situation happens when the virtual address space is heavily fragmented, and thus the allocation fails because there's not enough contiguous addresses for it.
Also, running out of memory can happen, and in fact I got into such a situation last week; but several operating systems (notably Linux) in this case don't return NULL: Linux will happily give you a pointer to an area of memory that isn't already committed, and actually allocate it when the program tries to write to it; if at that moment there's not enough memory, the kernel will try to kill some memory-hogging processes to free memory (an exception to this behavior seems to be when you try to allocate more than the whole capacity of the RAM and of the swap partition - in such a case you get a NULL upfront).
Another cause of getting NULL from a malloc may be due to limits enforced by the OS over the process; for example, trying to run this code
#include <cstdlib>
#include <iostream>
#include <limits>

void mallocbsearch(std::size_t lower, std::size_t upper)
{
    std::cout << "[" << lower << ", " << upper << "]\n";
    if (upper - lower <= 1)
    {
        std::cout << "Found! " << lower << "\n";
        return;
    }
    std::size_t mid = lower + (upper - lower) / 2;
    void *ptr = std::malloc(mid);
    if (ptr)
    {
        std::free(ptr);
        mallocbsearch(mid, upper);
    }
    else
        mallocbsearch(lower, mid);
}

int main()
{
    mallocbsearch(0, std::numeric_limits<std::size_t>::max());
    return 0;
}
on Ideone you find that the maximum allocation size is about 530 MB, which is probably a limit enforced by setrlimit (similar mechanisms exist on Windows).
[1] It varies between OSes and can often be configured; the total virtual address space of a 32-bit process is 4 GB, but on all current mainstream OSes a big chunk of it (the upper 2 GB for 32-bit Windows with default settings) is reserved for kernel data.
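For the curious, a small sketch of how such a limit can be imposed and observed on Linux via setrlimit (the 256 MB / 512 MB numbers are arbitrary):
#include <sys/resource.h>   // POSIX (Linux); not available on Windows
#include <cstdio>
#include <cstdlib>

int main() {
    // Cap this process's address space at roughly 256 MB.
    struct rlimit lim;
    lim.rlim_cur = 256ull * 1024 * 1024;
    lim.rlim_max = 256ull * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        std::perror("setrlimit");
        return 1;
    }

    // A request above the limit now fails cleanly with NULL (or bad_alloc for new).
    void *p = std::malloc(512ull * 1024 * 1024);
    std::printf("512 MB malloc under a 256 MB address-space limit: %s\n",
                p ? "succeeded (?)" : "returned NULL");
    std::free(p);
    return 0;
}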
The amount of memory available to the given process is finite. If the process exhausts its memory, and tries to allocate more, the allocation would fail.
There are other reasons why an allocation could fail. For example, the heap could get fragmented and not have a single free block large enough to satisfy the allocation request.
I'm working with some C++ code that implements a graph algorithm that uses a lot of small chunks of memory (a relative of gSpan, but that doesn't matter). The code is implemented in C++ and uses std::vectors to store many small elements (on the order of 64 bytes each). However, I'm using this on much larger data sets than the original authors, and I'm running out of memory.
It appears, however, that I'm running out of memory prematurely. Fragmentation? I suspect it is because std::vectors try to increase in size every time they need more memory, and vectors insist on contiguous memory. I have 8 GB of RAM and 18 GB of swap, yet when std::bad_alloc is thrown, I'm only using 6.5 GB resident and ~8 GB virtual. I've caught the bad_alloc exceptions and printed out the vector sizes, and here's what I see:
size: 536870912
capacity: 536870912
maxsize: 1152921504606846975
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
So, clearly, we've hit the maximum size of the vector and the library is trying to allocate more, and failing.
So my questions are:
1. Am I correct in assuming that this is what the problem is?
2. What is the solution (besides "buy more RAM")? I'm willing to trade CPU time for fitting in memory.
3. Should I convert the entire code to use std::list (and somehow implement operator[] for the places where the code uses it)? Would that even be more RAM efficient? At the very least it would allow the list elements to be non-contiguous... right?
4. Is there a better allocator out there that I can use to override the standard one for vectors in this use case?
5. What other solutions am I missing?
Since I don't know how much memory will ultimately be used, I'm aware that even if I make changes there still might not be enough memory to do my calculations, but I suspect I can at least get a lot further than I'm getting now, since it seems to be giving up very quickly.
I would try using std::deque as a direct drop-in for vector. There's a possibility that since it (often) uses a collection of chunks, extending the deque could be much cheaper than extending a vector (in terms of extra memory needed).
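A sketch of what the swap might look like, assuming the elements are plain ~64-byte records (Record below is a placeholder for the real element type):
#include <cstdint>
#include <deque>

// Placeholder for the ~64-byte elements mentioned in the question.
struct Record {
    std::uint64_t fields[8];
};

int main() {
    // std::deque stores elements in fixed-size chunks, so growing it never needs
    // one huge contiguous reallocation the way a growing vector does.
    std::deque<Record> records;

    for (int i = 0; i < 1000000; ++i)
        records.push_back(Record{});

    // operator[] still works, so most call sites don't need to change.
    records[12345].fields[0] = 42;
    return 0;
}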
I don't know what to think here...
We have a component that runs as a service. It runs perfectly well on my local machine, but on some other machine (both machines have 2 GB of RAM) it starts to generate bad_alloc exceptions on the second and subsequent days. The thing is that the memory usage of the process stays at approximately the 50 MB level. The other weird thing is that, by means of tracing messages, we have localized the exception as being thrown from a stringstream object that does nothing but insert no more than 1-2 KB of data into the stream. We're using STLport, if that matters.
Now, when you get a bad_alloc exception, you think it's a memory leak. But all our manual allocations are wrapped in smart pointers. Also, I can't understand how a stringstream object can run out of memory when the whole process uses only ~50 MB (the memory usage stays approximately constant, and certainly doesn't rise, from day to day).
I can't provide you with code, because the project is really big, and the part which throws the exception really does nothing else but create a stringstream, << some data into it, and then log it.
So, my question is... how can a memory leak/bad_alloc occur when the process uses only 50 MB of memory out of 2 GB? What other wild guesses do you have as to what could possibly be wrong?
Thanks in advance, I know the question is vague etc., I'm just sort of desperate and I tried my best to explain the problem.
One likely reason, given your description, is that you try to allocate a block of some unreasonably big size because of an error in your code. Something like this:
size_t numberOfElements; // uninitialized
if ( .... ) {
    numberOfElements = obtain();
}
elements = new Element[numberOfElements];
Now, if numberOfElements is left uninitialized, it can contain some unreasonably big number, so you effectively try to allocate a block of, say, 3 GB, which the memory manager refuses to do.
So it may be not that your program is short on memory, but that it tries to allocate more memory than it could possibly be allowed to even under the best conditions.
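A defensive version of the same pattern might look like the following sketch (condition() and obtain() are stand-ins for whatever the real code does):
#include <cstddef>
#include <vector>

struct Element { int value; };

// Stand-ins for the hypothetical code in the answer above.
bool condition() { return true; }
std::size_t obtain() { return 1000; }

std::vector<Element> makeElements() {
    std::size_t numberOfElements = 0;      // always initialized, never garbage
    if (condition()) {
        numberOfElements = obtain();
    }
    // Sanity-check before a potentially huge allocation.
    if (numberOfElements > 100000000) {
        return {};                          // or report the error
    }
    return std::vector<Element>(numberOfElements);
}

int main() {
    return makeElements().empty() ? 1 : 0;
}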
bad_alloc doesn't necessarily mean there is not enough memory. The allocation functions might also fail because the heap is corrupted. You might have some buffer overrun or code writing into deleted memory, etc.
You could also use Valgrind or one of its Windows replacements to find the leak/overrun.
Just a hunch, but I have had trouble in the past when allocating arrays like so:
int array1[SIZE]; // SIZE limited by COMPILER to the size of the stack frame
when SIZE is a large number.
The solution was to allocate with the new operator:
int* array2 = new int[SIZE]; // SIZE limited only by OS/Hardware
I found this very confusing; the reason turned out to be the stack frame, as discussed in the answer by Martin York here:
Is there a max array length limit in C++?
All the best,
Tom
Check the profile of other processes on the machine using Process Explorer from sysinternals - you will get bad_alloc if memory is short, even if it's not you that's causing memory pressure.
Check your own memory usage using umdh to get snapshots and compare the usage profile over time. You'll have to do this early in the cycle to avoid blowing up the tool, but if your process's behaviour is not degrading over time (i.e. no sudden pathological behaviour) you should get accurate info on its memory usage at time T vs time T+t.
Another long shot: you don't say in which of the three operations the error occurs (construction, << or log), but the problem may be memory fragmentation rather than memory consumption. Maybe stringstream can't find a contiguous memory block large enough to hold a couple of KB.
If this is the case, and if you exercise that function on the first day (without mishap), then you could make the stringstream a static variable and reuse it. As far as I know, stringstream does not deallocate its buffer space during its lifetime, so if it establishes a big buffer on the first day it will continue to have it from then on (for added safety you could run a 5 KB dummy string through it when it is first constructed).
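A rough sketch of that reuse idea (LogLine is a made-up name, and the 5 KB warm-up is the suggestion from above; whether the buffer capacity survives the reset is implementation-dependent, which is why the warm-up helps):
#include <iostream>
#include <sstream>
#include <string>

void LogLine(const std::string &msg) {
    // One stream for the program's lifetime; the buffer it grows on day one
    // tends to be kept and reused instead of being reallocated on every call.
    static std::ostringstream stream(std::string(5 * 1024, ' '));   // ~5 KB warm-up

    stream.str("");    // reset the contents (capacity is typically retained)
    stream.clear();    // reset any error flags
    stream << msg;

    std::cout << stream.str() << '\n';
}

int main() {
    LogLine("first message");
    LogLine("second message");
    return 0;
}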
I fail to see why a stream would throw. Don't you have a dump of the failed process? Or perhaps attach a debugger to it to see what the allocator is trying to allocate?
But if you did overload the operator <<, then perhaps your code does have a bug.
Just my 2 (euro) cts...
1. Fragmentation?
The memory could be fragmented.
At some moment, you try to allocate SIZE bytes, but the allocator finds no contiguous chunk of SIZE bytes in memory, and so throws a bad_alloc.
Note: This answer was written before I read this possibility was ruled out.
2. Signed vs. unsigned?
Another possibility would be the use of a signed value for the size to be allocated:
char *p = new char[i];
If the value of i is negative (e.g. -1), the cast into the unsigned integral size_t will make it go beyond what is available to the memory allocator.
As the use of signed integral types is quite common in user code, if only to represent an invalid value as a negative number (e.g. -1 for a failed search), this is a real possibility.
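A tiny demonstration of how the conversion blows up (malloc is used here so the failure shows up as NULL rather than an exception; on a 64-bit system the requested size becomes astronomically large):
#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main() {
    int i = -1;                                    // e.g. the result of a failed search
    std::size_t requested = static_cast<std::size_t>(i);

    // On a 64-bit system this prints roughly 18446744073709551615.
    std::printf("a \"size\" of %d becomes a request for %zu bytes\n", i, requested);

    void *p = std::malloc(requested);              // the allocator simply refuses
    std::printf("allocation %s\n", p ? "unexpectedly succeeded" : "failed, as expected");
    std::free(p);
    return 0;
}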
~className() {
    // delete stuff in here
}
By way of example, memory leaks can occur when you use the new operator in C++ and forget to use the delete operator.
Or, in other words, when you allocate a block of memory and forget to deallocate it.
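For completeness, a minimal sketch of the leak described above and two common ways to avoid it (manual delete[] and RAII via unique_ptr):
#include <memory>

void leaks() {
    int *p = new int[100];   // allocated...
    (void)p;
    // ... and never deallocated (no delete[] p;): this is the leak
}

void manual() {
    int *p = new int[100];
    // ... use p ...
    delete[] p;              // every new[] matched by exactly one delete[]
}

void automatic() {
    // RAII: the smart pointer calls delete[] for us, even on early return or exception.
    std::unique_ptr<int[]> p(new int[100]);
    // ... use p[i] ...
}

int main() {
    leaks();
    manual();
    automatic();
    return 0;
}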