I am writing a simple C++ program on macOS. I have just:
int main()
{
int *n = new int[50000000];
}
I launch this program in lldb and put a breakpoint at the line where n is allocated. Then I launch top in another tab and see that memory usage is 336K pre-allocation. When I step over that line with n (next) in lldb, so that the allocation for n happens, I expect my program's memory usage to go up. However, top shows me the same amount of memory used by my program. What could be the reason for this? I am trying to understand how memory allocation happens in C++, which is why I am doing this.
I have not exited the scope of main; when I check top again, I am sitting at the closing curly brace of main.
The top command shows the process stats as viewed by the operating system: it shows how much memory was allocated to the process, but not how much of that memory is effectively in use. It's not an accurate tool for monitoring memory allocation.
Memory allocation on the heap/free store is implementation dependent in C++, but it is usually not mapped one to one onto OS allocation calls. For performance reasons (calls into the OS are slower than calls inside your userland code), memory is obtained from the OS in larger chunks:
When the C++ runtime starts, it usually allocates some memory from the OS, both for the standard library objects it needs and to initialize the free store so it can quickly satisfy allocation requests.
Only if this initial memory is exhausted will the standard library allocate more memory from the operating system.
And that allocation is again done in larger chunks, so that not every new results in an OS call.
From your observations, I guess that this initial allocation is larger than the block you requested (50,000,000 ints is roughly 200 MB). Try with a much larger value to see the difference.
If you want to track memory consumption more precisely, you need profiling tools, for example valgrind or the macOS heap command.
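If you want to try the "much larger value" experiment, here is a minimal sketch (the 500,000,000 element count and the pause via std::cin.get() are my additions for illustration, not part of the original program):

#include <iostream>

int main()
{
    // Roughly 2 GB of ints, large enough to exceed any initial runtime chunk.
    int *n = new int[500000000];
    std::cout << "allocated; check top now, then press Enter to exit\n";
    std::cin.get();   // pause so the process can be inspected from another terminal
    delete[] n;
}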
Related
I am observing the following behavior in my test program:
I am calling malloc() for 1 MB and then free()ing it after sleep(10). I do this five times, observing memory consumption in top while the program is running.
Once free()d, I expect the program's virtual memory (VIRT) consumption to drop by 1 MB. But actually it doesn't; it stays stable. What is the explanation for this behavior? Does malloc() keep some reserve while allocating memory?
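For reference, the test described above presumably looks something like this (a sketch based on the description; only the 1 MB size, the sleep(10), and the five repetitions come from the question):

#include <cstdlib>
#include <unistd.h>

int main()
{
    // Allocate 1 MB, hold it for 10 seconds, then free it; repeat five times
    // while watching the VIRT column in top from another terminal.
    for (int i = 0; i < 5; ++i) {
        void *p = std::malloc(1024 * 1024);
        sleep(10);
        std::free(p);
    }
}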
Once free()d, I expect the program's virtual memory (VIRT) consumption to drop by 1 MB.
Well, this is not guaranteed by the C standard. It only says that once you free() the memory, you should not access it any more.
Whether the memory block is actually returned to the available memory pool or kept aside for future allocations is decided by the memory manager.
The C standard doesn't require the implementer of malloc and free to return the memory to the OS directly, so different C library implementations will behave differently. Some of them might give it back directly and some might not. In fact, the same implementation will also behave differently depending on the allocation sizes and patterns.
This behavior, of course, is for good reasons:
It is not always possible. OS-level memory allocations are usually done in pages (of 4 KB, 4 MB, or similar sizes at once), and if a small part of a page is still in use after another part has been freed, the page cannot be given back to the operating system until that part is also freed.
Efficiency. It is very likely that an application will ask for memory again, so why give it back to the OS only to ask for it again soon after? (Of course, there is probably a limit on how much memory is kept.)
In most cases, you are not accountable for the memory you free if the implementation decides to keep it (assuming it is a good implementation); sooner or later it will be reallocated or returned to the OS. Hence, optimizing for memory usage should be based on the amount you have malloc()ed and not yet free()d. The case where you have to worry is when your allocation patterns/sizes start causing memory fragmentation, which is a very big topic on its own.
If you are on an embedded system, however, where the amount of memory available is limited and you need more control over when/how memory is allocated and freed, then you need to ask the OS for memory pages directly and manage them manually.
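For illustration, a minimal sketch of that direct approach on a POSIX system (the 16 MB size is arbitrary):

#include <sys/mman.h>
#include <cstddef>

int main()
{
    const std::size_t size = 16 * 1024 * 1024;  // 16 MB, handed out as whole pages

    // Ask the kernel for anonymous pages directly, bypassing malloc's free lists.
    void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    // ... carve the region up with your own allocator ...

    // munmap returns the pages to the OS immediately and unconditionally.
    munmap(p, size);
}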
Edit: I did not explain why you are not accountable for memory you free.
The reason is that on a modern OS, allocated memory is virtual. If you allocate 512 MB on a 32-bit system or 10 TB on a 64-bit system, as long as you don't read or write that memory, the OS will not reserve any physical space for it. Actually, it will only reserve physical memory for the pages you touch within that big block, not the entire block. And after "a while of not using that memory", its contents will be copied to disk and the underlying physical memory will be used for something else.
This is very dependent on the actual malloc implementation in use.
Under Linux, there is a threshold (MMAP_THRESHOLD) to decide where the memory for a given malloc() request comes from.
If the requested amount is below or equal to MMAP_THRESHOLD, the request is satisfied either from the so-called "free list", if any memory blocks have already been free()d, or by raising the "break line" of the program (i.e. the end of the data segment); the memory made available to the program this way is used for the request.
On free(), the freed memory block is added to the free list. If there is enough free memory at the very end of the data segment, the break line (mentioned above) is moved back to shrink the data segment, returning the excess memory to the OS.
If the requested amount exceeds MMAP_THRESHOLD, a separate memory block is requested from the OS (via mmap) and returned to it again during free().
See also https://linux.die.net/man/3/malloc for details.
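As a small illustration of the threshold (glibc-specific; the sizes are arbitrary), M_MMAP_THRESHOLD can be lowered with mallopt so that even modest requests are served by mmap and therefore handed back to the kernel immediately on free():

#include <malloc.h>   // glibc: mallopt, M_MMAP_THRESHOLD
#include <cstdlib>

int main()
{
    // Lower the threshold to 64 KB (the default is 128 KB and may grow dynamically).
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    void *p = std::malloc(256 * 1024);  // above the threshold, so served by mmap
    std::free(p);                       // munmap'ed right away, visible to the OS
}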
I have a C++ application (gcc 4.9.1, glibc 2.17) running on Linux (Centos 7). It uses various third-party libraries, notably Boost 1.61. As the application runs, I can watch its memory usage increasing steadily via htop's VIRT and RES columns, or the ps command, etc. If I let it go long enough, it will use enormous amounts of that memory and swamp the box.
Sounds like a leak, but it passes valgrind with only a few bytes leaked, all in places I'd expect. Debug print messages indicate program flow is as expected.
Digging further via the debugger, I find that most of this memory is still in use when __run_exit_handlers is invoked at the end of main. I can step through various calls to free as it works through the global destructor chain. After it finishes those, I observe only a minimal downward change in the apparent memory usage. Then, finally it calls _exit(), and only then is the memory restored to the operating system, all at once.
Can anyone offer me additional tips on how to proceed debugging this? Why won't my program give that memory back?
Everything here is based on GNU libc implementation of malloc running on Linux.
The test program below does not seem to give any memory back to the system after freeing it (strace does not show sbrk calls that return memory to the kernel):
#include <cstdlib>

int main()
{
    static const int N = 5000000;
    static void *arr[N];
    for (int i = 0; i < N; i++)
        arr[i] = std::malloc(1024);
    // free in reverse order to simplify the allocator's job
    for (int i = N - 1; i >= 0; i--)
        std::free(arr[i]);
}
It looks like glibc does not give the memory back at all. According to the mallopt(3) man page, the parameter M_TRIM_THRESHOLD is responsible for giving memory back; by default it is 128 KB, while the test program allocates and frees 5 GB of memory. It looks like some other detail of the malloc implementation prevents it from freeing the memory.
At the moment I can recommend the following solutions:
If you can, try calling malloc_trim once in a while or after freeing a lot of memory. This should force trimming and give memory back to the OS using MADV_DONTNEED (see the sketch after this list).
Avoid allocating a large number of small objects with malloc or operator new; instead allocate them from a memory pool of a size greater than M_MMAP_THRESHOLD, and try destroying that pool afterwards if program logic allows it. Memory chunks of size greater than M_MMAP_THRESHOLD are immediately released back to the OS.
Same as the previous one, but should be faster: allocate memory pools for small objects using mmap and release the memory back to the OS using madvise with MADV_DONTNEED/MADV_FREE.
Try using another allocator that might take advantage of MADV_FREE to return memory back to the system (jemalloc?).
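A minimal sketch of the first suggestion (glibc-specific; the block count and size are chosen arbitrarily):

#include <malloc.h>   // glibc: malloc_trim
#include <cstdlib>

int main()
{
    static const int N = 1000000;
    static void *arr[N];

    // Allocate and then free a large number of small blocks.
    for (int i = 0; i < N; i++)
        arr[i] = std::malloc(1024);
    for (int i = N - 1; i >= 0; i--)
        std::free(arr[i]);

    // Explicitly ask glibc to trim the heap; freed pages are returned to the
    // kernel (by shrinking the break and/or MADV_DONTNEED on unused regions).
    malloc_trim(0);
}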
I have found this old (2006) ticket on glibc's bugzilla. It says there that free never returns memory back to the kernel, unless malloc_trim is called.
Newer versions of free seem to have code that executes internal systrim function that should trim top of the arena, but I wasn't able to make it work.
You can profile your memory allocation using valgrind --tool=massif ./executable
Check out the documentation at http://valgrind.org/docs/manual/ms-manual.html
Then, once you have profiling data, you can apply memory pools and other techniques. Since you already use Boost, you can find several such tools there (e.g. Boost.Pool).
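For instance, a small Boost.Pool sketch (assuming your small objects can live in an object pool; the struct here is a stand-in):

#include <boost/pool/object_pool.hpp>

struct Small { int a, b; };

int main()
{
    // The pool carves many small objects out of a few large chunks,
    // keeping them off the general-purpose heap.
    boost::object_pool<Small> pool;

    Small *s = pool.construct();
    // ... use s ...
    pool.destroy(s);
}   // the pool releases its chunks all at once when it is destroyed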
I have the following loop, which pops from a C++ concurrent queue I have, based on the implementation here: https://juanchopanzacpp.wordpress.com/2013/02/26/concurrent-queue-c11/
while (!interrupted)
{
    pxData data = queue->pop();
    if (data.value == -1)
    {
        break; // exit loop on terminating condition
    }
    usleep(7000); // stub to simulate processing
}
I am looking at the memory history using System Monitor in CentOS7.
I'm trying to free up the memory taken up by the queue after reading each value from it. However, as the while loop runs, I don't see the memory usage going down, even though I've verified that the queue length does go down.
It does go down, however, when -1 is encountered and the loop exits (the program is still running). But I can't wait for that, because where usleep is, I want to do some intensive processing.
Question: why doesn't the memory occupied by data get freed (according to System Monitor)? Isn't stack-allocated memory supposed to be freed when the variable goes out of scope?
The struct is defined as follows, and populated at the beginning of the program.
typedef struct pxData
{
    float value; // -1 value terminates the loop
    float x, y, z;
    std::complex<float> valueData[65536];
} pxData;
It's populated with ~10,000 pxData elements, which roughly translates to 5 GB; the system only has ~8 GB. So it's important that the memory is freed up for other processing on the system.
There are a few things at play here.
Virtual Memory
First, you need to understand that just because your program is "using" 5 GB of memory does not mean that there are only 3 GB of RAM left for other programs. Virtual memory means that those 5 GB might be only 1 GB of actual "resident" data, and the other 4 GB may actually be on disk rather than in RAM. So it's important to look at the "resident set size" rather than the "virtual size" when you're looking at your program. And note that if your system actually runs low on RAM, the OS may shrink the RSS of some programs by "paging out" some of their memory. So don't worry too much about "5 GB" appearing in the system monitor--worry if you have a real, concrete performance problem.
Heap Allocation
The second aspect is why your virtual size does not decrease as you remove items from the queue. We can guess that you put those elements into the queue by creating them with malloc or new one-by-one, then pushing them onto the back of the queue. This means that the first element you allocated will come out of the queue first. And that in turn means that when you have drained 90% of the queue, your memory allocation might look like this:
[program|------------------unused-------------------|pxData]
The problem here is that in the real world, just because you free or delete something does not mean the operating system instantly reclaims that memory. In fact, it may not be able to reclaim any unused spans unless they are at the "end" (i.e. most recently allocated). Since C++ does not have garbage collection and cannot move items around in memory without your consent, you end up with this big "hole" in your program's virtual memory. That hole would be used to satisfy future memory allocation requests, but if you haven't got any, it just sits there, until the queue is completely empty:
[program|------------------unused--------------------------]
Then the system is able to shrink your virtual address space back down:
[program]
Which brings you back to where you started.
Solutions
If you want to "fix" this, one option is to allocate your memory in "reverse", i.e. put the last items allocated into the front of the queue.
Another option is to allocate the elements for the queue via mmap, which is something that e.g. Linux will do automatically for allocations which are "large." You can change the threshold for this by calling mallopt(3) with M_MMAP_THRESHOLD and setting it to be a little bit smaller than your struct size. This makes the allocations independent of each other, so the OS can reclaim them individually. This technique can even be applied to existing programs without recompilation, so is often useful if you need to solve this problem in a program you cannot modify.
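A hedged sketch of that mallopt route, reusing the pxData type from the question (glibc-specific; whether this is needed at all depends on the current threshold, which defaults to 128 KB but may grow dynamically). If recompilation is not an option, glibc also reads this setting from the MALLOC_MMAP_THRESHOLD_ environment variable (see mallopt(3)).

#include <malloc.h>   // glibc: mallopt, M_MMAP_THRESHOLD
#include <complex>

struct pxData
{
    float value;
    float x, y, z;
    std::complex<float> valueData[65536];
};

int main()
{
    // Pin the mmap threshold just below sizeof(pxData) (~512 KB) so that each
    // element gets its own mmap'ed block and is returned to the OS when freed.
    mallopt(M_MMAP_THRESHOLD, static_cast<int>(sizeof(pxData)) - 1);

    pxData *p = new pxData();  // served by mmap
    delete p;                  // munmap'ed immediately
}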
A C++ implementation would call some operator delete to release dynamically allocated (using some operator new) memory. In several C++ standard libraries, new calls malloc and delete calls free.
(I am focusing with a Linux point of view, but the principles are similar on other OSes)
But while malloc (or ::operator new) sometimes asks the OS kernel for more memory with system calls that change the virtual address space, like mmap(2), free (or ::operator delete) often simply marks the released memory zone as available again for future calls to malloc (or to new).
So from the kernel's point of view (e.g. as seen through /proc/, see proc(5)), the virtual address space is not changing and the memory remains consumed, even though inside the application it is marked as "freed" and will be reused by some future allocation (by future calls to malloc or new).
And most C++ standard containers are internally using heap data. In particular your local (stack-allocated) std::map or std::vector (or std::deque) variable will call new & delete for internal data.
BTW, I find your declaration quite strange. Unless every struct pxData has exactly 65536 used valueData slots, I would suggest using a std::vector, so you have
std::vector<std::complex<float>> valueData;
and improve your code accordingly. You'll probably need to do some valueData.reserve(somesize); and/or valueData.resize(somesize); and/or valueData.push_back(somecomplexnumber); etc....
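A minimal sketch of that change (the reserve size below is just an example; size it to whatever each element actually needs):

#include <complex>
#include <vector>

struct pxData
{
    float value;                                 // -1 value terminates the loop
    float x, y, z;
    std::vector<std::complex<float>> valueData;  // sized per element; releases its heap storage on destruction
};

int main()
{
    pxData d;
    d.valueData.reserve(1024);                   // only as much as this element needs
    d.valueData.push_back({0.5f, -0.5f});
}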
While investigating a memory leak in one of our projects, I've run into a strange issue. Somehow, the memory allocated for objects (a vector of shared_ptr to objects, see below) is not fully reclaimed when the parent container goes out of scope, and afterwards it can't be reused except for small objects.
The minimal example: when the program starts, I can allocate a single contiguous block of 1.5 GB without problem. After I use the memory somewhat (by creating and destroying a number of small objects), I can no longer do the big block allocation.
Test program:
#include <iostream>
#include <memory>
#include <vector>
using namespace std;

class BigClass
{
private:
    double a[10000];
};

void TestMemory() {
    cout << "Performing TestMemory" << endl;
    vector<shared_ptr<BigClass>> list;
    for (int i = 0; i < 10000; i++) {
        shared_ptr<BigClass> p(new BigClass());
        list.push_back(p);
    }
}

void TestBigBlock() {
    cout << "Performing TestBigBlock" << endl;
    char* bigBlock = new char[1024 * 1024 * 1536];
    delete[] bigBlock;
}

int main() {
    TestBigBlock();
    TestMemory();
    TestBigBlock();
}
The problem also repeats when using plain pointers with new/delete or malloc/free in a loop, instead of shared_ptr.
The culprit seems to be that after TestMemory(), the application's virtual memory stays at 827125760 bytes (regardless of the number of times I call it). As a consequence, there's no free VM region big enough to hold 1.5 GB. But I'm not sure why, since I'm definitely freeing the memory I used. Is it some "performance optimization" the CRT does to minimize OS calls?
Environment is Windows 7 x64 + VS2012 + 32-bit app without LAA
Sorry for posting yet another answer since I am unable to comment; I believe many of the others are quite close to the answer really :-)
Anyway, the culprit is most likely address space fragmentation. I gather you are using Visual C++ on Windows.
The C / C++ runtime memory allocator (invoked by malloc or new) uses the Windows heap to allocate memory. The Windows heap manager has an optimization in which it will hold on to blocks under a certain size limit, in order to be able to reuse them if the application requests a block of similar size later. For larger blocks (I can't remember the exact value, but I guess it's around a megabyte) it will use VirtualAlloc outright.
Other long-running 32-bit applications with a pattern of many small allocations have this problem too; the one that made me aware of the issue is MATLAB - I was using the 'cell array' feature to basically allocate millions of 300-400 byte blocks, causing exactly this issue of address space fragmentation even after freeing them.
A workaround is to use the Windows heap functions (HeapCreate() etc.) to create a private heap, allocate your memory through that (passing a custom C++ allocator to your container classes as needed), and then destroy that heap when you want the memory back. This also has the happy side-effect of being very fast compared to delete()ing a zillion blocks in a loop.
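A minimal sketch of that workaround (Windows-specific; the block count and size are illustrative):

#include <windows.h>

int main()
{
    // Create a growable private heap (initial and maximum size 0 = grow on demand).
    HANDLE heap = HeapCreate(0, 0, 0);
    if (heap == NULL)
        return 1;

    // Allocate the many small blocks from the private heap instead of the CRT heap.
    const int count = 100000;
    static void *blocks[count];
    for (int i = 0; i < count; ++i)
        blocks[i] = HeapAlloc(heap, 0, 400);

    // ... use the blocks ...

    // Destroying the heap gives all of its pages back to the OS in one call,
    // with no per-block free/delete and no fragmentation left behind.
    HeapDestroy(heap);
}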
Re. "what is remaining in memory" to cause the issue in the first place: Nothing is remaining 'in memory' per se, it's more a case of the freed blocks being marked as free but not coalesced. The heap manager has a table/map of the address space, and it won't allow you to allocate anything which would force it to consolidate the free space into one contiguous block (presumably a performance heuristic).
There is absolutely no memory leak in your C++ program. The real culprit is memory fragmentation.
Just to be sure (regarding the memory leak point), I ran this program under Valgrind, and it did not give any memory leak information in the report.
//Valgrind Report
mantosh#mantosh4u:~/practice$ valgrind ./basic
==3227== HEAP SUMMARY:
==3227== in use at exit: 0 bytes in 0 blocks
==3227== total heap usage: 20,017 allocs, 20,017 frees, 4,021,989,744 bytes allocated
==3227==
==3227== All heap blocks were freed -- no leaks are possible
Please find my responses to the queries/doubts raised in the original question below.
The culprit seems to be that after TestMemory(), the application's
virtual memory stays at 827125760 (regardless of number of times I
call it).
Yes, the real culprit is the hidden fragmentation that occurs during the TestMemory() function. Just to explain fragmentation, I have taken this snippet from Wikipedia:
"
when free memory is separated into small blocks and is interspersed by allocated memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application.
For example, consider a situation wherein a program allocates 3 continuous blocks of memory and then frees the middle block. The memory allocator can use this free block of memory for future allocations. However, it cannot use this block if the memory to be allocated is larger in size than this free block."
The paragraph above explains memory fragmentation very nicely. Some allocation patterns (such as frequent allocation and deallocation) lead to memory fragmentation, but the end impact (i.e. the 1.5 GB allocation failing) varies greatly between systems, as different OS/heap managers use different strategies and implementations.
As an example, your program ran perfectly fine on my machine (Linux), whereas you encountered the memory allocation failure.
Regarding your observation that the VM size remains constant: the VM size seen in the task manager is not directly proportional to our memory allocation calls. It mainly depends on how many bytes are in the committed state. When you allocate some dynamic memory (using new/malloc) and do not write/initialize anything in those memory regions, it would not go into the committed state and hence the VM size would not be impacted by it. The VM size depends on many other factors and is a bit complicated, so we should not rely completely on it when reasoning about the dynamic memory allocation of our program.
As a consequence, there's no free VM region big enough to hold 1.5 GB.
Yes, due to fragmentation there is no contiguous 1.5 GB region. It should be noted that the total remaining (free) memory may well be more than 1.5 GB, but it is fragmented, hence there is no single big contiguous block.
But I'm not sure why, since I'm definitely freeing the memory I used. Is it some "performance optimization" the CRT does to minimize OS calls?
I have explained above why this may happen even though you have freed all your memory. Now, in order to fulfil the user program's request, the OS calls into its virtual memory manager and tries to allocate the memory that the heap manager will use. But grabbing this additional memory depends on many other complex factors which are not very easy to understand.
Possible Resolution of Memory Fragmentation
We should try to reuse allocated memory rather than frequently allocating and freeing it. There can be patterns (such as allocations of particular request sizes in a particular order) which lead the overall memory into a fragmented state. A substantial design change in your program may be needed to reduce memory fragmentation. This is a complex topic and requires an internal understanding of the memory manager to get to the complete root cause of such things.
However, there are tools on Windows-based systems which I am not very familiar with. But I found one excellent SO post about which tools (on Windows) can be useful to check the fragmentation status of your program yourself:
https://stackoverflow.com/a/1684521/2724703
This is not a memory leak. The memory you used was allocated by the C/C++ runtime. The runtime obtains a bulk of memory from the OS once, and each new you call is then allocated from that bulk memory. When you delete an object, the runtime does not return the memory to the OS immediately; it may hold on to that memory for performance reasons.
There is nothing here which indicates a genuine "leak". The pattern of memory you describe is not unexpected. Here are a few points which might help to understand. What happens is highly OS dependent.
A program often has a single heap which can be extended or shrunk in length. It is, however, one contiguous memory area, so changing the size just means changing where the end of the heap is. This makes it very difficult to ever "return" memory to the OS, since even one tiny object in that space will prevent it from shrinking. On Linux you can look up the function 'brk' (I know you're on Windows, but I presume it does something similar).
Large allocations are often done with a different strategy. Rather than putting them in the general-purpose heap, an extra block of memory is created. When it is deleted this memory can actually be "returned" to the OS since it's guaranteed nothing else is using it.
Large blocks of unused memory don't tend to consume a lot of resources. If you generally aren't using the memory any more they might just get paged to disk. Don't presume that because some API function says you're using memory that you are actually consuming significant resources.
APIs don't always report what you think. Due to a variety of optimizations and strategies it may not actually be possible to determine how much memory is in use and/or available on a system at a particular moment. Unless you have intimate details of the OS you won't know for sure what those values mean.
The first two points can explain why a bunch of small blocks and one large block result in different memory patterns. The latter points indicate why this approach to detecting leaks is not useful. To detect genuine object-based "leaks" you generally need a dedicated profiling tool which tracks allocations.
For example, in the code provided:
TestBigBlock allocates and deletes an array; assuming this uses a special memory block, the memory is returned to the OS.
TestMemory extends the heap for all the small objects and never returns any of it to the OS. Here the heap is entirely available from the application's point of view, but from the OS's point of view it is still assigned to the application.
TestBigBlock now fails, because although it would use a special memory block, it shares the overall address space with the heap, and there just isn't enough left after step 2 completes.
This is in C++ on CentOS 64bit using G++ 4.1.2.
We're writing a test application to load up the memory usage on a system by n Gigabytes. The idea being that the overall system load gets monitored through SNMP etc. So this is just a way of exercising the monitoring.
What we've seen however is that simply doing:
char* p = new char[1000000000];
doesn't affect the memory used as shown in either top or free -m
The memory allocation only seems to become "real" once the memory is written to:
memset(p, 'a', 1000000000); //shows an increase in mem usage of 1GB
But we have to write to all of the memory, simply writing to the first element does not show an increase in the used memory:
p[0] = 'a'; //does not show an increase of 1GB.
Is this normal? Has the memory actually been allocated fully? I'm not sure whether it's the tools we are using (top and free -m) that are displaying incorrect values or whether there is something clever going on in the compiler, the runtime and/or the kernel.
This behavior is seen even in a debug build with optimizations turned off.
It was my understanding that new[] allocated the memory immediately. Does the C++ runtime delay this actual allocation until later, when it is accessed? In that case, can an out-of-memory exception be deferred until well after the nominal allocation, to the point where the memory is actually accessed?
As it is it is not a problem for us, but it would be nice to know why this is occurring the way it is!
Cheers!
Edit:
I don't want to know how we should be using vectors, OO / C++ / the current way of doing things, etc. I just want to know why this is happening the way it is, rather than get suggestions for alternative ways of trying it.
When your library allocates memory from the OS, the OS will just reserve an address range in the process's virtual address space. There's no reason for the OS to actually provide this memory until you use it - as you demonstrated.
If you look at e.g. /proc/self/maps you'll see the address range. If you look at top's memory use you won't see it - you're not using it yet.
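To see this from inside the process, here is a small Linux-only sketch (it reads VmRSS from /proc/self/status; the 1 GB size mirrors the question):

#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

// Print the resident set size as reported by the kernel (Linux-specific).
static void printRss(const char *label)
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.rfind("VmRSS", 0) == 0)
            std::cout << label << ": " << line << "\n";
}

int main()
{
    printRss("start");

    char *p = new char[1000000000];   // reserves address space only
    printRss("after new[]");

    std::memset(p, 'a', 1000000000);  // touching the pages makes them resident
    printRss("after memset");

    delete[] p;
}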
Please look up overcommit. Linux by default doesn't reserve memory until it is accessed, and if you end up needing more memory than is available, you don't get an error; instead a random process is killed. You can control this behavior with /proc/sys/vm/*.
IMO, overcommit should be a per process setting, not a global one. And the default should be no overcommit.
About the second half of your question:
The language standard doesn't allow any delays in throwing a bad_alloc. That must happen as an alternative to new[] returning a pointer. It cannot happen later!
Some OSs might try to overcommit memory allocations, and fail later. That is not conforming to the C++ language standard.