Memory usage isn't decreasing when using free() - C++

Somehow this call to free() does not seem to be working. I ran this application on Windows and watched its memory usage in Task Manager, but saw no reduction after the call to free().
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int i = 0;
    int *ptr = (int *) malloc(sizeof(int) * 1000);
    for (i = 0; i < 1000; i++)
    {
        ptr[i] = 0;
    }
    free(ptr); // After this call, the program's memory usage doesn't decrease
    system("PAUSE");
    return 0;
}

Typical C implementations do not return freed memory to the operating system. It remains available for reuse by the same program, but not by others.

You cannot assume that the memory will be returned to the OS immediately after the free. CRT implementations generally keep such blocks around as an optimization, so that subsequent allocation requests can be satisfied faster without going back to the operating system.

Note that Task Manager shows the memory "borrowed" by the C runtime from the system. Not every malloc results in a request to the operating system, and likewise not every free releases memory back to the system: the runtime usually allocates memory in larger chunks and parcels them out across several malloc calls.
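If you really want to nudge the allocator into handing pages back, some CRTs expose a call for that. Below is a minimal sketch assuming the Microsoft CRT, where _heapmin() exists (glibc offers malloc_trim(0) as a rough counterpart); even then, only pages the heap can spare are released, so the Task Manager figure may still not drop:

#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    int *ptr = (int *) malloc(sizeof(int) * 1000);
    if (ptr == NULL)
        return 1;
    /* ... use the buffer ... */
    free(ptr);    /* returns the block to the CRT heap, not necessarily to the OS */
    _heapmin();   /* MSVC CRT: ask the heap to release unused pages back to the OS */
    return 0;
}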

Linux promises more memory than it can give [duplicate]

Consider the following little program running on Linux:
#include <iostream>
#include <unistd.h>
#include <cstring>

int main() {
    const size_t array_size = 10ull * 1000 * 1000 * 1000;
    const size_t number_of_arrays = 20;
    char* large_arrays[number_of_arrays];
    // allocate more memory than the system can give
    for (size_t i = 0; i < number_of_arrays; i++)
        large_arrays[i] = new char[array_size];
    // the amount of free memory doesn't actually change yet
    sleep(10);
    // write to that memory, so it is actually used
    for (size_t i = 0; i < number_of_arrays; i++)
        memset(large_arrays[i], 0, array_size);
    sleep(10);
    for (size_t i = 0; i < number_of_arrays; i++)
        delete [] large_arrays[i];
    return 0;
}
It allocates a lot of memory, more than the system can provide. Yet if I monitor memory usage with top, the amount of free memory does not actually decrease. The program waits a bit, then starts writing to the allocated memory, and only then does the amount of available free memory drop... until the system becomes unresponsive and the program is killed by the OOM killer.
My questions are:
Why does Linux promise to allocate more memory than it can actually provide? Shouldn't new[] throw a std::bad_alloc at some point?
How can I make sure that Linux actually takes a piece of memory without my having to write to it? I am writing some benchmarks where I would like to allocate lots of memory fast, but at the same time I need to stay below a certain memory limit.
Is it possible to monitor the amount of this "promised" memory?
The kernel version is 3.10.0-514.21.1.el7.x86_64. Maybe it behaves differently on newer versions?
Why does Linux promise to allocate more memory than it can actually provide?
Because that is how your system has been configured. You can change the behaviour with the sysctl vm.overcommit_memory (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = do not overcommit beyond the configured limit).
Shouldn't new[] throw a std::bad_alloc at some point?
Not if the system overcommits the memory.
How can I make sure that Linux actually takes a piece of memory without having to write to it?
You can't as far as I know. Linux maps memory upon page fault when unmapped memory is accessed.
Is it possible to monitor the amount of this "promised" memory?
I think the "virtual" size of the process memory is what you're looking for.
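To make the monitoring part concrete, here is a small sketch (assuming Linux with procfs) that reads the process's own VmSize and VmRSS from /proc/self/status; VmSize includes the overcommitted ("promised") pages, while VmRSS counts only the pages actually backed by physical memory:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        // VmSize = total virtual size (promised), VmRSS = resident right now
        if (line.compare(0, 7, "VmSize:") == 0 || line.compare(0, 6, "VmRSS:") == 0)
            std::cout << line << '\n';
    }
    return 0;
}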

Failure to malloc a big block of memory after many malloc/free of small blocks

Here is the code.
First I malloc and free a big block of memory; then I malloc many small blocks until it runs out of memory, and free ALL of those small blocks.
After that, I try to malloc a big block again.
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char **argv)
{
    static const int K = 1024;
    static const int M = 1024 * K;
    static const int G = 1024 * M;

    static const int BIG_MALLOC_SIZE = 1 * G;
    static const int SMALL_MALLOC_SIZE = 3 * K;
    static const int SMALL_MALLOC_TIMES = 1 * M;

    void **small_malloc = (void **) malloc(SMALL_MALLOC_TIMES * sizeof(void *));

    void *big_malloc = malloc(BIG_MALLOC_SIZE);
    printf("big malloc first time %s\n", (big_malloc == NULL) ? "failed" : "succeeded");
    free(big_malloc);

    for (int i = 0; i != SMALL_MALLOC_TIMES; ++i)
    {
        small_malloc[i] = malloc(SMALL_MALLOC_SIZE);
        if (small_malloc[i] == NULL)
        {
            printf("small malloc failed at %d\n", i);
            break;
        }
    }

    for (int i = 0; i != SMALL_MALLOC_TIMES && small_malloc[i] != NULL; ++i)
    {
        free(small_malloc[i]);
    }

    big_malloc = malloc(BIG_MALLOC_SIZE);
    printf("big malloc second time %s\n", (big_malloc == NULL) ? "failed" : "succeeded");
    free(big_malloc);

    return 0;
}
Here is the result:
big malloc first time succeeded
small malloc failed at 684912
big malloc second time failed
It looks like memory fragmentation.
I know memory fragmentation happens when there are many small free gaps in memory but no single free region big enough for a large malloc.
But I've already freed EVERYTHING I malloc'd, so the memory should be empty again.
Why can't I malloc the big block the second time?
I am using Visual Studio 2010 on Windows 7 and building a 32-bit program.
The answer, sadly, is still fragmentation.
Your initial large allocation ends up tracked by one allocation block; however, when you start allocating large numbers of 3 KB blocks, your heap gets sliced into chunks.
Even when you free the memory, small pieces of the block remain allocated within the process's address space. You can use a tool like Sysinternals VMMap to see these allocations visually.
It looks like 16M blocks are used by the allocator, and once these blocks are freed up they never get returned to the free pool (i.e. the blocks remain allocated).
As a result you don't have enough contiguous memory to allocate the 1GB block the second time.
Even though I know only a little about this, I found the following thread, Why does malloc not work sometimes?, which covers a topic similar to yours.
It contains the following links:
http://www.eskimo.com/~scs/cclass/int/sx7.html (Pointer Allocation Strategies)
http://www.gidforums.com/t-9340.html (reasons why malloc fails? )
The issue is likely that even if you free every allocation, malloc does not return all the memory to the operating system.
When your program requested the numerous smaller allocations, malloc had to increase the size of the "arena" from which it allocates memory.
There is no guarantee that if you free all the memory, the arena will shrink to the original size. It's possible that the arena is still there, and all the blocks have been put into a free list (perhaps coalesced into larger blocks).
The presence of this lingering arena in your address space may be making it impossible to satisfy the large allocation request.
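If you want to see that fragmentation without a GUI tool, a rough sketch along these lines (Windows-only, using VirtualQuery; meant as an illustration, not a polished diagnostic) walks the process address space and reports the largest contiguous free region. In a 32-bit build, if that region is smaller than 1 GB, the second big malloc cannot succeed no matter how much total memory is free:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = 0;
    SIZE_T largest_free = 0;
    /* Walk the user-mode address space region by region. */
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) != 0) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest_free)
            largest_free = mbi.RegionSize;
        addr = (unsigned char *) mbi.BaseAddress + mbi.RegionSize;
    }
    printf("largest contiguous free region: %Iu bytes\n", largest_free); /* %Iu: MSVC size_t format */
    return 0;
}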

C++ delete does not free all memory (Windows)

I need help understanding problems with my memory allocation and deallocation on Windows. I'm using VS11 compiler (VS2012 IDE) with latest update at the moment (Update 3 RC).
The problem is: I dynamically allocate some memory for a 2-dimensional array and immediately deallocate it. Still, my process memory usage is 0.3 MB before the allocation, 259.6 MB right after it (expected, since 32768 arrays of 64-bit ints (8 bytes) are allocated), and 4106.8 MB after filling the array with values, but after deallocation it does not drop back to the expected 0.3 MB; it is stuck at 12.7 MB. Since I deallocate all the heap memory I took, I expected the usage to return to 0.3 MB.
This is the code in C++ I'm using:
#include <iostream>
#include <cstdio>

#define SIZE 32768

int main( int argc, char* argv[] ) {
    std::getchar();

    int ** p_p_dynamic2d = new int*[SIZE];
    for(int i=0; i<SIZE; i++){
        p_p_dynamic2d[i] = new int[SIZE];
    }

    std::getchar();

    for(int i=0; i<SIZE; i++){
        for(int j=0; j<SIZE; j++){
            p_p_dynamic2d[i][j] = j+i;
        }
    }

    std::getchar();

    for(int i=0; i<SIZE; i++) {
        delete [] p_p_dynamic2d[i];
    }
    delete [] p_p_dynamic2d;

    std::getchar();
    return 0;
}
I'm sure this is a duplicate, but I'll answer it anyway:
If you are looking at the Task Manager figure, it shows the size of the process. If there is no "pressure" (your system has plenty of memory available and no process is being starved), there is no point in shrinking a process's virtual memory usage. It is not unusual for a process to grow, shrink, grow and shrink in a cyclical pattern as it allocates memory while processing data, releases it at the end of one processing cycle, and then allocates again for the next cycle. If the OS were to "reclaim" those pages only to hand them back to your process shortly afterwards, that would be wasted work: assigning and unassigning pages to a particular process is not entirely trivial, especially since pages whose previous owner is unknown have to be "cleaned" (filled with zero or some other constant) so the new owner cannot go "fishing for old data", such as a password left behind in memory.
Even if the pages remain in the ownership of this process but are unused, the physical RAM behind them can be used by another process. So it is not a big deal if the pages are not released for some time.
Further, in debug mode the C++ runtime fills every block that goes through delete with a "this memory has been deleted" pattern, to help identify use-after-free bugs. So if your application runs in debug mode, don't expect freed memory ever to be released to the OS. It will be reused, though; if you run your code three times over, the process won't grow to three times the size.
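If you want numbers that are better defined than the Task Manager column, a small sketch like the following (Windows, using GetProcessMemoryInfo from psapi; the helper name print_memory_usage and the toy allocation are just for illustration) prints the working set and the committed, pagefile-backed size. Call it around the allocation and the deallocation to see which of the two figures actually moves:

#include <windows.h>
#include <psapi.h>   /* link with psapi.lib */
#include <stdio.h>

static void print_memory_usage(const char *label)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, (DWORD) sizeof(pmc))) {
        printf("%s: working set %Iu KB, committed %Iu KB\n",
               label, pmc.WorkingSetSize / 1024, pmc.PagefileUsage / 1024);
    }
}

int main(void)
{
    print_memory_usage("before allocation");
    int **rows = new int*[1000];
    for (int i = 0; i < 1000; ++i)
        rows[i] = new int[1000];
    print_memory_usage("after allocation");
    for (int i = 0; i < 1000; ++i)
        delete [] rows[i];
    delete [] rows;
    print_memory_usage("after deallocation");
    return 0;
}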

Very simple program passes the VS2010 C++ memory leak checker, but still uses more memory at program end after destroying all objects?

I've been having trouble with a memory leak in a large-scale project I've been working on, but the project has no leaks according to the VS2010 memory checker (and I've checked everything extensively).
I decided to write a simple test program to see if the leak would occur on a smaller scale.
#include <crtdbg.h>
#include <tchar.h>
#include <iostream>
#include <string>
#include <vector>

struct TestStruct
{
    std::string x[100];
};

class TestClass
{
public:
    std::vector<TestStruct*> testA;
    //TestStruct** testA;
    TestStruct xxx[100];

    TestClass()
    {
        testA.resize(100, NULL);
        //testA = new TestStruct*[100];
        for(unsigned int a = 0; a < 100; ++a)
        {
            testA[a] = new TestStruct;
        }
    }

    ~TestClass()
    {
        for(unsigned int a = 0; a < 100; ++a)
        {
            delete testA[a];
        }
        //delete [] testA;
        testA.clear();
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    char inp;
    std::cin >> inp;
    {
        TestClass ttt[2];
        TestClass* bbbb = new TestClass[2];
        std::cin >> inp;
        delete [] bbbb;
    }
    std::cin >> inp;
    std::cin >> inp;
    return 0;
}
With this code, the program starts at about 1 MB of memory, goes up to more than 8 MB, then at the end drops back down to only 1.5 MB. Where does the additional 0.5 MB go? I am having a similar problem with a particle system, but on the scale of hundreds of megabytes.
I cannot for the life of me figure out what is wrong.
As an aside, using the raw array (which I commented out) greatly reduces the wasted memory, but does not eliminate it. I would expect the memory usage at the last cin to be the same as at the first.
I am using the task manager to monitor memory usage.
Thanks.
"I cannot for the life of me figure out what is wrong."
Probably nothing.
"[Program] still uses more memory at program end after destroying all objects."
You should not really care about memory usage at program end. Any modern operating system takes care of "freeing" all memory associated with a process when the process ends. (Technically speaking, the address space of the process is simply released.)
Freeing memory at program end can actually slow down the termination of your program, since it unnecessarily touches memory pages that may even have been paged out to swap.
That additional 0.5MB probably remains at your allocator (malloc/free, new/delete, std::allocator). These allocators usually work in a way that they request memory from the operating system when necessary, and give memory back the OS when convenient. Fragmentation could be one of the reasons why the allocator has to hold more memory than strictly required at a moment in time. It is also usually faster to keep some memory in reserve, since requesting memory from the operating system is slow.
"I am using the task manager to monitor memory usage."
Measuring memory usage is in fact more involved than watching a single number, and it requires a good understanding of virtual memory and of the memory management between a process and the operating system. Unfortunately I cannot recommend any good tools for Windows.
Overall, I think there is no issue with your simple program.
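One way to convince yourself that nothing is actually leaking, independently of Task Manager, is to diff CRT debug-heap snapshots around the block under test. A sketch, assuming MSVC and a debug build (the _CrtMem* calls compile away in release builds), which reports live heap blocks rather than pages the allocator merely keeps in reserve:

#include <crtdbg.h>
#include <stdio.h>

int main(void)
{
    _CrtMemState before, after, diff;
    _CrtMemCheckpoint(&before);
    {
        // ... allocate and free the objects under test here ...
    }
    _CrtMemCheckpoint(&after);
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);   // some heap blocks are still alive
    else
        printf("no live heap blocks remain between the two checkpoints\n");
    return 0;
}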