#include <iostream>
#include <string>
#include <deque>
#include <vector>
#include <unistd.h>
using namespace std;

struct Node
{
    string str;
    vector<string> vec;
    Node() {}
    ~Node() {}
};

int main()
{
    deque<Node> deq;
    for (int i = 0; i < 100; ++i)
    {
        Node tmp;
        tmp.vec.resize(100000);
        deq.push_back(tmp);
    }
    while (!deq.empty())
    {
        deq.pop_front();
    }
    {
        deque<Node>().swap(deq);
    }
    cout << "released\n";
    sleep(80000000);
    return 0;
}
Watching with top, I found my program's memory usage was still about 61 MB. Why? And the memory is released correctly if there is a copy constructor in Node. I would like to know why, not how to make it correct.
gcc (GCC) 4.9.1, CentOS
Generally, new/delete and malloc/realloc/free obtain memory from the OS using sbrk() or an OS-specific equivalent, and divide the pages up however they like to satisfy the program's allocation requests. It's not worth the bother to try to release small pages back to the OS - there's too much extra overhead in tracking which pages are or are not still part of the pool, re-requesting them, etc. In low-memory situations, normal paging mechanisms will let long-unused memory pages be swapped out of physical RAM anyway.
FWIW, GNU libc's malloc et al. make an exception for particularly large requests so that they can be fully released to the OS / other programs to use before program termination; quoting from the NOTES section of the man page:
When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the
glibc malloc() implementation allocates the memory as a private anonymous
mapping using mmap(2). MMAP_THRESHOLD is 128 kB by default, but is
adjustable using mallopt(3). Allocations performed using mmap(2) are
unaffected by the RLIMIT_DATA resource limit (see getrlimit(2)).
If the container is a vector, you can use swap to release memory; if the container is a deque, you should use clear to release memory, like this:
int main()
{
    deque<Node> deq;
    for (int i = 0; i < 100; ++i)
    {
        Node tmp;
        tmp.vec.resize(100000);
        deq.push_back(tmp);
    }
    while (!deq.empty())
    {
        deq.pop_front();
    }
    deq.clear();
    // Alternatively, swap with a temporary: `deque<Node>().swap(deq);`
    // (swap with a temporary, not with a named local).
    cout << "released\n";
    sleep(80000000);
    return 0;
}
I am reading Effective Modern C++ (Scott Meyers) and trying out something from item 21. The book says a side effect of using std::make_shared is that memory cannot be freed until all shared_ptrs and weak_ptrs are gone (because the control block is allocated together with the memory).
I expected that this would mean that if I keep a cache around holding a bunch of weak_ptrs, no memory would ever be freed. I tried this using the code below, but as the shared_ptrs are removed from the vector, I can see using pmap that memory is actually being freed. Can anyone explain where I am going wrong? Or is my understanding wrong?
Note: the function loadWidget is not the same as in the book for the purpose of this experiment.
#include <iostream>
#include <memory>
#include <unordered_map>
#include <vector>
#include <thread>
#include <chrono>

class Widget {
public:
    Widget()
        : values(1024 * 1024, 3.14)
    { }
    std::vector<double> values;
};

std::shared_ptr<Widget> loadWidget(unsigned id) {
    return std::make_shared<Widget>();
}

std::unordered_map<unsigned, std::weak_ptr<Widget>> cache;

std::shared_ptr<Widget> fastLoadWidget(unsigned id) {
    auto objPtr = cache[id].lock();
    if (!objPtr) {
        objPtr = loadWidget(id);
        cache[id] = objPtr;
    }
    return objPtr;
}

int main() {
    std::vector<std::shared_ptr<Widget>> widgets;
    for (unsigned i = 0; i < 20; i++) {
        std::cout << "Adding widget " << i << std::endl;
        widgets.push_back(fastLoadWidget(i));
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    while (!widgets.empty()) {
        widgets.pop_back();
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}
It is true that when you use std::make_shared the storage for the new object and for the control block is allocated as a single block, so it is not released as long as there exists a std::weak_ptr to it. But, when the last std::shared_ptr is destroyed the object is nonetheless destroyed (its destructor runs and its members are destroyed). It's just the associated storage which remains allocated and unoccupied.
std::vector allocates storage for its elements dynamically. This storage is external to the std::vector object; it is not part of the object's memory representation. When you destroy a Widget you also destroy its std::vector member, and that member's destructor releases the dynamically allocated memory used to store its elements. The only memory that can't be released immediately is the control block and the storage for the Widget itself (which should be sizeof(Widget) bytes). That does not prevent the storage for the vector's elements from being released immediately.
I am taking a class on C++ for which I need to write a simple program that leaks memory on purpose. I have tried this by creating new char arrays and not deleting them, but this does not seem to work. Below is the complete code I have tried.
#include <iostream>
#include <cstring>

int main()
{
    int i = 1;
    while (i < 1000) {
        char *data = new char[100000000];
        *data = 15;
        i++;
    }
}
When I watch the memory usage of the program, it does not grow, so it does not seem to be leaking any memory; I just get a bad-allocation error.
I think the simplest case of a memory leak is dynamically creating an object and then immediately losing the reference to it. In this short example, you immediately lose the reference to the allocation you've created, causing the memory leak. Leaks in small, contrived programs like these are hard to appreciate, because as soon as a program exits the operating system reclaims all the memory it allocated.
The problem becomes serious when the program runs for long periods of time: the memory leak is exacerbated and computer performance is noticeably hampered.
Example:
#include <iostream>

// An object is allocated on the heap, then the function immediately exits,
// losing the only pointer to it. This is a memory leak.
void createObject()
{
    int* x = new int;
}

int prompt()
{
    int response;
    std::cout << "Run again? (1 = yes, 0 = no)\n";
    std::cin >> response;
    return response;
}

int main()
{
    int keepGoing = 1;
    while (keepGoing)
    {
        createObject();
        // Running the loop again and again exacerbates the memory leak.
        keepGoing = prompt();
    }
    return 0;
}
The correct way to retain the object reference in this contrived and useless example:
int* createObject()
{
    int* x = new int;
    return x;
}

int main()
{
    // A pointer to the object created inside the function is kept in this
    // scope, so we still have access to the allocation...
    int* a = createObject();
    // ...and can release it when we are done with it.
    delete a;
    return 0;
}
Hope this helps, good luck in your class!
If you put some delay in the loop, you will be able to see the memory grow.
You can use sleep or wait for input from the user.
As it is now, the memory inflates so fast that you run out of allocatable memory almost immediately.
This is not a classic test of a memory leak, either: a leak is tested for at the end of the program, by checking whether you released all the memory you allocated.
The following program creates objects in one loop and stores the references in a vector for later deletion.
I am seeing unusual behavior: even though the objects are deleted in the second loop, getrusage reports a resident memory figure higher than the one after object creation.
The execution environment is Linux kernel 3.2.0-49-generic.
#include <iostream>
#include <vector>
#include <stdio.h>
#include <mcheck.h>
#include <sys/time.h>
#include <sys/resource.h>
using namespace std;

void printUsage(string tag)
{
    struct rusage usage;
    getrusage(RUSAGE_SELF, &usage);
    printf("%s -- Max RSS - %ld\n", tag.c_str(), usage.ru_maxrss);
}

class MyObject
{
public:
    char array[1024 * 1024];
    MyObject() {}
    ~MyObject() {}
};

int main()
{
    printUsage("Starting");
    vector<MyObject *> *v = new vector<MyObject *>();
    for (int i = 0; i < 10000; i++)
    {
        MyObject *h = new MyObject();
        v->push_back(h);
        // With this uncommented, the max resident value stays the same.
        // delete h;
    }
    printUsage("After Object creation");
    for (size_t i = 0; i < v->size(); i++)
    {
        MyObject *h = v->at(i);
        delete h;
    }
    v->clear();
    delete v;
    printUsage("After Object deletion");
    return 0;
}
g++ test/test.cpp -Wall -O2 -g
Output
Starting -- Max RSS - 3060
After Object creation -- Max RSS - 41192
After Object deletion -- Max RSS - 41380
I'm not up on the specifics of getrusage, but from a quick Google it seems to report OS resources used; note in particular that ru_maxrss is the maximum resident set size, a high-water mark that never decreases during the life of the process. Beyond that, the C++ runtime library that manages the memory used by malloc/new will typically request a large block of memory from the OS when it needs it, satisfy malloc requests out of that block, and then hold onto the block even after all the allocations are freed, so it has some available to handle the next request without having to ask the OS again.
I have a class which has a field of type unordered_map. I create a single instance of this object in my application, which is wrapped in a shared_ptr. The object is very memory consuming and I want to delete it as soon as I'm done using it. However, resetting the pointer only frees a small part of the memory occupied by the object. How can I force the program to free all the memory occupied by the object?
The following mock program reproduces my problem. The for loops printing garbage are there only to give me enough time to observe the memory used with top. The destructor gets called just after reset(). Also, immediately after, the memory used drops from approx 2 GB to 1.5 GB.
#include <iostream>
#include <memory>
#include <unordered_map>
using namespace std;

struct A {
    ~A() {
        cerr << "Destructor" << endl;
    }
    unordered_map<int, int> index;
};

int main() {
    shared_ptr<A> a = make_shared<A>();
    for (size_t i = 0; i < 50000000; ++i) {
        a->index[2 * i] = i + 3;
    }
    // Do some random work.
    for (size_t i = 0; i < 3000000; ++i) {
        cout << "First" << endl;
    }
    a.reset();
    // More random work.
    for (size_t i = 0; i < 3000000; ++i) {
        cout << "Second" << endl;
    }
}
Compiler: g++ 4.6.3.
GCC's standard library has no "STL memory cache", in its default configuration (which almost everyone uses) std::allocator just calls new and delete, which just call malloc and free. The malloc implementation (which usually comes from the system's C library) decides whether to return memory to the OS. Unless you are on an embedded/constrained system with no virtual memory (or you've turned off over-committing) then you probably do not want to return it -- let the library do what it wants.
The OS doesn't need the memory back, it can allocate gigabytes of virtual memory for other applications without problems. Whenever people think they need to return memory it's usually because they don't understand how a modern OS handles virtual memory.
If you really want to force the C library to return memory to the OS, it might provide non-standard hooks to do so; e.g. for GNU libc you can call malloc_trim(0) to force the top-most chunk of free memory to be returned to the OS, but that will probably make your program slower the next time it needs to allocate more memory, because it will have to get it back from the OS. See https://stackoverflow.com/a/10945602/981959 (and the other answers there) for more details.
There's no guarantee that your application will free the memory back to the OS. It's still available for your application to use but the OS may not reclaim it for general use until your application exits.
Look at the following C++ code:
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

class Buf
{
public:
    Buf(size_t size)
    {
        _storage.reserve(size);
    }
    ~Buf()
    {
        vector<int> temp;
        _storage.swap(temp); // release memory
    }
    vector<int> _storage;
};

int main()
{
    int i = 0;
    while (++i < 10000)
    {
        Buf *buf = new Buf(100000);
        delete buf;
    }
    return 0;
}
I run it in debug mode (VS2008). When I set a breakpoint on the line
//main function
int i = 0;
I find that the process MyProgram.exe occupies about 300 KB of memory in Windows Task Manager. When I set a breakpoint on the line
return 0;
the process occupies about 700 KB.
My question is: why has the memory the program occupies increased? I think I have released the memory exactly. Why?
The standard memory allocator will not (usually) release memory back to the OS when you deallocate it. Instead it keeps it for subsequent allocations by your process.
Thus you don't see memory usage decrease in Task Manager even though you deallocated it.
The OS/debug environment might also employ optimization techniques: your release probably just returns the memory to a pool, and the actual release back to the OS probably occurs at program termination.