I have a class which has a field of type unordered_map. I create a single instance of this object in my application, which is wrapped in a shared_ptr. The object is very memory consuming and I want to delete it as soon as I'm done using it. However, resetting the pointer only frees a small part of the memory occupied by the object. How can I force the program to free all the memory occupied by the object?
The following mock program reproduces my problem. The for loops printing garbage are there only to give me enough time to observe the memory used with top. The destructor gets called just after reset(). Also, immediately after, the memory used drops from approx 2 GB to 1.5 GB.
#include <iostream>
#include <memory>
#include <unordered_map>

using namespace std;

struct A {
    ~A() {
        cerr << "Destructor" << endl;
    }

    unordered_map<int, int> index;
};

int main() {
    shared_ptr<A> a = make_shared<A>();
    for (size_t i = 0; i < 50000000; ++i) {
        a->index[2*i] = i + 3;
    }
    // Do some random work.
    for (size_t i = 0; i < 3000000; ++i) {
        cout << "First" << endl;
    }
    a.reset();
    // More random work.
    for (size_t i = 0; i < 3000000; ++i) {
        cout << "Second" << endl;
    }
}
Compiler: g++ 4.6.3.
GCC's standard library has no "STL memory cache". In its default configuration (which almost everyone uses), std::allocator just calls new and delete, which in turn call malloc and free. The malloc implementation (which usually comes from the system's C library) decides whether to return memory to the OS. Unless you are on an embedded/constrained system with no virtual memory (or you have turned off over-committing), you probably do not want to return it -- let the library do what it wants.
The OS doesn't need the memory back, it can allocate gigabytes of virtual memory for other applications without problems. Whenever people think they need to return memory it's usually because they don't understand how a modern OS handles virtual memory.
If you really want to force the C library to return memory to the OS, the C library might provide non-standard hooks to do so. E.g. for GNU libc you can call malloc_trim(0) to force the top-most chunk of free memory to be returned to the OS, but that will probably make your program slower the next time it needs to allocate more memory, because it will have to get it back from the OS. See https://stackoverflow.com/a/10945602/981959 (and the other answers there) for more details.
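For example, on a glibc-based Linux system you can try trimming explicitly right after releasing the object. This is a minimal sketch of that idea, assuming glibc (malloc_trim is declared in <malloc.h> and is not portable):

#include <malloc.h>   // glibc-specific: malloc_trim
#include <cstddef>
#include <memory>
#include <unordered_map>

struct A {
    std::unordered_map<int, int> index;
};

int main() {
    auto a = std::make_shared<A>();
    for (std::size_t i = 0; i < 50000000; ++i)
        a->index[2 * i] = i + 3;

    a.reset();       // destroys the map; the memory goes back to the allocator
    malloc_trim(0);  // ask glibc to hand free heap pages back to the OS
}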
There's no guarantee that your application will free the memory back to the OS. It's still available for your application to use but the OS may not reclaim it for general use until your application exits.
I have a code snippet like the one below. I create some dynamically allocated Something objects and then delete them.
The code prints wrong data, which I expect, but why does ->show not crash?
In what case/how would ->show cause a crash?
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
I am trying to understand why, after delete has freed up the memory location to be reused by something else, it still holds the information printed by ->show!
#include <iostream>
#include <vector>

class Something
{
public:
    Something(int i) : i(i)
    {
        std::cout << "+" << i << std::endl;
    }
    ~Something()
    {
        std::cout << "~" << i << std::endl;
    }
    void show()
    {
        std::cout << i << std::endl;
    }
private:
    int i;
};

int main()
{
    std::vector<Something *> somethings;

    Something *i = new Something(1);
    Something *ii = new Something(2);
    Something *iii = new Something(3);

    somethings.push_back(i);
    somethings.push_back(ii);
    somethings.push_back(iii);

    delete i;
    delete ii;
    delete iii;

    std::vector<Something *>::iterator n;
    for (n = somethings.begin(); n != somethings.end(); ++n)
    {
        (*n)->show(); // In what case would this line crash?
    }

    return 0;
}
The code prints wrong data, which I expect, but why does ->show not crash?
Why do you simultaneously expect the data to be wrong, but also that it would crash?
The behaviour of indirecting through an invalid pointer is undefined. It is not reasonable to expect the data to be correct, nor to expect it to be wrong, nor to expect the program to crash, nor to expect it not to crash.
In what case/how would ->show cause a crash?
There is no situation where the C++ language specifies that the program must crash. Crashing is a detail of the particular implementation of C++.
For example, a Linux system will typically force the process to crash due to "segmentation fault" if you attempt to write into a memory area that is marked read-only, or attempt to access an unmapped area of memory.
There is no direct way in standard C++ to create memory mappings: The language implementation takes care of mapping the memory for objects that you create.
Here is an example of a program that demonstrably crashes on a particular system:
int main() {
    int* i = nullptr;
    *i = 42;
}
But C++ does not guarantee that it crashes.
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
The behaviour is undefined. Anything is possible as far as the language is concerned.
Remember, a pointer stores a memory address. On a call to delete, the dynamic memory is deallocated, but the pointer itself still stores the old address. If we had nulled the pointer after the delete, then calling through it would most likely crash.
See this question: What happens to the pointer itself after delete?
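As a rough illustration of that difference, here is a minimal sketch reusing a trimmed-down Something; both commented-out calls are undefined behaviour, but on typical implementations the nulled pointer faults immediately while the stale pointer often appears to "work":

#include <iostream>

class Something {
public:
    Something(int i) : i(i) {}
    void show() { std::cout << i << std::endl; }
private:
    int i;
};

int main() {
    Something *p = new Something(1);
    delete p;       // the object is destroyed, but p still holds the stale address
    // p->show();   // UB: may print stale data, may crash, may do anything

    p = nullptr;    // forget the stale address
    // p->show();   // also UB, but typically faults immediately on most platforms
    return 0;
}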
I am taking a class on C++ for which I need to write a simple program that leaks memory on purpose. I have tried this by creating new char[] arrays and not deleting them, but this does not seem to work. Below is the complete code I have tried.
#include <iostream>
#include <cstring>

int main()
{
    int i = 1;
    while (i < 1000) {
        char *data = new char[100000000];
        *data = 15;
        i++;
    }
}
When I watch the memory usage of the program, it does not grow, so it is not leaking any memory. I just get a bad allocation error.
I think the simplest case of memory leakage is dynamically creating an object and then immediately losing the reference to it. In this short example, you immediately lose the reference to the memory you have allocated, causing the leak. Memory leaks in small, contrived programs like these are hard to appreciate, because as soon as the program exits, the operating system reclaims all the memory the program allocated.
The problem becomes serious when the program runs for long periods of time. The leak accumulates and computer performance is noticeably hampered.
Example:
#include <iostream>

// An object is allocated on the heap, then the function immediately exits,
// losing the only reference to the object. This is a memory leak.
void createObject()
{
    int* x = new int;
}

int prompt()
{
    int response;
    std::cout << "Run again?\n";
    std::cin >> response;
    return response;
}

int main()
{
    int keepRunning = 1; // 'continue' is a keyword and cannot be used as a variable name
    while (keepRunning)
    {
        createObject();
        // Running the loop again and again exacerbates the memory leak.
        keepRunning = prompt();
    }
    return 0;
}
Correct way to retain object reference in this contrived and useless example:
int* createObject()
{
    int* x = new int;
    return x;
}

int main()
{
    // A pointer to the object created in the function is kept in this scope,
    // so we still have access to it (and can eventually release it).
    int* a = createObject();
    delete a; // release the object once we are done with it
    return 0;
}
Hope this helps, good luck in your class!
If you put some delay in the loop, you will be able to see the memory grow.
You can use sleep or wait for input from the user.
As it is now, the memory inflates so fast that you run out of memory to allocate.
This is not a classic test of a memory leak.
A memory leak is tested at the end of the program, to check whether you released all the memory you allocated.
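A rough sketch of that idea: a smaller allocation plus a one-second pause per iteration makes the growth easy to watch in top (the sizes and timing here are arbitrary):

#include <chrono>
#include <thread>

int main()
{
    for (int i = 0; i < 1000; ++i) {
        char *data = new char[10 * 1024 * 1024]; // 10 MB per iteration, never deleted
        data[0] = 15;                            // touch the block so it is actually used
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}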
Deleting the same pointer twice causes harmful effects, such as crashing the program, and the programmer should try to avoid it, as it is not allowed.
But if somebody does do it anyway, how do we take care of it?
Since delete in C++ is a noexcept operator it will not throw any exceptions, and its return type is void, so how do we catch this kind of error?
Below is the code snippet
#include <iostream>
#include <stdexcept>
#include <string>

using namespace std;

class myException : public std::runtime_error
{
public:
    myException(std::string const& msg) :
        std::runtime_error(msg)
    {
        cout << "inside class \n";
    }
};

int main()
{
    int* set = new int[100];
    cout << "memory allocated \n";
    //use set[]
    delete [] set;
    cout << "After delete first \n";

    try {
        delete [] set; // double delete: undefined behaviour
        throw myException("Error while deleting data \n");
    }
    catch (std::exception &e)
    {
        cout << "exception \n";
    }
    catch (...)
    {
        cout << "generic catch \n";
    }
    cout << "After delete second \n";
}
In this case I tried to catch the exception, but without success.
Please provide your input on how to take care of this type of scenario.
Thanks in advance!
Given that the behaviour on a subsequent delete[] is undefined, there's nothing you can do, aside from writing
set = nullptr;
immediately after the first delete[]. This exploits the fact that deleting a null pointer is a no-op.
But really, that just encourages programmers to be sloppy.
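A minimal sketch of that pattern (well-defined, though as noted it mostly papers over the underlying bug):

#include <iostream>

int main()
{
    int* set = new int[100];
    delete [] set;
    set = nullptr;   // forget the stale address

    delete [] set;   // deleting a null pointer is a well-defined no-op
    std::cout << "no double delete occurred\n";
}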
Segmentation faults, bad memory accesses and bus errors cannot be caught as exceptions. Programmers need to manage their own memory correctly, as there is no garbage collection in C/C++.
But you are using C++, no? Why not make use of RAII?
Here is what you should strive to do:
Memory ownership - make it explicit by using std::unique_ptr or std::shared_ptr and family.
No explicit raw calls to new or delete. Use make_unique, make_shared or allocate_shared.
Use containers like std::vector or std::array instead of creating dynamic arrays or allocating arrays on the stack, respectively.
Run your code through valgrind (Memcheck) to make sure there are no memory-related issues in it.
If you are using shared_ptr, you can use a weak_ptr to refer to the object without incrementing the reference count. In that case, if the managed object has already been deleted, weak_ptr::lock() returns an empty shared_ptr, and constructing a shared_ptr directly from the expired weak_ptr throws a bad_weak_ptr exception. This is the only scenario I know of where an exception is thrown for you to catch when accessing an already-deleted object.
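A small sketch of that behaviour, using only the standard library:

#include <iostream>
#include <memory>

int main()
{
    std::weak_ptr<int> weak;
    {
        auto strong = std::make_shared<int>(42);
        weak = strong;                     // observe the object without owning it
    }                                      // strong goes out of scope, object deleted

    if (auto locked = weak.lock())         // expired: returns an empty shared_ptr, no throw
        std::cout << *locked << '\n';
    else
        std::cout << "object already gone\n";

    try {
        std::shared_ptr<int> again(weak);  // constructing from an expired weak_ptr throws
    }
    catch (const std::bad_weak_ptr& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}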
Code must undergo multiple levels of testing iterations, possibly with different sets of tools, before being committed.
There is a very important concept in C++ called RAII (Resource Acquisition Is Initialisation).
This concept encapsulates the idea that no object may exist unless it is fully serviceable and internally consistent, and that deleting the object will release any resources it was holding.
For this reason, when allocating memory we use smart pointers:
#include <memory>
#include <iostream>
#include <algorithm>
#include <iterator>
int main()
{
    using namespace std;

    // allocate an array into a smart pointer
    auto set = std::make_unique<int[]>(100);
    cout << "memory allocated \n";

    //use set[]
    for (int i = 0; i < 100; ++i) {
        set[i] = i * 2;
    }
    std::copy(&set[0], &set[100], std::ostream_iterator<int>(cout, ", "));
    cout << std::endl;

    // delete the set
    set.reset();
    cout << "After delete first \n";

    // delete the set again
    set.reset();
    cout << "After delete second \n";

    // set also deleted here through RAII
}
I'm adding another answer here because previous answers focus very strongly on manually managing that memory, while the correct answer is to avoid having to deal with that in the first place.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> set(100);
    std::cout << "memory allocated\n";
    //use set
}
This is it. This is enough. This gives you 100 integers to use as you like. They will be freed automatically when control flow leaves the function, whether through an exception, or a return, or by falling off the end of the function. There is no double delete; there isn't even a single delete, which is as it should be.
Also, I'm horrified to see suggestions in other answers for using signals to hide the effects of what is a broken program. If someone is enough of a beginner to not understand this rather basic stuff, PLEASE don't send them down that path.
#include <iostream>
#include <string>
#include <deque>
#include <vector>
#include <unistd.h>
using namespace std;
struct Node
{
    string str;
    vector<string> vec;
    Node(){};
    ~Node(){};
};

int main ()
{
    deque<Node> deq;
    for (int i = 0; i < 100; ++i)
    {
        Node tmp;
        tmp.vec.resize(100000);
        deq.push_back(tmp);
    }

    while (!deq.empty())
    {
        deq.pop_front();
    }

    {
        deque<Node>().swap(deq);
    }

    cout << "releas\n";
    sleep(80000000);
    return 0;
}
Using top, I found that my program's memory was still about 61M. Why? And it is fine if there is a copy constructor in Node. I would like to know why, not how to make it correct.
gcc (GCC) 4.9.1, CentOS
Generally, new/delete and malloc/realloc/free arrange for more memory from the OS using sbrk() or an OS-specific equivalent, and divide the pages up however they like to satisfy the program's allocation requests. It's not worth the bother to try to release small pages back to the OS - too much extra overhead in tracking which pages are / are not still part of the pool, re-requesting them, etc. In low-memory situations, normal caching mechanisms will allow long-unused memory pages to be swapped out of physical RAM anyway.
FWIW, GNU libC's malloc et al. makes an exception for particularly large requests so they can be fully released for the OS / other programs to use before program termination; quoting from the NOTES section here:
When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(2). MMAP_THRESHOLD is 128 kB by default, but is adjustable using mallopt(3). Allocations performed using mmap(2) are unaffected by the RLIMIT_DATA resource limit (see getrlimit(2)).
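For illustration, here is a hedged sketch of tuning that threshold with mallopt (glibc-specific; the 64 kB value is arbitrary, and whether this helps depends entirely on the allocation sizes your program actually makes):

#include <malloc.h>   // glibc-specific: mallopt, M_MMAP_THRESHOLD
#include <cstring>

int main()
{
    // Allocations larger than the threshold are served by mmap() and
    // returned to the OS with munmap() as soon as they are freed.
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    char* big = new char[1024 * 1024];   // 1 MB: above the threshold, so mmap()ed
    std::memset(big, 0, 1024 * 1024);
    delete [] big;                       // handed straight back to the OS via munmap()
    return 0;
}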
If the container is a vector, you can use the swap trick to release memory; if the container is a deque, you should use clear to release memory, like this:
int main ()
{
    deque<Node> deq;
    for (int i = 0; i < 100; ++i)
    {
        Node tmp;
        tmp.vec.resize(100000);
        deq.push_back(tmp);
    }

    while (!deq.empty())
    {
        deq.pop_front();
    }
    deq.clear();
    // Or, you should try to use `deque<Node>().swap(deq);`, not `local`.

    cout << "releas\n";
    sleep(80000000);
    return 0;
}
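For completeness, a small sketch of the swap idiom this answer mentions for vector (in C++11 and later, shrink_to_fit is the more direct, though technically non-binding, request):

#include <vector>

int main()
{
    std::vector<int> v(1000000, 42);

    v.clear();                    // size becomes 0, but the capacity is typically kept
    std::vector<int>(v).swap(v);  // classic swap trick: replace v with a tight copy
    // or, in C++11 and later:
    v.shrink_to_fit();            // non-binding request to drop unused capacity
    return 0;
}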
There is a memory leak that I see in Valgrind in my C++ program. I'm wondering where I should place delete statements to remove it. Thank you.
#include <iostream>

using namespace std;

void showFloatArray(float f1[10]) {
    for (int i = 0; i < 10; i++)
        cout << " " << f1[i];
    cout << endl;
}

float *getFloatArrayOne() {
    float *floatArray = new float[10];
    for (int i = 0; i < 10; i++)
        floatArray[i] = (float) i;
    return floatArray;
}

float *getFloatArrayTwo() {
    float myFloatArray[10];
    float *floatArray = myFloatArray;
    for (int i = 0; i < 10; i++)
        floatArray[i] = (float) i;
    return floatArray;
}

int main()
{
    float *f1 = getFloatArrayOne();
    float *f2 = getFloatArrayTwo();
    showFloatArray(f1);
    showFloatArray(f2);
}
Any time you create a pointer with new, you have to make sure you call delete on that pointer before the program ends.
For example:
int main()
{
    Object * obj = new Object;
    return 0; // leaky program!
}

int main()
{
    Object * obj = new Object;
    delete obj;
    return 0; // non-leaky program!
}
Quick re-write
It is better to have the caller make the allocations. The caller then knows it is responsible for allocating and de-allocating. If your function (e.g. a library) allocates, then the caller might be in doubt about whether objects must be de-allocated.
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

using namespace std;

// remove fixed size restriction on the function
void print_array(float* f, size_t size) {
    for (size_t i = 0; i < size; i++)
        cout << " " << f[i];
    cout << endl;
}

// pass in the array
float* getFloatArrayOne(float f[], size_t size) {
    for (size_t i = 0; i < size; i++)
        f[i] = (float)i;
    return f;
}

// pass in a ptr - caller responsible for allocating and de-allocating
float *getFloatArrayTwo(float* f, size_t size) {
    for (size_t i = 0; i < size; i++)
        *(f + i) = (float)i; // dereference pointer + offset method
    return f;
}

// You can use any algorithm you like to generate the numbers
struct myincrementer {
    myincrementer(float startval) : n_(startval) {}
    float operator()() { return ++n_; } // change to n_++ to start printing from the first value
    float n_;
};

int main()
{
    const int size = 10;
    float* floatArray = new float[size]();
    float *f1 = getFloatArrayOne(floatArray, size);

    float myFloatArray[size] = {0};
    float *f2 = getFloatArrayTwo(myFloatArray, size);

    print_array(f1, size);
    print_array(f2, size);

    delete [] floatArray; // note the [] form

    // More advanced approach
    vector<float> vec;
    myincrementer myi(0.0);
    generate_n(back_inserter(vec), 10, myi);
    std::copy(vec.begin(), vec.end(), std::ostream_iterator<float>(std::cout, " "));
}
'Modern' C++ typically avoids leaks by not using new and delete directly, instead delegating the management of resources like memory to objects that handle them internally.
However since this is homework it seems worthwhile to learn not just good practices which eliminate problems, but the technical details of what a leak is and the formal requirements to avoid a leak, independent of any particular method for effectively carrying out those requirements.
So here it is: A memory leak occurs when a pointer value is returned by a successful call to an allocation function and no subsequent call to the correct deallocation function is made using the value returned by the allocation function. That is, a leak occurs when you allocate memory and then fail to deallocate it.
Allocations by malloc() must be deallocated with free(). Allocations by new must be deallocated with delete. Allocations by new[] must be deallocated with delete[].
int *x = malloc(sizeof(int)); // C code
if (x) {
    // allocation succeeded, you can use the resource and you should free() it
    // ... use
    free(x);
}

int *y = new int;
delete y;

int *z = new int[10];
delete [] z;
In Practice
So fixing or avoiding memory leaks requires 'merely' that your program call the deallocation function for every successful allocation. The challenge however, is that this is difficult to do in an arbitrary or ad-hoc manner. In order to avoid leaks in practice you need to establish patterns of allocation and deallocation that can be easily managed and verified.
So here are some pointers to get you started on learning about the practicalities of resource management:
The basic practice for managing resource across many languages is to define "ownership semantics" for specific resources. You define rules for determining what part of the program is responsible for any particular allocated resource, and rules for how responsibility for a particular resource may be handed off from one part of the program to another.
Typically ownership semantics are defined such that the part of a program that allocates a resource is responsible for it. That may seem obvious, but there are alternatives. E.g. a program could designate a single entity that takes responsibility for cleaning up everything, and then the rest of the program just allocates at will and has nothing to do with clean-up. But more commonly whatever allocates a resource takes responsibility for it.
For example a function that allocates some dynamic memory to perform its task also frees that memory when its done:
void foo(int n) {
    int *arr = malloc(n * sizeof(int));
    // ...
    free(arr);
}
Another way to 'take responsibility' for an allocated resource is to be explicit about the requirements for resource management when resources are passed off. For example, a function which needs to allocate memory and pass that memory back to the caller may specify: "callers of foo() must call free_foo(foo_results) when the foo results are no longer needed."
foo_t *foo() {
    foo_t *f = malloc(sizeof(foo_t));
    // ...
    return f;
}

void free_foo(foo_t *f) {
    free(f);
}
Exceptions
For correct resource management whatever rules of ownership semantics have been designed must be followed in all circumstances. There's one language feature supported by C++ that has historically given some people trouble, making them think they'd correctly handled resource management responsibilities when in fact they hadn't. This feature is exceptions.
I won't go into details about exceptions, but it suffices to say that they are the reason that code such as:
doSomething();
cleanup();
is incorrect. And once you learn the idiomatic C++ way to manage resources it should be absolutely obvious that the above is wrong, without you even needing to know what doSomething() does. (One common criticism of exceptions is that they require you to know if doSomething() might throw an exception in order to know how to do the cleanup, which could require manually examining a huge amount of code. But since one can do the cleanup correctly without knowing if doSomething() throws, that criticism is incorrect.)
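To make the contrast concrete, here is a minimal sketch (doSomething and the cleanup action are placeholders, not code from the question) in which the cleanup is tied to an object's destructor, so it runs even when doSomething() throws:

#include <iostream>
#include <stdexcept>

// RAII guard: whatever cleanup is needed happens in the destructor.
struct Cleanup {
    ~Cleanup() { std::cout << "cleanup ran\n"; }
};

void doSomething() {                   // placeholder; may or may not throw
    throw std::runtime_error("oops");
}

int main() {
    try {
        Cleanup guard;                 // cleanup is now guaranteed for this scope
        doSomething();
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    // "cleanup ran" was printed before the exception left the scope
    return 0;
}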
C++
In C++ a specific practice for managing resources has been developed, called RAII, for Resource Acquisition Is Initialization. It's reliable and easy to use, and correctly handles circumstances such as exceptions. Under RAII a resource is represented as an object, and the correct ownership semantics are encoded into the object's special functions: its destructor, copy/move constructors, and copy and move assignment operators.
Thus you acquire a resource by initializing an object of the right type and you access the resource through that object. If the resource can be copied or moved then you can copy or move the object. If the resource is fundamentally not copyable or moveable then the object is non-copyable or non-moveable, and trying to copy or move it will produce a compiler error.
Some resource managing, RAII types in the C++ standard library are:
std::array: a template class that manages a static, in-place memory buffer, presented as an array of objects
std::vector: a template class that manages dynamic memory, presented as a resizable array of objects.
std::string: a class (a specialization of the std::basic_string template) that manages static and/or dynamic memory, presented as a resizable array of char.
std::shared_ptr: a template class that implements reference counting ownership semantics. By default the resource is a dynamically allocated object, but this can be configured.
std::unique_ptr: a template class that implements unique ownership semantics. By default the resource is a dynamically allocated object or array, but this can be configured (see the sketch after this list).
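As an illustration of "this can be configured", here is a hedged sketch of std::unique_ptr owning a non-memory resource (a C FILE handle) via a custom deleter; the file name is just an example:

#include <cstdio>
#include <memory>

int main() {
    // unique_ptr owns the FILE*; std::fclose is supplied as the deleter,
    // so the file is closed however this scope is exited.
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(
        std::fopen("example.txt", "w"), &std::fclose);

    if (file)
        std::fputs("hello\n", file.get());
    return 0;
}   // fclose runs here automatically (if the open succeeded)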
For more info on resource management in C++ you can visit http://exceptionsafecode.com/
You should delete f1 before the main function terminates, using the delete [] form since it was allocated with new []. The first array is allocated dynamically on the heap and remains allocated throughout execution until it is deleted. The second one is declared as a local array (on the stack), so it is deallocated automatically when getFloatArrayTwo() returns; deleting it again may result in a runtime error. After showFloatArray(f2); you should put delete [] f1; and the leak should disappear.
Hope this helps.