My question is about the absolute scope of memory allocated on the heap. Say you have a simple program like
class Simple
{
private:
    int *nums;
public:
    Simple()
    {
        nums = new int[100];
    }
    ~Simple()
    {
        delete [] nums;
    }
};

int main()
{
    Simple foo;
    Simple *bar = new Simple;
}
Obviously foo falls out of scope at the end of main and its destructor is called, whereas bar will not call its destructor unless delete is called on it. So the Simple object that bar points to, as well as the nums array, will be lost in the heap. While this is obviously bad practice, does it actually matter since the program ends immediately after? Am I correct in my understanding that the OS will free all heap memory that it allocated to this program once it ends? Are the effects of my bad decisions limited to the time it runs?
Any modern OS will reclaim all the memory allocated by any process after it terminates.
Each process has its own virtual address space in all common operating systems nowadays, so it's easy for the OS to reclaim all of its memory.
Needless to say, it's bad practice to rely on the OS for that.
It essentially means such code can't be used in a program that runs for a long while.
Also, in real-world applications destructors may do far more than just deallocate memory.
A network client may send a termination message, a database-related object may commit transactions, and a file-wrapping object may write some closing data to its file.
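For example, here is a minimal sketch with a hypothetical Logger class (the class name and file format are illustrative, not from the question) whose destructor does real work beyond freeing memory:
#include <cstdio>

class Logger {
    std::FILE* f;
public:
    explicit Logger(const char* path) : f(std::fopen(path, "w")) {}
    ~Logger() {
        if (f) {
            std::fputs("-- end of log --\n", f); // closing record: never written
            std::fclose(f);                      // if the destructor is skipped
        }
    }
    Logger(const Logger&) = delete;              // not copyable: f is owned
    Logger& operator=(const Logger&) = delete;
};
If the process simply exits without running this destructor, the OS will close the file handle, but the closing record is lost.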
In other words: don't let your memory leak.
Related
Is there an easy way of freeing all the memory when I am exiting due to an error in the program? I don't want to free every single allocation, as it's hard to keep track of all of them. And how do experienced C++ developers approach this problem?
RAII is a technique where all resources are owned by objects that have destructors, except objects that exist directly in automatic (stack) and static/global storage.
When an exception is thrown and eventually caught, C++ guarantees your program will unwind the stack, destroying each object on the stack in turn.
When you exit main, C++ guarantees that objects of static/global storage duration get destroyed as well.
Done properly, every resource you allocated is cleaned up.
This requires some discipline, and is often worth it.
Now, you can rely on the fact that modern OSes recycle the memory and file handles that your program acquires while running, and just exit your program.
This is a lot easier. But it has serious limitations.
Not every program has resources that are cleaned up by the OS. If you are negotiating with a server for some resource, your OS cleanup won't work. If you create some kinds of named pipes, your OS cleanup won't work. If you make some kinds of temporary files, your OS cleanup won't work.
In addition, exiting out of part of a program is a lot like exiting the program as a whole. So proper RAII cleanup in each part results in entire programs that clean themselves up when they shut down.
RAII is "resource allocation is initialization". The simplest case is a std::unique_ptr<T>, where you do this:
std::unique_ptr<int> pint = std::make_unique<int>(3);
now pint has a pointer to the allocated int whose value is 3. Its ownership can be moved to another unique_ptr (or even a shared_ptr), and when the smart pointer that owns the memory goes out of scope, the memory is recycled.
This could happen when the object containing pint is destroyed, or when pint goes out of scope (it being an automatic storage variable, i.e. on the stack), or when an exception is thrown past the destruction of pint.
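For instance, a minimal sketch of the ownership transfers mentioned above:
#include <memory>

int main() {
    std::unique_ptr<int> pint = std::make_unique<int>(3);
    std::unique_ptr<int> other = std::move(pint);   // pint is now empty
    std::shared_ptr<int> shared = std::move(other); // a shared_ptr takes over
}   // shared goes out of scope; the int is deleted exactly once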
Fancier versions of this require writing your own destructors.
One good tool for upgrading C-style resource management to C++-style RAII is a scope_guard, which runs arbitrary code on destruction. (Be sure the destruction code cannot throw.)
#include <functional>  // for std::function (used by storable_scope_guard below)
#include <utility>     // for std::move

template<class F>
struct scope_guard {
    F f;
    bool bDoIt = true;

    scope_guard( F in ):f(std::move(in)) {}
    ~scope_guard() { run_now(); }        // run the cleanup on destruction

    void skip() { bDoIt = false; }       // cancel the cleanup

    bool run_now() {                     // run the cleanup early, at most once
        if (!bDoIt) return false;
        f();
        bDoIt = false;
        return true;
    }

    scope_guard(scope_guard const&)=delete;
    scope_guard(scope_guard && o):f(std::move(o.f)), bDoIt(o.bDoIt) {
        o.bDoIt = false;
    }
};
using storable_scope_guard = scope_guard<std::function<void()>>;
then you can do this:
auto atexit0 = scope_guard{ []{ /* recycle resources you just allocated */ } };
in your function body right after you allocate some resources that need cleaning up (as opposed to the C-style "do it manually at the end of the function, possibly with a bunch of gotos").
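For example, a minimal sketch assuming a C-style fopen/fclose pair (the function name and file are illustrative):
#include <cstdio>

void read_header(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    auto close_f = scope_guard{ [f]{ std::fclose(f); } }; // runs on every exit path

    char buf[64];
    if (std::fread(buf, 1, sizeof buf, f) != sizeof buf)
        return;  // early return: the guard still closes the file
    // ... use buf ...
}                // normal exit: the guard closes the file here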
Suppose I have a function that uses new. Do I need to set aside some emergency memory in case new fails? Something like:
static char* emerg_mem = new char[EMERG_MEM_SZ];
FooElement* Foo::createElement()
{
    try
    {
        return new FooElement();
    }
    catch (const std::bad_alloc&)  // catch by const reference, not by value
    {
        delete[] emerg_mem;        // release the reserve so cleanup can proceed
        emerg_mem = nullptr;
        return nullptr;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to argue some reason why your destructors would need to allocate dynamic memory in the first place. Such requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, a graceful exit is often possible without allocating any dynamic memory. Secondly, a program running under the protection of an operating system doesn't necessarily need to terminate gracefully in a situation as dire as running out of memory.
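For completeness: if you do want a reserve-and-release scheme, the standard hook for it is std::set_new_handler rather than a try/catch at every allocation site. A minimal sketch (the reserve size is arbitrary):
#include <cstdlib>
#include <new>

namespace {
    char* emergency = new char[64 * 1024]; // reserve, released on first failure

    void out_of_memory() {
        if (emergency) {
            delete[] emergency;  // free the reserve; operator new then retries
            emergency = nullptr;
        } else {
            std::abort();        // reserve already spent; give up
        }
    }
}

int main() {
    std::set_new_handler(out_of_memory);
    // ... rest of the program ...
}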
P.S. Some systems (Linux in particular, given certain configuration) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that point the process (or some other process) is killed to free some. On such a system there is no way in C++ to recover from lack of memory.
I would say no.
When your application runs out of memory and throws an exception, the stack starts to unwind (destroying objects and releasing memory as it goes). As a general rule, destructors should not be allocating dynamic memory; rather, they should be releasing it.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before it actually throws an out-of-memory exception (as the OS thrashes trying to find you that elusive slot).
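For instance, a sketch assuming the allocation-heavy work is an isolated task whose result can be discarded (the function name and sizes are hypothetical):
#include <new>
#include <vector>

bool try_big_task() {
    try {
        std::vector<double> scratch(500'000'000); // may throw std::bad_alloc
        // ... compute with scratch ...
        return true;
    } catch (const std::bad_alloc&) {
        // Unwinding has already released scratch's memory (RAII),
        // so we can report failure and carry on.
        return false;
    }
}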
When I watch my program with the top utility on Linux, I can't see the effect of free.
My expectation is:
1. free the map and the list;
2. the memory usage I see in top (or /proc/meminfo) gets smaller than before;
3. sleep starts;
4. the program exits.
But the memory usage only gets smaller when the program ends.
Would you explain the logic of the free function?
Below is my code.
for (mapIter = bufMap->begin(); mapIter != bufMap->end(); mapIter++)
{
    list<buff> *buffList = mapIter->second;
    list<buff>::iterator listIter;
    for (listIter = buffList->begin(); listIter != buffList->end(); listIter++)
    {
        free(listIter->argu1);
        free(listIter->argu2);
        free(listIter->argu3);
    }
    delete buffList;
}
delete bufMap;

printf("Free Complete!\n");
sleep(10);
printf("endend\n");
Thank you.
Memory is allocated onto a heap.
When you request some memory in your program (with new or malloc(), etc.), your program requests it from its heap, which in turn requests it from the operating system{1}. Since this is an expensive operation, the heap gets a chunk of memory from the OS, not just the amount you asked for. The memory manager keeps everything it gets in the heap, returning to you only the perhaps small amount you asked for. When you free() or delete this memory, it simply gets returned to the heap, not to the OS.
It's absolutely normal for that memory to not be returned to the operating system until your program exits, as you may request further memory later on.
If your program design relies on this memory being recycled, that may be achievable by running the memory-hungry work in separate copies of your program (created by fork()ing) which run and exit; a sketch follows the footnote.
{1} The heap is probably non-empty on program start, but assuming it's not illustrates my point.
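A sketch of that fork pattern (POSIX-specific; run_isolated and work are illustrative names):
#include <sys/wait.h>
#include <unistd.h>

void run_isolated(void (*work)()) {
    pid_t pid = fork();
    if (pid == 0) {                // child: all heap growth stays here
        work();
        _exit(0);                  // child exits; the OS reclaims its heap
    }
    if (pid > 0)
        waitpid(pid, nullptr, 0);  // parent's memory usage is unaffected
}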
The heap typically uses operating system functions to manage its memory. The heap's size may be fixed when the program is created, or it may be allowed to grow. However, the heap manager does not necessarily return memory to the operating system when the free function is called. The deallocated memory is simply made available for subsequent use by the application. Thus, when a program allocates and then frees up memory, the deallocation is not normally reflected in the application's memory usage as seen from the operating system's perspective.
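That said, on glibc-based Linux systems (as in the question) you can explicitly ask the allocator to return free memory to the OS; a minimal, glibc-specific sketch:
#include <cstdlib>
#include <malloc.h>  // glibc-specific, for malloc_trim

int main() {
    void* p = std::malloc(64 * 1024 * 1024);
    std::free(p);     // returned to the heap, not necessarily to the OS
    malloc_trim(0);   // ask glibc to release free heap memory back to the OS
}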
Consider this code:
class Foo;
std::queue<Foo*> q;
// allocate and add objects to the queue
for (int i=0; i<100000; i++)
{
Foo* f = new Foo();
q.push(f);
}
// remove objects from queue and free them
while (!q.empty())
{
Foo* f2 = q.front();
q.pop();
delete f2;
}
By single-stepping I can see the Foo destructor getting called as each object is deleted, so I would expect the process memory usage to drop as each delete happens - but it doesn't. In my application the queue is used in producer/consumer threads and the memory usage just keeps growing.
The only way I have found to recover the memory is to swap the queue for an empty one whenever I have popped all items from it:
std::queue<Foo*>().swap(q); // swap with an empty temporary
If I use a vector rather than a queue, deleting the stored objects immediately drops process memory usage. Can anyone explain why the queue isn't behaving like that?
Edit to clarify from the comments: I understand that the queue manages the memory of the pointer variables themselves (i.e. 4 or 8 bytes per pointer), and that I can't control when that memory gets released. What I'm concerned about is that the heap memory being pointed to, which I am managing through new and delete, is also not being released on time.
Edit 2: this seems to only happen while the process is being debugged, so it's not actually a problem in practice. Still weird, though.
Many implementations of delete/free don't always make memory available to the operating system again right away; the memory may simply remain available to your process for a while. So if you are measuring your RSS or similar, you can't necessarily expect it to drop the instant something is deleted. The behavior can also differ under a debugger, which would explain what you are seeing.
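If you do need a container's internal buffers back sooner, the swap-with-a-temporary idiom from the question generalizes; a minimal sketch (shrink_to_empty is an illustrative name):
#include <queue>

// Swap the container with a default-constructed temporary; the
// temporary's destructor then releases the old internal buffers.
template <class C>
void shrink_to_empty(C& c) {
    C().swap(c);
}

// usage, once all the pointed-to objects have been deleted:
// shrink_to_empty(q);  // q being the std::queue<Foo*> from the question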
I think I might be creating a memory leak here:
void buy();  // forward declaration, since commandoptions() calls it

void commandoptions(){
    cout << "You have the following options:\n 1) Buy something.\n 2) Check your balance.\n 3) See what you have bought.\n 4) Leave the store.\n\nEnter a number to make your choice: ";
    int input;
    cin >> input;
    if (input == 1)
        buy();
    //Continue the list of options.....
    else
        commandoptions(); //MEMORY LEAK IF YOU DELETE THE ELSE STATEMENTS!
}
inline void buy(){
    //buy something
    commandoptions();
}
Let's say commandoptions() has just executed for the first time since the program started. The user selects '1', meaning the buy() subroutine is executed by the commandoptions() subroutine.
After buy() executes, it calls commandoptions() again.
Does the first commandoptions() ever return? Or did I just make a memory leak?
If I make a subroutine that does nothing but call itself, it will cause a stack overflow because the other 'cycles' of that subroutine never exit. Am I doing, or close to doing, that here?
Note that I used the inline keyword on buy... does that make any difference?
I'd happily ask my professor, he just doesn't seem available. :/
EDIT: I can't believe it didn't occur to me to use a loop, but thanks, I learned something new about my terminology!
A memory leak is where you have allocated some memory using new like so:
char* memory = new char[100]; //allocate 100 bytes
and then, after using this memory, you forget to delete it:
delete[] memory; //return used memory back to system.
If you forget to delete, then this memory stays marked as in-use for as long as your program runs and cannot be reused for anything else. Since memory is a limited resource, doing this millions of times, say, without the program terminating would leave you with no memory left to use.
This is why we clean up after ourselves.
In C++ you'd use an idiom like RAII to prevent memory leaks.
class RAII
{
public:
RAII() { memory = new char[100]; }
~RAII() { delete[] memory; }
//other functions doing stuff
private:
char* memory;
};
Now you can use this RAII class, as so
{ // some scope
RAII r; // allocate some memory
//do stuff with r
} // end of scope destroys r and calls destructor, deleting memory
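For what it's worth, in modern C++ you would usually reach for a smart pointer instead of hand-writing such a class; a minimal sketch using std::unique_ptr:
#include <memory>

void example()
{
    auto memory = std::make_unique<char[]>(100); // allocate 100 bytes
    // do stuff with memory.get()
}   // memory is freed automatically when it goes out of scope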
Your code doesn't show any memory allocations, therefore has no visible leak.
Your code does seem to have endless recursion, without a base case that will terminate the recursion.
The inline keyword won't cause a memory leak.
If this is all the code you have, there shouldn't be a memory leak. It does look like you have infinite recursion, though. If the user types '1', then commandoptions() gets called again inside of buy(). Suppose they type '1' in that one too. Repeat ad infinitum, and you eventually crash because the stack got too deep.
Even if the user doesn't type '1', you still call commandoptions() again inside of commandoptions() at the else, which will have the exact same result -- a crash because of infinite recursion.
I don't see a memory leak with the exact code given however.
This is basically a recursion without a base case. So, the recursion will never end (until you run out of stack space that is).
For what you're trying to do, you're better off using a loop, rather than recursion.
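A sketch of the loop version (same behaviour, constant stack usage; it assumes buy() no longer calls commandoptions() itself):
#include <iostream>

void buy();  // assumed to no longer call commandoptions()

void commandoptions() {
    for (;;) {                          // loop instead of self-recursion
        std::cout << "Enter a number to make your choice: ";
        int input;
        std::cin >> input;
        if (input == 1)
            buy();
        else if (input == 4)
            return;                     // leave the store: the only way out
        // ... other options ...
    }
}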
And to answer your specific questions:
No, commandoptions never returns.
If you use a very broad definition of a memory leak, then this is a memory leak, since you're creating stack frames without ever removing them again. Most people wouldn't label it as such though (including me).
Yes, you are indeed gonna cause a stack overflow eventually.
The inline keyword won't make a difference in this.
This is not about a memory leak: you are making infinite calls to the commandoptions function no matter what the value of input is, which will result in a stack overflow. You need some exit point in your commandoptions function.
There is no memory leak here. What does happen (at least it looks that way in that butchered code snippet of yours) is that you get into an infinite loop. You might run out of stack space if tail call optimization doesn't kick in or isn't supported by your compiler (it's a bit hard to see whether or not your calls actually are in tail position though).