The general rule is that only objects allocated on the free store can cause memory leaks; objects created on the stack cannot.
Here is my doubt:
int main()
{
    myclass x;
    ...
    throw;
    ...
}
If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this point, the objects on the stack are not destroyed (their destructors are not invoked).
My understanding is: "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application." Thus this cannot be considered a memory leak.
Am I correct?
In a hosted environment (e.g. your typical Unix / Windows / Mac OS X machine, even DOS), when the application terminates, all the memory it occupied is automatically reclaimed by the operating system. Therefore, it doesn't make sense to worry about such memory leaks.
In some cases, before an application terminates, you may want to release all the dynamic memory you allocated in order to detect potential memory leaks through a leak detector, like valgrind. However, even in such a case, the example you describe wouldn't be considered a memory leak.
In general, failing to call a destructor is not the same as causing a memory leak. Memory leaks stem from memory allocated on the heap (with new or malloc or container allocators). Memory allocated on the stack is automatically reclaimed when the stack is unwound. However, if an object holds some other resource (say a file or a window handle), failing to call its destructor will cause a resource leak, which can also be a problem. Again, modern OSs will reclaim their resources when an application terminates.
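To make the resource-leak point concrete, here is a minimal sketch (the FileHolder class is made up for illustration) of an object whose destructor releases something other than memory:

#include <cstdio>

// If ~FileHolder() never runs (e.g. abort() without stack unwinding),
// the FILE* stays open until the OS reclaims it at process exit.
class FileHolder
{
    std::FILE* f_;
public:
    explicit FileHolder(const char* path) : f_(std::fopen(path, "r")) {}
    ~FileHolder() { if (f_) std::fclose(f_); }
};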
Edit: as GMan mentioned, "throw;" re-throws a previously thrown exception, or, if there is none, immediately terminates. Since there is none in this case, immediate termination is the result.
Terminating a process always cleans up any leftover userland memory in any modern OS, so this is not typically considered a "memory leak", which is defined as unreferenced memory that is never deallocated in a running process. However, it's really up to the OS whether such a thing counts as a "memory leak".
The answer is: it depends on the OS. I can't think of a modern OS that does not do it this way. But on older systems (Windows up to 3.1, I think, and some old embedded Linux platforms), if a program closed without releasing its memory, the OS would hold onto those allocations until you rebooted.
Memory leaks are considered a problem because a long running application will slowly bleed away system memory and may in the worst case make the whole machine unusable due to low memory conditions. In your case, the application terminates and all memory allocated to the application will be given back to the system, so hardly a problem.
The real question is, "Does myclass allocate any memory itself that must be free/deleted?"
If it does not -- if the only memory it uses is its internal members -- then it exists entirely on the stack. Once control leaves that function (however it does), the memory on the stack is reclaimed and reused. myclass is gone. That's just the way stacks work.
If myclass does allocate memory that needs to be freed in its dtor, then you are still in luck, as the dtor will be called as the stack is unwound during the throw. The dtor will already have been called before the exception is declared unhandled and terminate() is called.
The only place you will have a problem is if myclass has a dtor, and the dtor throws an exception of its own. The second throw occurring during the stack unwind from the first throw will cause terminate() to be called immediately, without any more dtors being called.
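A minimal sketch of that failure mode (ThrowingDtor is a made-up name; noexcept(false) is needed in C++11 and later, where destructors default to noexcept):

#include <stdexcept>

struct ThrowingDtor
{
    ~ThrowingDtor() noexcept(false)
    {
        throw std::runtime_error("second exception, from the dtor");
    }
};

int main()
{
    try
    {
        ThrowingDtor t;
        throw std::runtime_error("first exception");
        // Unwinding destroys t, whose dtor throws while the first
        // exception is still in flight: terminate() is called at once
        // and no further destructors run.
    }
    catch (...) { }   // never reached
}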
From OP,
If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this point, the objects on the stack are not destroyed (their destructors are not invoked).
This is implementation-defined behavior.
$15.3/9: "If no matching handler is found in a program, the function terminate() is called; whether or not the stack is unwound before this call to terminate() is implementation-defined (15.5.1)."
Therefore, whether this constitutes a memory leak or not is also implementation-defined, I guess.
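If you want the unwinding (and thus the destructor call) guaranteed regardless of implementation, the usual idiom is to catch in main() and rethrow. A sketch, with myclass standing in for the class from the question and a real exception object in place of the bare throw; (which would terminate immediately):

#include <cstdio>
#include <stdexcept>

struct myclass   // stand-in for the class in the question
{
    ~myclass() { std::puts("~myclass ran"); }
};

int main()
{
    try
    {
        myclass x;
        throw std::runtime_error("oops");
    }
    catch (...)
    {
        // x has already been destroyed here: unwinding to a matching
        // handler is guaranteed, unlike unwinding to terminate().
        throw;   // rethrow; the program still dies, but cleanly unwound
    }
}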
My understanding is: "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application." Thus this cannot be considered a memory leak.
Am I correct?
A memory leak is a type of programming error that ranks somewhat lower on the scale of programming errors than an uncaught exception.
IOW, if the program doesn't terminate properly, i.e. crashes, then it is too soon to speak about memory leaks.
On another note, most memory analyzers I have worked with over the past decade would not raise any memory-leak alarm in this case, because they do not raise alarms when a program simply crashes. One first has to make the program not crash, and only then debug the memory leaks.
Related
Suppose I have a function that uses "new", do I need to set aside some emergency memory in case "new" fails? Such as:
#include <new>   // std::bad_alloc

static char* emerg_mem = new char[EMERG_MEM_SZ];

FooElement* Foo::createElement()
{
    try
    {
        return new FooElement();
    }
    catch (const std::bad_alloc&)   // catch by const reference, not by value
    {
        // Release the reserve block so destructors and cleanup code
        // have some memory to work with.
        delete[] emerg_mem;
        emerg_mem = NULL;
        return NULL;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to articulate a reason why your destructors would need to allocate dynamic memory in the first place. Such a requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, graceful exit is often possible without allocating any dynamic memory. Secondly, a program running within the protection of an operating system doesn't necessarily need to terminate gracefully in such a dire situation as lack of memory.
P.S. Some systems (Linux in particular, under certain configurations) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that point the process (or some other process) is killed to free some memory. On such a system there is no way in C++ to recover from a lack of memory.
I would say no.
When your application is out of memory and throws an exception, the stack will start to unwind (destroying objects and releasing memory as it goes). As a general rule, destructors should not be allocating dynamic memory; rather, they should be releasing it.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before it actually throws an out-of-memory exception (as the OS tries to consolidate memory to get you that elusive slot).
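A sketch of that catch-and-continue pattern (runTask and the absurd allocation size are made up; the request is half the address space, so it fails even on systems that overcommit):

#include <cstddef>   // std::size_t
#include <cstdio>
#include <new>       // std::bad_alloc

bool runTask()   // hypothetical discrete task; results can be discarded
{
    try
    {
        char* big = new char[static_cast<std::size_t>(-1) / 2];
        // ... use big ...
        delete[] big;
        return true;
    }
    catch (const std::bad_alloc&)
    {
        // Everything constructed inside the try block was destroyed
        // during unwinding, so we can report the failure and carry on.
        return false;
    }
}

int main()
{
    if (!runTask())
        std::fprintf(stderr, "task skipped: out of memory\n");
}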
Suppose we have a program where we allocate some memory, and some lines later there is an assert statement. If the assert fails, what happens to the allocated memory? Does it get freed before the program stops?
On failure, assert writes the error to stderr and calls abort(), which, unlike exit(), doesn't execute the functions registered with atexit(), nor does it call destructors.
Hence none of your destructors or clean-up code can run. So it is up to the OS, since the memory isn't freed by the program before its "unexpected" termination.
This is probably by design, as calling destructors might cause further errors. The program terminates at the failed assert, executing no further code.
The memory stays allocated as the assert failure brings down your program.
As part of destroying the process, any modern desktop OS will reclaim the memory. Some embedded operating systems might not be able to do this, although I don't have the name of one on hand.
You can detect memory that has to be reclaimed by the OS this way by using a utility such as Valgrind.
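For example, this tiny program (the file name leak.cpp is arbitrary) leaves a heap block behind when the assert fires, and Valgrind reports it:

// leak.cpp -- compile with: g++ -g leak.cpp -o leak
// then run:                 valgrind ./leak
#include <cassert>

int main()
{
    int* p = new int[100];   // 400 bytes on the heap
    assert(1 + 1 == 3);      // fails: prints to stderr, calls abort()
    delete[] p;              // never reached; Valgrind flags the block
                             // in its leak summary at exit
}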
I read in Google's C++ coding standards that Google does not use exceptions. If exceptions are not used, how do you free memory when errors occur in your program?
For example, f() calls g(), and if there is an error in g(), I should free all memory allocated in g(), and then throw an exception to f(). Once f() catches the exception, f() will free all memory allocated in f() and exit the program.
If exceptions are not used, and there is an error in g(), can I force an exit with exit(0), and will the C++ program be smart enough to free all the memory that was allocated? My guess is: since C++ maintains a stack and a heap, once the program exits, C++ will automatically free both?
The operating system cleans up all used memory and file handles when a process is terminated for whatever reason.
I have heard that some memory types, such as COM global heap memory on Windows, cannot be freed for you. However, most memory and handles are cleaned up, because the OS has to cope with the possibility that your application crashed. You can certainly count on it for process-local memory and most handles, like file handles. In the general case, you can assume the OS will clean up after you when your application exits.
Also, don't ever, ever follow Google's style guide. It's not a guide for C++; it's a guide for C++ minus everything you have to take away to make it C. It might work for Google (dubiously), but it definitely won't work for anyone else.
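For what it's worth, the usual exception-free pattern is RAII plus error codes: destructors still run on every return path, and failure travels up as a return value. A sketch (the function names mirror the question; the failure itself is faked):

#include <cstdio>
#include <memory>

bool g()
{
    // unique_ptr frees the buffer on every return path, so no manual
    // cleanup is needed before the early return.
    std::unique_ptr<int[]> buffer(new int[1024]);
    bool ok = false;          // pretend the work failed
    if (!ok)
        return false;         // buffer is released here, automatically
    return true;
}

int f()
{
    if (!g())
    {
        std::fprintf(stderr, "g() failed\n");
        return 1;             // propagate the error code up to main()
    }
    return 0;
}

int main()
{
    return f();               // non-zero exit status signals the error
}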
I am working on a school assignment, and we were told that whenever we have an input error we should print a message and exit the program. Obviously I use exit(1), but the problem is that I get memory leaks when using this function. I don't understand why: every single variable I've used was on the stack, not on the heap.
What should I do to prevent those memory leaks?
Thanks!
exit does not call the destructors of any stack-based objects, so if those objects have allocated any memory internally then yes, that memory will be leaked.
In practice it probably doesn't matter, as any likely operating system will reclaim the memory anyway. But if the destructors were supposed to do anything else, you'll have a problem.
exit doesn't mix well with C++ for this reason. You are better off just letting your program return from main to exit, or, if you need to exit from an internal function, throwing an exception instead, which will cause the call stack to be unwound and therefore destructors to be called.
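A sketch of that throw-instead-of-exit approach (Resource and readInput are made-up names):

#include <cstdio>
#include <stdexcept>

struct Resource
{
    ~Resource() { std::puts("Resource cleaned up"); }   // runs on unwinding
};

void readInput()
{
    Resource r;
    // On bad input, throw instead of calling exit(1); exit(1) here
    // would skip ~Resource(), the throw does not.
    throw std::runtime_error("input error");
}

int main()
{
    try
    {
        readInput();
    }
    catch (const std::exception& e)
    {
        std::fprintf(stderr, "%s\n", e.what());
        return 1;   // destructors already ran during stack unwinding
    }
}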
When using the exit function, your program will terminate and all memory allocated by it will be released. There will be no memory leak.
EDIT:
From your comments, I can understand you're concerned that your objects aren't destroyed before termination (i.e. their destructor isn't called). This however doesn't constitute a memory leak, since the memory is released by the process and made available to the system. If you're counting on object destructors to perform operations important to your workflow, I suggest returning an error code instead of using exit and propagate that error code up to main().
EDIT2:
According to the standard, calling exit() during the destruction of an object with static storage duration results in undefined behavior. Are you doing that?
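For reference, that situation looks like this (a sketch; AtExitBomb is a made-up name):

#include <cstdlib>

struct AtExitBomb
{
    ~AtExitBomb()
    {
        std::exit(1);   // undefined behavior: exit() called during the
                        // destruction of a static-storage-duration object
    }
};

static AtExitBomb bomb;   // destroyed when the program terminates normally

int main() { }            // returning from main triggers ~AtExitBomb()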
The solution is to not use exit() at all. You write your program using RAII (use classes for resource management) and throw an exception when something goes wrong. Then all memory is reclaimed thanks to destructors being called.
You don't have a real memory leak. When a program terminates, the OS frees all the memory the program used.
Every process can use heap memory to store and share data within the process. There is a rule in programming: whenever we take some space in heap memory, we need to release it once the job is done, or else we get a memory leak.
int *pIntPtr = new int;
...
delete pIntPtr;
My question: Is heap memory per-process?
If YES,
then a memory leak is possible only while the process is running.
If NO,
then it means the OS is able to retain data in memory somewhere. If so, is there a way to access this memory from another process? That could also become a way to do inter-process communication.
I suppose the answer to my question is YES. Please provide your valuable feedback.
On almost every system currently in use, heap memory is per-process. On older systems without protected memory, heap memory was system-wide. (In a nutshell, that's what protected memory does: it makes your heap and stack private to your process.)
So in your example code, on any modern system, if the process terminates before delete pIntPtr is called, the memory will still be freed (though the pointed-to object's destructor, not that an int has one, will not be called).
Note that protected memory is an implementation detail, not a feature of the C++ or C standards. A system is free to share memory between processes (modern systems just don't, because it's a good way to get your butt handed to you by an attacker).
In most modern operating systems each process has its own heap that is accessible by that process only and is reclaimed once the process terminates - that "private" heap is usually used by new. Also there might be a global heap (look at Win32 GlobalAlloc() family functions for example) which is shared between processes, persists for the system runtime and indeed can be used for interprocess communications.
Generally the allocation of memory to a process happens at a lower level than heap management.
In other words, the heap is built within the process virtual address space given to the process by the operating system and is private to that process. When the process exits, this memory is reclaimed by the operating system.
Note that C++ does not mandate this, this is part of the execution environment in which C++ runs, so the ISO standards do not dictate this behaviour. What I'm discussing is common implementation.
In UNIX, the brk and sbrk system calls were used to allocate more memory from the operating system to expand the heap. Then, once the process finished, all this memory was given back to the OS.
The normal way to get memory that can outlive a process is with shared memory (under UNIX-type operating systems; not sure about Windows). This can still result in a leak, but of system resources rather than process resources.
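A sketch of memory outliving its creating process via POSIX shared memory (the name "/demo_shm" is arbitrary; assumes a Linux-like system, and older glibc needs -lrt when linking):

#include <fcntl.h>      // shm_open, O_CREAT, O_RDWR
#include <sys/mman.h>   // mmap, munmap, shm_unlink
#include <unistd.h>     // ftruncate, close
#include <cstring>

int main()
{
    // Create a named shared-memory object; it persists after this
    // process exits, until some process calls shm_unlink("/demo_shm").
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    if (ftruncate(fd, 4096) == -1) return 1;

    void* p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;
    std::strcpy(static_cast<char*>(p), "this outlives the process");

    // Forgetting shm_unlink() "leaks" a system-wide resource rather
    // than a per-process one -- exactly the kind described above.
    munmap(p, 4096);
    close(fd);
    return 0;
}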
There are some special purpose operating systems that will not reclaim memory on process exit. If you're targeting such an OS you likely know.
Most systems will not allow you to access the memory of another process, but again...there are some unique situations where this is not true.
The C++ standard deals with this situation by not making any claim about what will happen if you fail to release memory and then exit, nor what will happen if you attempt to access memory that isn't explicitly yours to access. This is the very essence of what "undefined behavior" means and is the core of what it means for a pointer to be "invalid". There are more issues than just these two, but these two play a part.
Normally the O/S will reclaim any leaked memory when the process terminates.
For that reason I reckon it's OK for C++ programmers to never explicitly free any memory which is needed until the process exits; for example, any 'singletons' within a process are often not explicitly freed.
This behaviour may be O/S-specific, though (although it's true for, e.g., both Windows and Linux): it's not, strictly speaking, part of the C++ standard.
For practical purposes, the answer to your question is yes. Modern operating systems will generally release memory allocated by a process when that process is shut down. However, to depend on this behavior is a very shoddy practice. Even if we can be assured that operating systems will always function this way, the code is fragile. If some function that fails to free memory suddenly gets reused for another purpose, it might translate to an application-level memory leak.
Nevertheless, the nature of this question and the example posted compel me, ethically, to point you and your team toward RAII.
int *pIntPtr = new int;
...
delete pIntPtr;
This code reeks of memory leaks. If anything in [...] throws, you have a memory leak. There are several solutions:
int *pIntPtr = 0;
try
{
    pIntPtr = new int;
    ...
}
catch (...)
{
    delete pIntPtr;
    throw;
}
delete pIntPtr;
A second solution uses nothrow (not necessarily much better than the first, but it allows sensible initialization of pIntPtr at the point where it is defined):
int *pIntPtr = new (std::nothrow) int;   // requires <new>
if (pIntPtr)
{
    try
    {
        ...
    }
    catch (...)
    {
        delete pIntPtr;
        throw;
    }
    delete pIntPtr;
}
And the easy way:
scoped_ptr<int> pIntPtr(new int);
...
In this last and finest example, there is no need to call delete on pIntPtr as this is done automatically regardless of how we exit this block (hurray for RAII and smart pointers).
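(In C++11 and later the same idea is spelled with the standard smart pointer; a sketch:)

#include <memory>

std::unique_ptr<int> pIntPtr(new int);   // or std::make_unique<int>() in C++14
...
// delete runs automatically when pIntPtr goes out of scope,
// whether we leave normally or via an exception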