deallocating memory when using exit(1), c++

I am working on a school assignment, and we were told that whenever we have an input
error we should print a message and exit the program. Obviously I use exit(1), but
the problem is I have memory leaks when using this function. I don't understand why: every single variable I've used was on the stack and not on the heap.
What should I do to prevent those memory leaks?
Thanks!

exit does not call the destructors of any stack-based objects, so if those objects have allocated any memory internally then yes, that memory will be leaked.
In practice it probably doesn't matter, as any likely operating system will reclaim the memory anyway. But if the destructors were supposed to do anything else, you'll have a problem.
exit doesn't really mix well with C++ for this reason. You are better off just allowing your program to return from main to exit, or, if you need to exit from an internal function, throwing an exception instead, which will cause the call stack to be unwound and therefore destructors to be called.
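As a minimal sketch of the difference (the Logger class and its message are made-up names, purely for illustration):

#include <cstdio>
#include <cstdlib>   // for exit

struct Logger {
    ~Logger() { std::puts("destructor ran"); }   // skipped entirely if exit() is used
};

int main()
{
    Logger log;
    // exit(1);   // would terminate here without running ~Logger()
    return 1;     // returning from main destroys log, so ~Logger() runs
}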

When using the exit function, your program will terminate and all memory allocated by it will be released. There will be no memory leak.
EDIT:
From your comments, I can understand you're concerned that your objects aren't destroyed before termination (i.e. their destructor isn't called). This, however, doesn't constitute a memory leak, since the memory is released by the process and made available to the system. If you're counting on object destructors to perform operations important to your workflow, I suggest returning an error code instead of using exit and propagating that error code up to main().
EDIT2:
According to the standard, calling exit() during the destruction of an object with static storage duration results in undefined behavior. Are you doing that?
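A rough sketch of that error-code approach (parseInput and its behaviour are hypothetical, just to show the shape of the control flow):

#include <cstdio>
#include <cstdlib>
#include <string>

// Hypothetical parser: reports bad input with a return value instead of exit(1).
bool parseInput(const std::string& line)
{
    if (line.empty()) {
        std::fputs("input error\n", stderr);
        return false;
    }
    // ... normal processing ...
    return true;
}

int main()
{
    std::string line;                 // pretend this was read from the user
    if (!parseInput(line))
        return EXIT_FAILURE;          // locals are destroyed normally on the way out
    return EXIT_SUCCESS;
}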

The solution is to not use exit() at all. You write your program using RAII (use classes for resource management) and throw an exception when something goes wrong. Then all memory is reclaimed thanks to destructors being called.
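A bare-bones sketch of that idea (Buffer is just an example resource-owning class, not anything from the question):

#include <cstddef>
#include <stdexcept>

class Buffer {                        // RAII: the object owns its memory
public:
    explicit Buffer(std::size_t n) : data_(new char[n]) {}
    ~Buffer() { delete[] data_; }     // runs during stack unwinding
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
private:
    char* data_;
};

void process()
{
    Buffer buf(1024);
    throw std::runtime_error("input error");  // buf is released as the exception propagates
}

int main()
{
    try {
        process();
    } catch (const std::exception&) {
        return 1;                     // buf was already destroyed during unwinding
    }
    return 0;
}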

You don't have a real memory leak. When a program terminates, the OS frees all the memory the program used.

Related

Is it necessary to put aside some emergency memory when new fails?

Suppose I have a function that uses "new"; do I need to set aside some emergency memory in case "new" fails? Such as:
static char* emerg_mem = new char[EMERG_MEM_SZ];

FooElement* Foo::createElement()
{
    try
    {
        FooElement* ptr;
        ptr = new FooElement();
        return ptr;
    }
    catch (const std::bad_alloc&)
    {
        // Release the reserve so clean-up code still has memory to work with.
        delete[] emerg_mem;
        emerg_mem = NULL;
        return NULL;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to argue some reason why your destructors would need to allocate dynamic memory in the first place. Such requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, graceful exit is often possible without allocating any dynamic memory. Secondly, a program running within the protection of an operating system doesn't necessarily need to terminate gracefully in such a dire situation as lack of memory.
P.S. Some systems (Linux in particular, given certain configuration) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that time the process (or some other process) is killed to free some memory. On such system there is no way in C++ to recover from lack of memory.
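For what it's worth, a sketch of such a graceful exit, assuming std::bad_alloc is actually thrown (which, as noted above, overcommit can prevent):

#include <cstddef>
#include <cstdio>
#include <new>

int main()
{
    try {
        // Deliberately enormous request so the allocation is likely to fail.
        char* p = new char[static_cast<std::size_t>(1) << 60];
        delete[] p;
    } catch (const std::bad_alloc&) {
        // Graceful exit without allocating anything: a static string and stderr suffice.
        std::fputs("out of memory\n", stderr);
        return 1;
    }
    return 0;
}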
I would say no.
When your application is out of memory and throws an exception, the stack will start to unwind (thus destroying and releasing memory as it goes). As a general rule, destructors should not be using dynamic memory allocation; rather, they should be releasing memory.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before actually throwing an out-of-memory exception (as the OS tries to consolidate memory to get you that elusive slot).
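A small sketch of that "discrete task" case, using a standard container as the RAII layer (tryBuildTable and the sizes are made up):

#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

// Hypothetical task: tries to build a large table; the vector releases its
// storage during unwinding if the allocation fails.
bool tryBuildTable(std::size_t n)
{
    try {
        std::vector<double> table(n);
        // ... fill and use the table ...
        return true;
    } catch (const std::bad_alloc&) {
        return false;                 // anything partially built has been released
    }
}

int main()
{
    if (!tryBuildTable(static_cast<std::size_t>(1) << 58))
        std::puts("table too large, continuing without it");
    return 0;
}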

Assert frees memory in C++

Suppose we have a program where we allocate some memory and then have an assert statement some lines after. If the assert statement throws an error, what happens to the allocated memory? Does it get freed before stopping the program?
assert, on failure, writes the error to stderr and calls abort(), which, unlike exit(), doesn't execute the functions registered with atexit(), nor does it call destructors.
Hence, none of your destructors or other clean-up code can be called. So it is up to the OS, as the memory isn't freed by the program before its "unexpected" termination.
This is probably by design, as calling destructors might result in some further error. It terminates at that failed assert, executing no further code.
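A small sketch of that behaviour (Resource is a made-up class; running it under a leak checker such as Valgrind shows the block left for the OS to reclaim):

#include <cassert>
#include <cstdio>

struct Resource {
    char* buf = new char[64];
    ~Resource() { std::puts("freeing"); delete[] buf; }   // never runs if the assert fails
};

int main()
{
    Resource r;
    assert(1 + 1 == 3);   // fails (in a non-NDEBUG build): prints a diagnostic and calls abort()
    return 0;             // never reached; the OS reclaims buf at process teardown
}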
The memory stays allocated as the assert failure brings down your program.
As part of destroying the process, any modern desktop OS will reclaim the memory. Some embedded operating systems might not be able to do this, although I don't have the name of one on hand.
You can detect memory that has to be reclaimed by the OS this way by using a utility such as Valgrind.

how to make smart pointer go out of scope at exit()

I've spent a bit of time writing an application for practice and I've taken a liking to using smart pointers throughout so as to avoid memory leaks in case I forgot to delete something. At the same time, I've also taken a liking to using exceptions to report failure in a constructor and attempt to handle it. When it cannot be handled, however, I would like the program to exit at that spot, either through a call to assert() or exit(). However, using the crtdbg library in MSVC, it reports a memory leak from the smart pointers that have anything dynamically allocated to them. This means one of two things to me: 1) the smart pointers never went out of scope of where they were allocated and never deallocate, causing some memory leaks, or 2) crtdbg is not catching the deallocation because it doesn't exit at main. From this page, though, using _CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF ); at the beginning of the program should catch the leaks from any exit point, yet I still get the memory leak errors using that.
So my question to you guys: will the memory actually be deallocated at exit or assert, and if not, might I be able to derive from std::shared_ptr and implement my own solution for cataloging dynamically allocated objects to be deallocated just before the call to exit or assert, or is that too much work for a more simple solution?
When the program exits, the memory is reclaimed by the OS anyway, so if leaking is worrying you, it shouldn't.
If, however, you have logic in your destructors, and the objects must be destroyed - calling exit explicitly bypasses all deallocation. A workaround for this is to throw an exception where you would call exit, catch it in main and return.
#include "stdlib.h"
void foo()
{
//exit(0);
throw killException();
}
int main
{
try
{
foo();
}
catch (killException& ex)
{
//exit program calling destructors
return EXIT_FAILURE;
}
}
The real problem is not with memory but with other resources. The OS will (in most cases, unless you are running an embedded system) recover the memory from the process when it terminates, so memory will not be leaked in the OS. The actual problem might be with other resources, external to your process, that might need to be released before your process completes...
At any rate, why do you prefer to abort or exit rather than letting the exception propagate up? In general you should handle only the exceptions that you want to manage and let the others fall through. While you might not be able to recover from it, your caller might actually be able to. By capturing the exception and exiting the program on the spot you are removing the choice of handling from the users.

Does a c++ program automatically free memory when it crashes?

I read in the Google C++ coding standards that Google does not use exceptions. If exceptions are not used, how do you free memory when errors occur in your program?
For example, f() calls g(), and if there is an error in g(), I should free all memory allocated in g(), and then throw an exception to f(). Once f() catches the exception, f() will free all memory allocated in f(), and exit the program.
If exceptions are not used, and there is an error in g(), can I force an exit with exit(0), and will the C++ program be smart enough to free all memory that is allocated? My guess is, since C++ maintains a stack and heap, once the program exits, C++ will automatically free both stack and heap?
The operating system cleans up all used memory and file handles when a process is terminated for whatever reason.
I have heard that some memory types, like COM global heap memory on Windows, cannot be freed for you. However, most memory and handles are cleaned up, because the OS has to cope with the condition that your application crashed. You can certainly count on it in the case of process-local memory and most handles, such as file handles. In the general case, you can assume that the OS will clean up after you when your application exits.
Also, don't ever, ever follow Google's style guide. It's not for C++, it's for C++ minus everything you have to take away to make it C. It might work for Google (dubiously), but it definitely won't work for anyone else.

Is this considered a memory leak?

The general rule: only objects allocated on the free store can cause memory leaks, but objects created on the stack don't.
Here is my doubt:
int main()
{
    myclass x;
    ...
    throw;
    ...
}
If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this time, the objects on the stack are not destroyed (the destructor is not invoked).
My understanding is "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application". Thus this cannot be considered a memory leak.
Am I correct?
In a hosted environment (e.g. your typical Unix / Windows / Mac OS X, even DOS, machine) when the application terminates all the memory it occupied is automatically reclaimed by the operating system. Therefore, it doesn't make sense to worry about such memory leaks.
In some cases, before an application terminates, you may want to release all the dynamic memory you allocated in order to detect potential memory leaks through a leak detector, like valgrind. However, even in such a case, the example you describe wouldn't be considered a memory leak.
In general, failing to call a destructor is not the same as causing a memory leak. Memory leaks stem from memory allocated on the heap (with new or malloc or container allocators). Memory allocated on the stack is automatically reclaimed when the stack is unwound. However, if an object holds some other resource (say a file or a window handle), failing to call its destructor will cause a resource leak, which can also be a problem. Again, modern OSs will reclaim their resources when an application terminates.
edit: as mentioned by GMan, "throw;" re-throws a previously thrown exception, or if there is none, immediately terminates. Since there is none in this case, immediate termination is the result.
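For the "other resource" part, a sketch of an RAII file wrapper (FileGuard and the file name are made up) whose destructor is what actually releases the handle:

#include <cstdio>

struct FileGuard {
    std::FILE* f;
    explicit FileGuard(const char* path) : f(std::fopen(path, "w")) {}
    ~FileGuard() { if (f) std::fclose(f); }   // skipped if the program dies via abort()/terminate()
    FileGuard(const FileGuard&) = delete;
    FileGuard& operator=(const FileGuard&) = delete;
};

int main()
{
    FileGuard log("out.txt");
    // If the program ends through abort() or an unhandled exception with no unwinding,
    // ~FileGuard() never runs; a modern OS still closes the handle at process exit.
    return 0;   // normal return: the destructor runs and the file is closed properly
}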
Terminating a process always cleans up any leftover userland memory in any modern OS, so it is not typically considered a "memory leak", which is defined as unreferenced memory not deallocated in a running process. However, it's really up to the OS as to whether such a thing is considered a "memory leak".
The answer is: it depends on the OS. I can't think of a modern OS that does not do it this way. But on old systems (up to Windows 3.1, I think, and some old embedded Linux platforms), if the program closed without releasing its memory, the OS would hold those allocations until you rebooted.
Memory leaks are considered a problem because a long running application will slowly bleed away system memory and may in the worst case make the whole machine unusable due to low memory conditions. In your case, the application terminates and all memory allocated to the application will be given back to the system, so hardly a problem.
The real question is, "Does myclass allocate any memory itself that must be free/deleted?"
If it does not (if the only memory it uses is its internal members), then it exists entirely on the stack. Once it leaves that function (however it does), the memory on the stack is reclaimed and reused. myclass is gone. That's just the way stacks work.
If myclass does allocate memory that needs to be freed in its dtor, then you are still in luck, as the dtor will be called as the stack is unwound during the throw. The dtor will have already been called before the exception is declared unhandled and terminate is called.
The only place you will have a problem is if myclass has a dtor, and the dtor throws an exception of its own. The second throw, occurring during the stack unwind from the first throw, will cause terminate to be called immediately, without any more dtors being called.
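A sketch of that double-throw case (note that destructors are implicitly noexcept since C++11, so this one must be marked noexcept(false) for its exception to escape at all):

#include <stdexcept>

struct ThrowsInDtor {
    ~ThrowsInDtor() noexcept(false) { throw std::runtime_error("from dtor"); }
};

int main()
{
    try {
        ThrowsInDtor t;
        throw std::runtime_error("original");   // begins stack unwinding
        // ~ThrowsInDtor() throws while unwinding: std::terminate() is called
        // and no further destructors run
    } catch (const std::exception&) {
        // never reached
    }
    return 0;
}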
From OP,
If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this time, the objects on the stack are not destroyed (the destructor is not invoked).
This is an implementation defined behavior.
§15.3/9: "If no matching handler is found in a program, the function terminate() is called; whether or not the stack is unwound before this call to terminate() is implementation-defined (15.5.1)."
Therefore, whether this constitutes a memory leak or not is also implementation-defined behavior, I guess.
My understanding is "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application". Thus this cannot be considered a memory leak.
Am I correct?
A memory leak is a type of programming error that ranks somewhat lower on the scale of programming errors than an uncaught exception.
In other words, if the program doesn't terminate properly, a.k.a. crashes, then it is too soon to speak about memory leaks.
On another note, most memory analyzers I have worked with over the past decade would not trigger any memory leak alarm in this case, because they do not trigger any alarms when a program simply crashes. One first has to make the program not crash, and only then debug the memory leaks.