How to make a smart pointer go out of scope at exit() - C++

I've spent a bit of time writing an application for practice, and I've taken a liking to using smart pointers throughout to avoid memory leaks in case I forget to delete something. At the same time, I've also taken a liking to using exceptions to report failure from a constructor and attempting to handle it. When the failure can't be handled, however, I would like to exit the program at that spot, either through a call to assert() or exit(). However, the crtdbg library in MSVC reports memory leaks from any smart pointer that still has something dynamically allocated to it. That suggests one of two things to me: 1) the smart pointers never go out of the scope where they were allocated and never deallocate, causing real leaks, or 2) crtdbg isn't catching the deallocation because the program doesn't exit through main. From that page, though, calling _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF); at the beginning of the program should catch leaks from any exit point, and I still get the memory leak errors using that.
So my question is: will the memory actually be deallocated at exit or assert? And if not, could I derive from std::shared_ptr and implement my own solution for cataloging dynamically allocated objects so they get deallocated just before the call to exit or assert, or is that too much work when there's a simpler solution?
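Here is a minimal sketch of the kind of setup I mean, assuming an MSVC debug build (the shared_ptr allocation just stands in for my real objects):
#include <crtdbg.h>   // MSVC debug-heap API
#include <cstdlib>
#include <memory>

int main()
{
    // Report leaks from any exit point, not only from a normal return out of main.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    auto p = std::make_shared<int>(42);

    // exit(1);   // would bypass p's destructor, so the block above gets reported as leaked
    return 0;     // normal return destroys p first; nothing is reported
}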

When the program exits, the memory is reclaimed by the OS anyway, so if the leak report itself is what's worrying you, it shouldn't.
If, however, you have logic in your destructors and the objects must be destroyed, be aware that calling exit explicitly bypasses all of that cleanup. A workaround is to throw an exception where you would have called exit, catch it in main, and return.
#include "stdlib.h"
void foo()
{
//exit(0);
throw killException();
}
int main
{
try
{
foo();
}
catch (killException& ex)
{
//exit program calling destructors
return EXIT_FAILURE;
}
}

The real problem is not memory, but other resources. The OS will (in most cases, unless you are running on an embedded system) recover the memory from the process when it terminates, so no memory is leaked system-wide. The actual problem might be other resources, external to your process, that need to be released before your process completes...
At any rate, why do you prefer to abort or exit rather than letting the exception propagate up? In general, you should handle only the exceptions that you want to manage and let the others fall through. While you might not be able to recover from a failure, your caller might actually be able to. By catching the exception and exiting the program on the spot, you are taking that choice away from the users of your code.
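For example, something along these lines (Config is just a hypothetical class whose constructor can fail):
#include <cstdlib>
#include <iostream>
#include <stdexcept>

struct Config {                        // hypothetical class whose constructor can fail
    explicit Config(bool ok) {
        if (!ok)
            throw std::runtime_error("config could not be loaded");
        // no exit()/abort() here: the caller decides what failure means
    }
};

int main()
{
    try {
        Config cfg(false);             // this caller chooses to report and quit...
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << '\n';
        return EXIT_FAILURE;
    }
    // ...but another caller could catch the same exception and fall back to defaults.
}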

Related

Is it necessary to put aside some emergency memory when new fails?

Suppose I have a function that uses "new"; do I need to set aside some emergency memory in case "new" fails? Such as:
#include <cstddef>   // NULL
#include <new>       // std::bad_alloc

static char* emerg_mem = new char[EMERG_MEM_SZ];

FooElement* Foo::createElement()
{
    try
    {
        return new FooElement();
    }
    catch (const std::bad_alloc&)
    {
        delete[] emerg_mem;   // release the reserve so cleanup code has room to run
        emerg_mem = NULL;
        return NULL;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to argue some reason why your destructors would need to allocate dynamic memory in the first place. Such a requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, graceful exit is often possible without allocating any dynamic memory. Secondly, a program running within the protection of an operating system doesn't necessarily need to terminate gracefully in such a dire situation as lack of memory.
P.S. Some systems (Linux in particular, under certain configurations) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that time the process (or some other process) is killed to free some memory. On such a system there is no way in C++ to recover from a lack of memory.
I would say no.
When your application runs out of memory and throws an exception, the stack will start to unwind (destroying objects and releasing memory as it goes). As a general rule, destructors should not be allocating dynamic memory; rather, they should be releasing it.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before it actually throws an out-of-memory exception (as the OS tries to consolidate memory to find you that elusive slot).
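A rough sketch of that catch-and-continue idea (buildHugeTable is a made-up task whose results can be discarded; 64-bit sizes assumed):
#include <cstddef>
#include <iostream>
#include <memory>
#include <new>

// Made-up discrete task whose results can simply be discarded on failure.
void buildHugeTable(std::size_t bytes)
{
    // RAII: if a later step throws, the unique_ptr releases the buffer during unwinding.
    std::unique_ptr<char[]> buffer(new char[bytes]);
    // ... fill and use the buffer ...
}

int main()
{
    try {
        buildHugeTable(std::size_t(1) << 46);   // may throw std::bad_alloc
    } catch (const std::bad_alloc&) {
        // The task's memory was already reclaimed while the stack unwound,
        // so we can log the failure and carry on with other work.
        std::cerr << "table skipped: not enough memory\n";
    }
}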

How to catch or handle a segfault from free() or delete

In C++, I have server code running continuously 24/7, but I sometimes get a segfault while freeing a buffer.
I tried try/catch as well:
try {
    free(partialBuf);
} catch (...) {
    printf("Caught partial buf free error");
}
Thanks in advance!
Since you're apparently able to use try/catch, you're writing C++ code. It helps to know which language you're using.
The solution then is to use std::shared_ptr. You may have multiple places in which a pointer goes out of scope. With shared_ptr you no longer call free, and as a bonus shared_ptr will call delete only once (after the last pointer goes out of scope).
However, you should now allocate memory with new instead of malloc.
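Roughly along these lines (Buf, producer and consumer are placeholders for whatever your real code does with partialBuf):
#include <memory>

struct Buf { char data[1024]; };                // placeholder for whatever partialBuf points to

void producer(const std::shared_ptr<Buf>& buf) { /* ... fill *buf ... */ }
void consumer(const std::shared_ptr<Buf>& buf) { /* ... read *buf ... */ }

int main()
{
    auto partialBuf = std::make_shared<Buf>();  // allocated with new, not malloc
    producer(partialBuf);
    consumer(partialBuf);
    // No free()/delete anywhere: the Buf is destroyed exactly once,
    // when the last shared_ptr referring to it goes out of scope.
}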
A segfault is not a C++ exception, so you cannot catch it with try/catch. A segfault can have any number of causes, but in 99.9% of cases it's a memory access bug :-) If the segfault happens during a call to delete or free(), chances are you have a double-free issue.
You could use GDB to debug and find out whether you are trying to free a pointer that was never allocated (or was already freed).

Assert frees memory in C++

Suppose we have a program where we allocate some memory and then have an assert statement some lines later. If the assert fails, what happens to the allocated memory? Does it get freed before the program stops?
On failure, assert writes the error to stderr and calls abort(), which, unlike exit(), doesn't execute the functions registered with atexit(), nor does it call destructors.
Hence, none of your destructors, clean-up code, etc. can run. So it is up to the OS to reclaim the memory, since the program doesn't free it before its "unexpected" termination.
This is probably by design, as calling destructors might result in some further error. It terminates at that failed assert, executing no further code.
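A small sketch of that behaviour, assuming a debug build where assert is compiled in (Logger is just a placeholder type):
#include <cassert>
#include <cstdio>

struct Logger {
    ~Logger() { std::puts("Logger destroyed"); }   // never printed when the assert fails
};

int main()
{
    Logger log;
    int result = -1;               // made-up failing computation
    assert(result >= 0);           // fails: prints the diagnostic and calls abort()
    std::puts("never reached");    // abort() skips this, the atexit handlers and ~Logger
}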
The memory stays allocated as the assert failure brings down your program.
As part of destroying the process, any modern desktop OS will reclaim the memory. Some embedded operating systems might not be able to do this, although I don't have the name of one on hand.
You can detect memory that has to be reclaimed by the OS this way by using a utility such as Valgrind.

Does a c++ program automatically free memory when it crashes?

I read in the Google C++ coding standards that Google does not use exceptions. If exceptions are not used, how do you free memory when errors occur in your program?
For example, f() calls g(), and if there is an error in g(), I should free all memory allocated in g() and then throw an exception to f(). Once f() catches the exception, f() frees all memory allocated in f() and exits the program.
If exceptions are not used and there is an error in g(), can I just force an exit with exit(0), and will the C++ program be smart enough to free all the memory that was allocated? My guess is that, since C++ maintains a stack and a heap, once the program exits both the stack and the heap are automatically freed?
The operating system cleans up all used memory and file handles when a process is terminated for whatever reason.
I have heard that some kinds of memory, like COM global heap memory on Windows, cannot be freed for you. However, most memory and handles are cleaned up, because the OS has to cope with the possibility that your application crashed. You can certainly count on it in the case of process-local memory and most handles like file handles, etc. In the general case, you can assume that the OS will clean up after you when your application exits.
Also, don't ever, ever follow Google's style guide. It's not for C++, it's for C++ minus everything you have to take away to make it C. It might work for Google (dubiously), but it definitely won't work for anyone else.

deallocating memory when using exit(1), c++

I am working on a school assignment, and we were told that whenever we have an input error we should print a message and exit the program. Obviously I use exit(1), but the problem is that I have memory leaks when using this function. I don't understand why - every single variable I've used was on the stack and not on the heap.
What should I do to prevent those memory leaks?
Thanks!
exit does not call the destructors of any stack-based objects, so if those objects have allocated any memory internally then yes, that memory will be leaked.
In practice it probably doesn't matter, as any likely operating system will reclaim the memory anyway. But if the destructors were supposed to do anything else, you'll have a problem.
exit doesn't really mix well with C++ for this reason. You are better off just letting your program return from main to exit, or, if you need to exit from an internal function, throwing an exception instead, which will cause the call stack to be unwound and therefore the destructors to be called.
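For example, something like this sketch (Resource and readInput are made-up names):
#include <cstdio>
#include <cstdlib>
#include <stdexcept>

struct Resource {
    ~Resource() { std::puts("Resource released"); }
};

void readInput(bool bad)
{
    Resource r;                                   // stack object with a destructor
    if (bad) {
        // exit(1);                               // would skip ~Resource entirely
        throw std::runtime_error("bad input");    // unwinds the stack, so ~Resource runs
    }
}

int main()
{
    try {
        readInput(true);
    } catch (const std::exception& ex) {
        std::fprintf(stderr, "%s\n", ex.what());
        return EXIT_FAILURE;                      // normal return from main
    }
}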
When using the exit function, your program will terminate and all memory allocated by it will be released. There will be no memory leak.
EDIT:
From your comments, I can understand you're concerned that your objects aren't destroyed before termination (i.e. their destructor isn't called). This however doesn't constitute a memory leak, since the memory is released by the process and made available to the system. If you're counting on object destructors to perform operations important to your workflow, I suggest returning an error code instead of using exit and propagate that error code up to main().
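A rough sketch of what that error-code propagation could look like (loadData and processData are hypothetical helpers):
#include <cstdio>
#include <cstdlib>

// Hypothetical helpers: each reports failure through its return value
// instead of calling exit(), so objects up the call chain are still destroyed.
bool loadData()    { return false; }              // pretend this step failed
bool processData()
{
    if (!loadData())
        return false;                             // propagate the failure upward
    return true;
}

int main()
{
    if (!processData()) {
        std::fprintf(stderr, "processing failed\n");
        return EXIT_FAILURE;                      // locals were destroyed normally on the way here
    }
    return EXIT_SUCCESS;
}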
EDIT2:
According to the standard, calling exit() during the destruction of an object with static storage duration results in undefined behavior. Are you doing that?
The solution is not to use exit() at all. Write your program using RAII (use classes for resource management) and throw an exception when something goes wrong. Then all memory is reclaimed thanks to the destructors being called.
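Something along these lines, for example (parseInput is a made-up function; the point is that every resource is owned by an object):
#include <fstream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Made-up example: every resource is owned by an object, so throwing anywhere
// releases everything as the stack unwinds; no manual cleanup and no exit().
std::vector<int> parseInput(const std::string& path)
{
    std::ifstream in(path);                         // closed by its destructor
    if (!in)
        throw std::runtime_error("cannot open " + path);

    auto scratch = std::make_unique<int[]>(1024);   // freed by unique_ptr
    std::vector<int> values;                        // owns its own storage
    // ... fill values, throwing on malformed input ...
    return values;
}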
You don't have a real memory leak. When a program terminates, the OS frees all the memory the program used.