If you check this link http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=107
it's written:
"For example, the abort() and exit() library functions are never to be used in an object-oriented environment—even during debugging—because they don't invoke objects' destructors before program termination."
Why does the destructor need to be called when one calls exit? (Since the OS guarantees memory will be reclaimed whenever a program exits, right?)
Destructors can, and often do, perform operations beyond freeing memory and releasing resources. They are often used to make other guarantees, such as ensuring that user data is written to a file or that non-process-specific resources are left in a known state. The OS won't perform these kinds of operations on exit.
That being said, any program which relies on these kinds of actions is fundamentally flawed. exit and abort are not the only ways destructors can be skipped; there are many others, such as a user forcibly terminating the process or a power outage.
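For example, something like this hypothetical SessionLog class (the name and file are invented for illustration) relies on its destructor for the final write; exit() skips destructors of automatic objects, and abort() skips all of them:

#include <cstdlib>
#include <fstream>
#include <string>

// Hypothetical example: a destructor that persists user data.
class SessionLog {
public:
    explicit SessionLog(std::string path) : path_(std::move(path)) {}
    void record(const std::string& line) { lines_ += line + '\n'; }
    ~SessionLog() {                        // runs only on normal destruction/unwinding
        std::ofstream out(path_, std::ios::app);
        out << lines_;                     // the user's data reaches the file here
    }
private:
    std::string path_;
    std::string lines_;
};

int main() {
    SessionLog log("session.txt");
    log.record("something the user cares about");
    // std::exit(1);   // would skip ~SessionLog(): automatic objects are not destroyed
    // std::abort();   // skips every destructor and every atexit handler
    return 0;          // a normal return runs ~SessionLog() and the data is saved
}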
I definitely disagree with the use of never in the quoted passage. I can think of at least one situation where you absolutely don't want destructors executing: corrupted memory. At the point you detect corrupted memory you can no longer make any guarantees about the code in your process, destructors included. Code which should write data to a file might delete / corrupt it instead.
The only safe thing to do when memory corruption is detected is to exit the process as fast as possible.
First, I would like to advise: don't blindly believe everything you read.
As JaredPar said, the destructor may be doing things such as logging or closing OS resource handles. If you call abort or exit, those things will never happen.
But I certainly do not agree with the quote at all. abort is the best and fastest way of finding programming errors during the development cycle. As a developer you certainly don't sprinkle abort blindly throughout the code; you put it on conditions you know should never happen, when you know that you or another programmer have messed up somewhere and it's better to stop than to try to handle the error. I have gone through a situation where abort really saved my #$$.
Related
I'm writing a class that saves the state of the connected components of a graph, supporting dynamic connectivity; each time an edge is removed or added, I have to recalculate the neighbouring components to join or split them.
The only exception those methods can throw is std::bad_alloc. No other exception will be thrown by any of my dependencies. So the only possible failures are due to lack of memory in calls like std::unordered_set<...>::insert or std::deque<...>::push_back.
This complicates the design of my algorithms a lot, because I have to work on local data that records the differences, and then commit all the modifications from that cache inside a well-scoped try-catch block.
Readability is severely decreased, and the time needed to think through and write this exception-safe code increases a lot. Also, memory overcommit makes dealing with these exceptions feel a bit pointless.
What do you do in situations like that? Is it really important to ensure exception-safe code given that, if there is a real lack of memory, your code will probably fail anyway, but probably later, and the program as a whole will as well?
So, in short, is it worthwhile to deal with out-of-memory exceptions at all, considering that, as one comment points out, the exception-throwing mechanism itself could exhaust memory as well?
As you suggested, trying to handle out-of-memory situations gracefully within a process is somewhere between extremely difficult and impossible, depending on the memory behavior of the OS you are running on. On many OS's (such as Linux when configured with its default settings) an out-of-memory scenario can result in your process being simply killed without any warning or recourse, no matter how carefully your code tries to handle bad_alloc exceptions.
My suggestion is to instead have your application launch a child process (i.e. it can launch itself as a child process, with a special argument to let the child process know that it is the child process and not the parent process). Then the parent process simply waits for the child process to exit, and if it does, it relaunches the child process.
That has the advantage of not only being able to recover from an out-of-memory situation, but also helping you recover from any other problem that might cause your child process to crash or otherwise prematurely exit.
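A rough POSIX-flavored sketch of that pattern (the --worker flag and the restart policy are illustrative assumptions, not a prescription):

#include <cstdio>
#include <cstring>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int run_worker() {
    // ... the real application logic lives here ...
    return 0;
}

int main(int argc, char** argv) {
    // Child mode: just do the actual work.
    if (argc > 1 && std::strcmp(argv[1], "--worker") == 0)
        return run_worker();

    // Parent mode: keep relaunching the worker until it exits cleanly.
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {                                    // child: re-exec ourselves
            execl(argv[0], argv[0], "--worker", (char*)nullptr);
            _exit(127);                                    // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;                                      // clean exit: stop supervising
        std::fprintf(stderr, "worker died (status %d), restarting\n", status);
    }
}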
It is almost impossible to ensure the desired OOM handling at the application level, especially because, as @StaceyGirl mentioned, there is no guarantee you will even be able to throw std::bad_alloc. Instead, it is much more important (and easier) to manage memory allocation. By using memory pools and smart pointer templates you can achieve several advantages:
cleaner code
a single place where your memory allocation can fail and thus should be handled
the ability to ensure your application has the required (or planned) amount of memory
graceful degradation. Since you are decoupling the "allocate the next memory chunk for the pool" event from the "give me some memory from the pool" request, at the moment of truth (std::unordered_set<...>::insert etc.) you will be able to handle failure gracefully (not by throwing an exception) and your program will not halt unexpectedly. (A rough sketch follows this list.)
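One way to sketch that single-failure-point idea is with the standard C++17 polymorphic allocators; the 16 MiB budget below is an arbitrary assumption, and pool exhaustion still surfaces at insert, but the OS-level allocation happens in exactly one place:

#include <cstddef>
#include <cstdio>
#include <memory_resource>
#include <new>
#include <unordered_set>
#include <vector>

int main() {
    // Grab the whole planned budget up front; this is the one place where
    // the OS-level allocation can fail and be handled deliberately.
    std::vector<std::byte> budget;
    try {
        budget.resize(16 * 1024 * 1024);   // 16 MiB, an arbitrary figure
    } catch (const std::bad_alloc&) {
        std::fputs("cannot obtain the planned memory budget, exiting\n", stderr);
        return 1;
    }

    // The pool hands out pieces of that buffer and never asks the OS for more;
    // null_memory_resource() makes exhaustion throw instead of growing further.
    std::pmr::monotonic_buffer_resource pool(
        budget.data(), budget.size(), std::pmr::null_memory_resource());

    std::pmr::unordered_set<int> components(&pool);  // draws from the pool
    components.insert(42);
    return 0;
}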
I have a program that uses services from others. If the program crashes, what is the best way to close those services? On the server side, I would define checkers that periodically monitor whether a client has become invalid. But can we do anything on the client side? I am not sure whether normal RAII still works in this case. My code is written in C and C++.
If your application experiences a hard crash, then no, your carefully crafted cleanup code will not run, whether it is part of an RAII paradigm or a method you call at the end of main. None of an application's cleanup code runs after a crash that causes the application to be terminated.
Of course, this is not true for exceptions. Although those might eventually cause the application to be terminated, they still trigger this termination in a controlled way. Generally, the runtime library will catch an unhandled exception and trigger termination. Along the way, your RAII-based cleanup code will be executed, unless it also throws an exception. Then you're back to being unceremoniously ripped out of memory.
But even if your application's cleanup code can't run, the operating system will still attempt to clean up after you. This solves the problem of unreleased memory, handles, and other system objects. In general, if you crash, you need not worry about releasing these things. Your application's state is inconsistent, so trying to execute a bunch of cleanup code will just lead to unpredictable and potentially erroneous behavior, not to mention wasting a bunch of time. Just crash and let the system deal with your mess. As Raymond Chen puts it:
The building is being demolished. Don't bother sweeping the floor and emptying the trash cans and erasing the whiteboards. And don't line up at the exit to the building so everybody can move their in/out magnet to out. All you're doing is making the demolition team wait for you to finish these pointless housecleaning tasks.
Do what you must; skip everything else.
The only problem with this approach is, as you suggest in this question, when you're managing resources that are not controlled by the operating system, such as a remote resource on another system. In that case, there is very little you can do. The best scenario is to make your application as robust as possible so that it doesn't crash, but even that is not a perfect solution. Consider what happens when the power is lost, e.g. because a user's cat pulled the cord from the wall. No cleanup code could possibly run then, so even if your application never crashes, there may be termination events that are outside of your control. Therefore, your external resources must be robust in the event of failure. Time-outs are a standard method, and a much better solution than polling.
Another possible solution, depending on the particular use case, is to run consistency-checking and cleanup code at application initialization. This might be something that you would do for a service that is intended to run continuously and will be restarted promptly after termination. The next time it restarts, it checks its data and/or external resources for consistency, releases and/or re-initializes them as necessary, and then continues on as normal. Obviously this is a bad solution for a typical application, because there is no guarantee that the user will relaunch it in a timely manner.
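As a very rough illustration (the marker-file name and the recovery step are invented for the example), such a service might do something like this before entering its main loop:

#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

// Hypothetical marker file written while the service holds external resources.
const fs::path kDirtyMarker = "myservice.dirty";

void recover_if_previous_run_crashed() {
    if (fs::exists(kDirtyMarker)) {
        // The previous instance never shut down cleanly: re-check external
        // resources, remove temp files, roll back partial work, and so on.
        std::cout << "previous run ended abnormally, running recovery\n";
        fs::remove(kDirtyMarker);
    }
    std::ofstream(kDirtyMarker).put('x');   // mark this run as "in progress"
    // On a clean shutdown the service removes the marker again.
}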
As the other answers make clear, hoping to clean up after an uncontrolled crash (i.e., a failure which doesn't trigger the C++ exception unwind mechanism) is probably a path to nowhere. Even if you cover some cases, there will be other cases that fail and you are building in a serious vulnerability to those cases.
You mention that the source of the crashes is that you are "us[ing] services from others". I take this to mean that you are running untrusted code in-process, which is the likely source of crashes. In this case, you might consider running the untrusted code out of process and communicating back to your main process through a pipe or shared memory or whatever. Then you isolate the crashes to this child process, and can do controlled cleanup in your main process. A separate process is really the lightest-weight thing you can do that gives you the strong isolation you need to avoid corruption in the calling code.
If forking a process per-call is performance-prohibitive, you can try to keep the child process alive for multiple calls.
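Here is a minimal POSIX-style sketch of one call routed through a child process; the doubling "work" stands in for the untrusted code, and the error handling is deliberately simplified:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Run one untrusted call in a child process so that a crash there cannot
// corrupt the parent; the result comes back over a pipe.
bool call_untrusted(int input, int& result) {
    int fds[2];
    if (pipe(fds) != 0) return false;

    pid_t pid = fork();
    if (pid < 0) return false;
    if (pid == 0) {                        // child: do the risky work
        close(fds[0]);
        int r = input * 2;                 // stand-in for the untrusted code
        write(fds[1], &r, sizeof r);
        _exit(0);
    }

    close(fds[1]);
    bool got = read(fds[0], &result, sizeof result) == (ssize_t)sizeof result;
    close(fds[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    // A crash (e.g. SIGSEGV) in the child shows up here instead of killing us.
    return got && WIFEXITED(status) && WEXITSTATUS(status) == 0;
}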
One approach would be for your program to have two modes: normal operation and monitoring.
When started in the usual way, it would:
Act as a background monitor.
Launch a subprocess of itself, passing it an internal argument (something that wouldn't clash with normal arguments passed to it, if any).
When the subprocess exits, it would release any resources held at the server.
When started with the internal argument, it would:
Expose the user interface and "act normally", using the resources of the server.
You might look into atexit, which may give you the functionality you need to release resources upon program termination. I don't believe it is infallible, though.
Having said that, however, you should really be focusing on making sure your program doesn't crash; even if you're hitting an error that is "unrecoverable", you should still invest in some error-handling code. If the error is caused by a seg-fault or some other similar OS-level error, you can either enable SEH exceptions (a Windows-specific mechanism) so that you can catch them with a normal try-catch block, or write some signal handlers to intercept those errors and deal with them.
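A minimal sketch of the signal-handler route (POSIX signals rather than SEH); note that very little is safe to do inside such a handler beyond writing a note and terminating:

#include <csignal>
#include <unistd.h>

extern "C" void on_fatal_signal(int) {
    // Only async-signal-safe calls are allowed here: write a note and leave.
    const char msg[] = "fatal signal, terminating\n";
    (void)write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);                    // no destructors, no atexit handlers
}

int main() {
    std::signal(SIGSEGV, on_fatal_signal);   // seg-faults land here instead of a plain crash
    // ... rest of the program ...
}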
On the cppreference page for abort, we have:
Destructors of variables with automatic, thread local and static storage durations are not called. Functions passed to std::atexit() are also not called. Whether open resources such as files are closed is implementation defined.
I'm a bit confused by the apparent contradiction between the idea that abort "closes" my program and the description of the function, which says that destructors may not be called and open resources may not be closed. So, is it possible that my program remains running, with memory leaked or resources still open, after a call to abort()?
It's like killing a person. They won't have a chance to pay any outstanding bills, put their estate in order, clean their apartment, etc.
Whether any of this happens is up to their relatives or other third parties.
So, usually things like open files will be closed and no memory will be leaked, because the OS takes care of this (much as the authorities will eventually clear out the apartment). There are some platforms where this won't happen, such as 16-bit Windows or embedded systems, but under modern Windows or Linux systems it will be okay.
However, what definitely won't happen is that destructors are run. This would be like asking the to-be-killed person to write one last entry in their diary and seal it: only the person knows how to do it, and they can't when you kill them without warning. So if anything important was supposed to happen in a destructor, it can be a problem, though usually not a dramatic one; for example, the program may have created a temporary file somewhere that it would normally delete on exit, and now it can't, so the file stays behind.
Still, your program will be closed and not running anymore. It just won't get a chance to clean up things and is therefore depending on the OS to do the right thing and clean up the resources used by it.
Functions passed to std::atexit() are also not called. Whether open resources such as files are closed is implementation defined.
This means the implementation gets to decide what happens. On any common consumer operating system, most objects associated with your process get destroyed when your process exits. That means you won't leak memory that was allocated with new, for example, or open files.
There might be uncommon kinds of objects that aren't freed - for example, if you have a shared memory block, it might stay around in case another process tries to access it. Or if you created a temporary file, intending to delete it later, now the file will stay there because your program isn't around to delete it.
On Unix, calling abort() effectively delivers a SIGABRT signal to your process. The default behavior of the kernel when that signal is delivered is to shut down your process, possibly leaving behind a core file, and to close any open descriptors. Your process's thread of control is completely removed. Note that this all happens outside any notion of C++ (or any other language), which is why it is considered implementation defined.
If you wish to change the default behavior, you would need to install a signal handler to catch the SIGABRT.
I am refactoring an old code, and one of the things I'd like to address is the way that errors are handled. I'm well aware of exceptions and how they work, but I'm not entirely sure they're the best solution for the situations I'm trying to handle.
In this code, if things don't validate, there's really no reason or advantage to unwind the stack. We're done. There's no point in trying to save the ship, because it's non-interactive code that runs in parallel through the Sun Grid Engine. The user can't intervene. What's more, these validation failures don't really represent exceptional circumstances. They're expected.
So how do I best deal with this? One thing I'm not sure I want is an exit point in every class method that can fail. That seems unmaintainable. Am I wrong? Is it acceptable practice to just call exit() or abort() at the failure point in code like this? Or should I throw an exception all the way back to some generic catch statement in main? What's the advantage?
Throwing an exception to be caught in main and then exiting means your RAII resource objects get cleaned up. On most systems this isn't needed for a lot of resource types. The OS will clean up memory, file handles, etc. (though I've used a system where failing to free memory meant it remained allocated until system restart, so leaking on program exit wasn't a good idea.)
But there are other resource types that you may want to release cleanly such as network or database connections, or a mechanical device you're driving and need to shut down safely. If an application uses a lot of such things then you may prefer to throw an exception to unwind the stack back to main, and then exit.
So the appropriate method of exiting depends on the application. If an application knows it's safe, then calling _Exit(), abort(), exit(), or quick_exit() may be perfectly reasonable. (Library code shouldn't call these, since obviously the library has no idea whether it's safe for every application that will ever use the library.) If there is some critical cleanup that must be performed before an application exits, but you know it's limited, then the application can register that cleanup code via atexit() or at_quick_exit().
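As a small sketch of registering that limited cleanup (the function names are made up for the example):

#include <cstdio>
#include <cstdlib>

// Illustrative clean-up hooks; the names are invented.
void release_device()  { std::puts("device shut down safely"); }
void flush_small_log() { std::puts("log flushed"); }

int main() {
    std::atexit(release_device);           // runs on exit() or a return from main
    std::at_quick_exit(flush_small_log);   // runs on quick_exit() only

    // On a fatal-but-understood error elsewhere:
    //   std::exit(1);        -> static destructors and atexit handlers run
    //   std::quick_exit(1);  -> only at_quick_exit handlers run
    //   std::abort();        -> neither runs
    return 0;
}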
So basically decide what you need cleaned up, document it, implement it, and try to make sure it's tested.
It is acceptable to terminate the program if it cannot handle the error gracefully. There are a few things you can do:
Call abort() if you need a core dump.
Call exit() if you want to give the routines registered with atexit() a chance to run (which will most likely also run the destructors of global C++ objects).
Call _exit() to terminate a process immediately.
There is nothing wrong with using those functions as long as you understand what you are doing, know your other choices, and choose that path willingly. After all, that's why those functions exist. So if you don't think it makes any sense to try to handle the error or do anything else when it happens - go ahead. What I would probably do is try to log some informative message (say, to syslog), and call _exit. If logging fails - call abort to get a core along the termination.
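Something along these lines (a POSIX-flavored sketch; the program name and message are placeholders). If writing the log entry itself were failing, falling back to abort() at least leaves a core dump behind:

#include <syslog.h>
#include <unistd.h>

// "Log something useful, then leave as fast as possible."
[[noreturn]] void fatal(const char* what) {
    openlog("myapp", LOG_PID, LOG_USER);             // program name is a placeholder
    syslog(LOG_CRIT, "unrecoverable error: %s", what);
    closelog();
    _exit(1);    // no destructors, no atexit handlers, nothing flushed
}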
I'd suggest calling a global function
void stopProgram() {
exit(1);
}
Later you can change its behavior, so it is maintainable.
As you pointed out, having exit or abort calls scattered throughout your code is not maintainable. Additionally, there may be a mechanism in the future that allows you to recover from an error, or to handle it more gracefully than simply exiting, and if you've already hard-coded termination in, that would be very hard to undo.
Throwing an exception that is caught in main() is your best bet at this point, and it will also give you flexibility in the future should you run the code under a different scenario that allows you to recover from errors, or handle them differently. Additionally, throwing exceptions can help should you decide to add more debugging support, since it gives you isolated, maintainable spots in the software to implement logging and record the program state before you decide to let the program exit.
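A minimal sketch of that shape (the ValidationError type and validate_input function are invented for illustration):

#include <cstdio>
#include <stdexcept>

// Hypothetical validation failure, handled exactly once at the top level.
struct ValidationError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

void validate_input(int record_count) {
    if (record_count <= 0)
        throw ValidationError("no records to process");   // we're done; unwind
}

int main() {
    try {
        validate_input(0);
        // ... the rest of the pipeline ...
    } catch (const ValidationError& e) {
        std::fprintf(stderr, "fatal: %s\n", e.what());     // single exit point for logging
        return 1;                                          // stack already unwound, RAII ran
    }
    return 0;
}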
I have a few "global" constructs that are allocated with new and stay alive for the entirety of the application's lifespan.
Should I bother calling delete on those pointers just before the application finishes?
Doesn't all of the application's memory get reclaimed when it closes anyway?
Edit for clarity: I am only talking about not calling delete on lifetime objects that "die" right as the program is closing.
Technically, yes, the memory is reclaimed. But unless you delete them, the destructors of those objects are not run and their side effects are not applied. This might mean a temporary file that isn't deleted or a database change that isn't committed, depending on what those destructors were meant to do.
Also, don't forget Murphy. Right now the code managing those objects is used as you describe (the objects persist for the life of the program), but later you might want to reuse the code so that it runs multiple times. Unless it can handle recreating the objects properly, it will leak them.
It is always good practice to clean up everything. Although the memory is reclaimed, these objects might have other resources allocated (shared memory, semaphores, etc.) that should be cleaned up, probably by the objects' destructors.
If you do not want to call delete, use shared pointers to hold these resources, so that they are cleaned up correctly when the application exits.
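For example, something like this (the IpcResources type is hypothetical): a smart pointer with static storage duration releases the resource on a normal exit without any explicit delete in the shutdown path:

#include <memory>

// Hypothetical wrapper around a non-memory resource (shared memory, semaphores, ...).
struct IpcResources {
    IpcResources()  { /* acquire shared memory / semaphores */ }
    ~IpcResources() { /* release them */ }
};

// A smart pointer with static storage duration: its destructor runs when the
// program returns from main() or calls std::exit(), so the resource is released
// without an explicit delete anywhere. (It still won't run on abort() or a hard crash.)
std::shared_ptr<IpcResources> g_ipc = std::make_shared<IpcResources>();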
How are you testing your application? Not cleaning up might hinder development of a decent test harness. Tests of the application might want a way of spoofing a shutdown and restart.
There is more to cleaning up than simply releasing memory.
No, don't write/debug/maintain code to do something that the OS is already very good at.
Unless there are specific reasons to the contrary (e.g. outstanding transactions to be committed, files to flush, connections to be closed), I don't bother writing code to do something that the OS is going to do anyway. If a dtor does nothing special, why bother calling it?
Many developers put in a lot of effort into deleting/destroying/freeing/terminating stuff at app close time - a load of effort to avoid some spurious 'leak report' on app shutdown from a memory manager that is itself about to be destroyed.
I think you're probably right, but I personally would consider it poor coding and bad practice to rely on the system, and would ensure my code always tidies up properly when shutting down.
There is no one right answer. Most of the time, it probably doesn't matter, but there are destructors which do something important, beyond just freeing memory (I have one which deletes temporary files), which argues in favor of cleanup; on the other hand, destructing such objects may lead to order-of-destruction issues, if the objects are used by destructors of other objects. My general rule is to not destruct, unless the destructor does something more than just free memory, but others may prefer a different set of defaults.
In addition to the destructors not being executed (as sharptooth pointed out), it's also worthwhile to delete global objects to make memory checkers happy. Especially if your code is in a shared library, you don't want to clutter its users' memory-checker (say, Valgrind) output just because you didn't delete properly.
...then there are those cases where you definitely don't want the dtors called at all before the OS terminates the process, e.g.:
1) When the dtor does not work properly because it tries to terminate a thread, fails, and blocks on the thread handle or other signal (the perennial 'join/waitFor' deadlock) - the cause of 99% of all household 'my app will not close down cleanly' posts.
2) When the dtor does not work properly because it's just bad anyway and buried in a library.
3) Where the memory must outlive the process's threads, or else there will be segfaults/AVs on close (e.g. pools of buffer objects that threads may still be writing to at close time).
4) Any other 'special cases' where the destruction of the object/s has to be left to the OS.
There are so many 'special cases' that I regard 'cleaning up' shutdown code as the special case.