So I realize that I don't actually need to do what I'm about to explain, but I'm very picky about making sure my programs clean up everything before exiting, so I still want to do it...
I have a QApplication with a single-shot timer connected to its quit slot. (In the future this quit will really be generated from the UI on a user click, so the timer is just for debugging.) At first I was just allocating the qApp on the stack in the main function. The problem is that in doing some research it seems the exec function does NOT HAVE to return. This means the main function's stack does not get cleaned up (or at least not until the program exits and the system reclaims that memory...). So in valgrind I have some QCoreApplication::init() memory "issues". Once again, this is more me being picky than anything really affecting things...
Anyway, I decided to heap-allocate the QApplication and then try to delete it just before the program closes. I can do this for signals, but how about on the quit signal? I'm tied into the aboutToQuit signal, but I feel like that's not the right stage to blow away the qApp. So my question is: IS there a right place to delete the qApp, and if yes, where?
The problem is that in doing some research it seems the exec function does NOT HAVE to return.
Well, yeah, it doesn't "have" to return if your process is crashing and burning anyway, i.e. if you've called - directly or indirectly - std::terminate(), ::abort(), ::exit(), etc. Those library functions are used to quickly terminate the process, and then your problems aren't limited to the QApplication instance. Every object on the call stack, in every thread, will be leaked, and some of those objects you have neither access to nor any control of - the runtime and the libraries create them - and there's nothing you can do about it. The case of a non-returning exec() is an exception, not the normal way your program should be ending. As for what to do when exec() doesn't return: nothing. It's too late by then.
Hence - don't throw uncaught exceptions, don't call ::exit() or ::abort(), and don't worry about it. In every well-behaved Qt program, QCoreApplication::exec() returns.
Related
Let's say I compiled using gcc -Wl,--stack,4194304 (a 4 MB stack).
Next in my program I do something like char what_is_wrong_with_me[8000000];
This will result in a segmentation fault, but the weird thing is I have a working segv_handler: if I do something stupid like char *stupid = 0; *stupid = 'x'; it prints an error message.
My question is, how do I handle the out of stack space segfault as well?
You can handle this, but you've exhausted your primary stack, so you need to set up an alternate stack for your signal handler. You can do this with the sigaltstack syscall.
When installing your segfault handler with sigaction, you'll also need the SA_ONSTACK flag.
So, your process exhausted its allocated stack space (intentionally, in your case, but it doesn't matter whether it was intentional or not). As soon as an attempt to write the next stack frame into the unallocated page occurs, a SIGSEGV signal gets sent to the process.
An attempt is then made to invoke your installed signal handler. Now, let's remember that SIGSEGV is just like any other signal, and as you know, when a signal handler returns, the process continues to execute.
In other words, the signal handler gets invoked as if it were a function call, and when the function call returns, the original execution thread resumes running.
Of course, you know what needs to happen for a function call, right? A new call frame gets pushed on the stack, containing the return address, and a few other things. You know, so when the function call returns, the original execution thread resumes running where it left off (in case of a signal you also get an entire register dump in there, and the rest of the CPU state, but that's an irrelevant detail).
And now, perhaps, you can figure out, all by yourself, the answer to your own question, why your signal handler does not get invoked in this situation, when the stack space has been exhausted...
Say I have two C++ functions foo1() and foo2(), and I want to minimize the likelihood that foo1() starts execution but foo2() is not called due to some external event. I don't mind if neither is called, but foo2() must execute if foo1() was called. Both functions can be called consecutively and do not throw exceptions.
Is there any benefit / drawback to wrapping the functions in an object and calling both in the destructor? Would things change if the application was multi-threaded (say the parent thread crashes)? Are there any other options for ensuring foo2() is called so long as foo1() is called?
I thought having them in a destructor might help with e.g. SIGINT, though I learned SIGINT will stop execution immediately, even in the middle of the destructor.
Edit:
To clarify: both foo1() and foo2() will be abstracted away, so I'm not concerned about someone else calling them in the wrong order. My concern is solely related to crashes, exceptions, or other interruptions during the execution of the application (e.g. someone sending SIGINT, another thread crashing, etc.).
If another thread crashes (without relevant signal handler -> the whole application exits), there is not much you can do to guarantee that your application does something - it's up to what the OS does. And there are ALWAYS cases where the system will kill your app without your actual knowledge (e.g. a bug that causes "all" memory being used by your app and the OS "out of memory killer" killing your process).
The only time your destructor is guaranteed to be executed is if the object has been constructed and a C++ exception is thrown. Signals make no such guarantees, and continuing to execute [in the same thread] after, for example, SIGSEGV or SIGBUS is well into the "undefined" parts of the world. There's not much you can do about that: the SEGV typically means "you tried to do something to memory that doesn't exist [or that you can't access in the way you tried, e.g. write to code-memory]", and the processor will have aborted the current instruction. Attempting to continue where you were will either lead to the same instruction being executed again, or to the instruction being skipped [if you continue at the next instruction - and I'm ignoring the trouble of determining where that is for now]. And of course, there are situations where it's IMPOSSIBLE to continue even if you wanted to - say, for example, the stack pointer has been corrupted [restored from memory that was overwritten, etc.].
In short, don't spend much time trying to come up with something that tries to avoid these sorts of scenarios, because it's unlikely to work. Spend your time trying to come up with schemes where you don't need to know whether you completed something or not - for example transaction-based or "commit-based" programming, where you do some steps, then "commit" the work done so far, then do some further steps, and so on. Only work that has been "committed" is sure to be complete; uncommitted work is discarded next time around, so everything is either completely done or completely discarded.
Separating "sensitive" and "not sensitive" parts of your application into separate processes can be another way to achieve some more safety.
I've tried to read up on the difference between return EXIT_SUCCESS; from main() and calling exit(EXIT_SUCCESS) from anywhere, and the best resource I've found so far is this answer here on SO. However, there is one detail I'd like to have cleared up.
To me, the most compelling argument against exit() (as laid out in that post) is that no destructor is called on locally scoped objects. But what does this mean for other objects? What if I'm calling exit() from somewhere else, quite far away on the stack from main(), but in a block (even a method) that contains only that call and no variables? Will objects elsewhere on the stack still be destructed?
My use case is this:
I have an application that keeps prompting the user for input until the "quit" command is given (a text-based adventure game). The easiest way to accomplish that was to map "quit" to a method that simply calls exit(EXIT_SUCCESS). Of course, I could write it so that every action the user can take returns a boolean indicating whether the game should go on or not, and then just return false when I want to quit - but the only time I'd return anything but true is from this method - every other action method would then have to return true just because I wanted to avoid exit(). On the other hand, I create quite a lot of objects and allocate quite a lot of memory dynamically - all of that has to be taken care of by class destructors, so it is crucial that they do run.
What is best practice here? Is this a good case for exit(), or just as bad as in the main method?
You could throw an exception. An exception will safely unwind the stack and destroy objects in all the callers along the way:
if (command == "quit") {
    throw QuitGameException();
}
I'm not even gonna read that SO post, because I know what it says. Don't use exit(), so don't.
I know one reason to use exit() - if you're completely doomed anyway and there's no way you can exit nicely. In such case you will not exit with code zero. So, exit() with non-zero when you're about to crash anyway.
In every other case, create variables which let you leave main loops and exit main nice and sane, to clean-up all your memory. If you don't write code like this, you will e.g. never be able to detect all your memory leaks.
Will objects elsewhere on the stack still be destructed?
Nope, exit() does the following (in order):
Objects associated with the current thread with thread storage duration are destroyed (C++11 only).
Objects with static storage duration are destroyed (C++) and functions registered with atexit are called (if an unhandled exception is thrown, terminate is called).
All C streams (opened with functions in <cstdio>) are closed (and flushed, if buffered), and all files created with tmpfile are removed.
Control is returned to the host environment.
from: http://www.cplusplus.com/reference/cstdlib/exit/
exit() does not unwind the stack; the memory for the whole stack is simply freed, and the destructors of individual objects on the stack are not run. Using exit() is safe only when all objects with non-trivial destructors (those that deal with external resources) are allocated in static storage (i.e. global variables or locally scoped static variables). Most programs have file handles, socket connections, database handles, etc. that benefit from a more graceful shutdown. Note that dynamically allocated objects that do not deal with external resources do not necessarily need to be deallocated, because the program is about to terminate anyway.
exit() is a feature inherited from C, which does not have destructors, so clean-up of external resources can always be arranged with atexit(). In general it's very hard to use exit() safely in C++; instead, write your program in RAII style and throw an exception to terminate and run the clean-ups.
I have a Windows C++ console program, and if I don't call ReleaseDriver() at the end of my program, some pieces of hardware enter a bad state and can't be used again without rebooting.
I'd like to make sure ReleaseDriver() gets run even if the program exits abnormally, for example if I hit Ctrl+C or close the console window.
I can use signal() to create a signal handler for SIGINT. This works fine, although as the program ends it pops up an annoying error "An unhandled Win32 exception occurred...".
I don't know how to handle the case of the console window being closed, and (more importantly) I don't know how to handle exceptions caused by bad memory accesses etc.
Thanks for any help!
Under Windows, you can create an unhandled exception filter by calling SetUnhandledExceptionFilter(). Once done, any time an exception is generated that is not handled somewhere in your application, your handler will be called.
Your handler can be used to release resources, generate dump files (see MiniDumpWriteDump), or whatever you need to make sure gets done.
Note that there are many 'gotchas' surrounding how you write your exception handler function. In particular:
You cannot call any CRT function, such as new
You cannot perform any stack-based allocation
If you do anything in your handler which causes an exception, Windows will immediately terminate your application by ripping the bones out of its back. You get no further chances to shut down gracefully.
You can call many Windows API functions. But you can't sprintf, new, delete... In short, if it isn't a WINAPI function, it probably isn't safe.
Because of all of the above, it is advisable to make all the variables in your handler function static variables. You won't be able to use sprintf, so you will have to format strings ahead of time, during initialization. Just remember that the machine is in a very unstable state when your handler is called.
If I'm not mistaken, you can detect if the console is closed or the program is terminated with Ctrl+C with SetConsoleCtrlHandler:
#include <windows.h>

BOOL WINAPI CtrlHandler(DWORD ctrlType)
{
    // Invoked for Ctrl+C, Ctrl+Break, console close, logoff, and shutdown.
    MessageBox(NULL, "Program closed", "Message", MB_ICONEXCLAMATION | MB_OK);
    exit(0);
    return TRUE;  // not reached; exit() does not return
}

int main()
{
    SetConsoleCtrlHandler(CtrlHandler, TRUE);
    while (true);
}
If you are worried about exceptions, like bad_alloc, you can wrap the body of main in a try block. Catch std::exception&, which should ideally be the base class of all thrown exceptions, but you can also catch any C++ exception with catch (...). With those exceptions, though, not all is lost, and you should figure out what is being thrown and why.
Avoiding undefined behavior also helps. :)
You can't (guarantee code runs). You could lose power, then nothing will run. The L1 instruction cache of your CPU could get fried, then your code will fail in random ways.
The most sure way of running cleanup code is in a separate process that watches for exit of the first (just WaitForSingleObject on the process handle). A separate watchdog process is as close as you can get to a guarantee (but someone could still TerminateProcess your watchdog).
I currently have a program which has the following basic structure
main function
-- displays menu options to user
-- validates user input by passing it to a second function (input_validator)
-- if user selects option 1, run function 1, etc
function1,2,3,etc
-- input is requested from user and then validated by input_validator
-- if input_validator returns true, we know input is good
Here is my problem. I want to allow the user to quit at any point within the program by typing '0'. I planned on doing this with some basic code in input_validator (if input = 0, etc).
This would appear to be simple, but I have been told that using exit() will result in some resources never being released. I cannot simply do a 'break' either -- it will result in my program simply returning to the main function.
Any ideas?
One possibility would be to do it by throwing an exception that you catch in main, and when you catch it, you exit the program. The good point of throwing an exception is that it lets destructors run to clean up objects that have been created, which won't happen if you exit directly from elsewhere (e.g., by using exit()).
exit()
Terminates the process normally, performing the regular cleanup for terminating processes. First, all functions registered by calls to atexit are executed in the reverse order of their registration. Then, all streams are closed and the temporary files deleted, and finally control is returned to the host environment.
This hasn't been true for any kind of mainstream operating system for a long time. The OS ensures that all kernel resources are released, even if the program didn't explicitly do so. Calling abort() or exit() from anywhere in your code is fine.
exit(int exitCode) - defined in stdlib.h / cstdlib - you'd probably want exit(0); // normal termination.
exit() will not call your destructors, so you might want to consider using an exception handler instead.
If you have things like open but unflushed files, the OS will close the file handles; exit() flushes C streams, but data still buffered in C++ stream objects whose destructors never run is lost.
You have to design your menu system so that a status can be passed back to the previous method, unwinding until code in the main function is executed. Similar issues apply to back or previous screen buttons.
Taking a step back and looking at the Big Picture, the unwinding technique looks very similar to C++ exception handling strategy. I suggest using exceptions for cases that don't follow the normal flow of execution, such as main menu, and previous menu.
Try it out.