Should I always call Release on COM pointers when my program terminates? - c++

I know that modern Windows versions reclaim memory that was previously acquired with malloc, new and the like after program termination, but what about COM objects? Should I call obj->Release() on them at program exit, or will the system do this for me?
My guess is: it depends. For out-of-process COM I should probably always call Release(), but for in-process COM I think it really doesn't matter, because the COM objects die with the process anyway.

If you're in the process itself, then yes, you should: you might not know where the server is, and it could be out of proc. If you're in a DLL it becomes more complicated.
In a DLL you should, UNLESS you receive a DLL_PROCESS_DETACH notification, in which case you should do absolutely nothing and just let the application close. This notification arrives during process teardown, so it is too late to clean up at that point: the kernel may have already reclaimed the blocks you would call Release on.
Remember, as a DLL writer there is nothing you can do if the process exits ungracefully; you can only do what is reasonable to clean up after yourself in a graceful exit.
One easy solution is to use smart COM pointers everywhere; ATL and WRL have implementations (CComPtr and Microsoft::WRL::ComPtr) that work nicely and mean you don't have to worry about it for the most part. Even if these are stored statically, their destructors will be called before process teardown or DLL unload, releasing the interface at a time when it is still safe to do so.
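For instance, a minimal sketch with WRL's ComPtr (ATL's CComPtr works the same way); the FileOpenDialog coclass here is just a stock example:

```cpp
#include <objbase.h>
#include <shobjidl.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;
    {
        ComPtr<IFileOpenDialog> dialog;
        HRESULT hr = CoCreateInstance(CLSID_FileOpenDialog, nullptr,
                                      CLSCTX_INPROC_SERVER,
                                      IID_PPV_ARGS(&dialog));
        if (SUCCEEDED(hr))
        {
            // ... use the dialog ...
        }
    }   // ComPtr's destructor calls Release() here, while it is still safe
    CoUninitialize();
    return 0;
}
```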
So the short answer is: you should always call Release if it is safe to do so. However, there are times when it is not, and then you should most definitely NOT do anything.

Depending on the implementation of the underlying object, there may or may not be a penalty. The object may have state that persists beyond process shutdown. A persistent lock in a local database is the easiest example that comes to mind.
With that in mind, I say it's better to call Release just in case.

An in-process COM object will die with the process.
An out-of-process reference will be released on timeout.
Poorly designed servers and clients might remain in a bad state, holding pointers to objects (typically proxies) that are no longer available, unable to see that they are dead and unable to get rid of them.
It is always a good idea to release the pointers gracefully.
If you don't release COM pointers properly, and the COM activity included marshaling, you are very likely to get exceptions in CoUninitialize, which are annoying and can end up showing a process crash message to the user.

Related

Ending global threads in DLL during process shutdown

There is this multiplatform (Windows, Linux, Cygwin) dynamic library which is loaded at run time by a Cygwin executable. At some point during the normal workflow, the DLL allocates a pool of threads for its use. These threads are managed as global variables (reference counted), so when the client process goes to shut down, it starts releasing global objects, and the threads should be released too.
The issue, as I understand it, is that during process shutdown the loader lock is acquired, and further down the line the threads want to acquire the same lock, so we now have a deadlock.
Now my request for advice is: how can we make a nice shutdown?
The DLL has no init() or uninit() methods to be called. The client can at best be enhanced with some code before the end of main() (so this is before process shutdown).
If I detach the threads instead of joining them during the global variable cleanup, memory gets corrupted. If I terminate them, we get ugly process dumps.
Btw, under Linux I see no such problems.
The DLL is C++14 only; the client is C99 (Cygwin).
I tried to make the situation clear, but let me know if you have further questions. Thanks in advance for any ideas.
The fix is to add an uninit method to the DLL. It may not have one yet, but it needs one. You found out why: while the OS will call DllMain on DLL unload, it does so under loader lock. You need to do things that aren't possible under loader lock, so you need an extra call before DllMain. Naming that method uninit() is reasonable enough.
C++14 is not an issue here; this is an OS mechanism. Loader Lock has been around since ancient times.
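A rough sketch of that shape, with hypothetical names (MyLibInit/MyLibUninit); the point is that the threads get joined in an ordinary exported call, well before the loader lock comes into play:

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

namespace {
    std::atomic<bool> g_stop{false};
    std::vector<std::thread>* g_workers = nullptr;  // raw pointer: no static
                                                    // destructor runs in DllMain
}

extern "C" void MyLibInit()
{
    g_stop = false;
    g_workers = new std::vector<std::thread>;
    g_workers->emplace_back([] {
        while (!g_stop) {
            // ... background work ...
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });
}

extern "C" void MyLibUninit()
{
    // Called by the client before the end of main(), so the join happens
    // in ordinary code, never under the loader lock.
    if (!g_workers) return;
    g_stop = true;
    for (auto& t : *g_workers) t.join();
    delete g_workers;
    g_workers = nullptr;
}
```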
At a previous job I struggled with this issue for a pretty lengthy period. Ultimately it came down to only 2 possible solutions:
Clean up every single resource the thread has claim to, then TerminateThread. It's violent and ugly, but it works around the DLL_THREAD_DETACH issue, and I actually found it advised on the Internet.
If you have the luxury of getting advance notice prior to DLL_PROCESS_DETACH, clean up everything at that early point, including an orderly shutdown of your threads. Then by all means do absolutely nothing during DLL_PROCESS_DETACH itself; yes, don't even free any lingering heap objects, as you may be exposing yourself to deadlock or crash risks, and the process is going down and freeing all its resources anyway.
As an added note, I also learned to avoid at all costs having any global variables tied to the DLL lifetime. These will have their constructors and destructors executed in the DllMain context; needless to say more... If you need global singletons in a DLL, make sure to have manual control over their lifetimes (on both ends, so no auto-destruct smart pointers either).
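A minimal sketch of that manual-control pattern (the Logger type and function names are invented for illustration):

```cpp
// Nothing here is constructed or destructed in DllMain context.
class Logger { /* ... */ };

static Logger* g_logger = nullptr;   // plain pointer: no auto-destruct at unload

void CreateLogger()  { if (!g_logger) g_logger = new Logger; }
void DestroyLogger() { delete g_logger; g_logger = nullptr; }
```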

Can (Should?) a native shared library which creates its own threads support the using process exiting 'without warning'?

I work on a product that's usually built as a shared library.
The using application will load it, create some handles, use them, and eventually free all the handles and unload the library.
The library creates some background threads which are usually stopped at the point the handles are freed.
Now, the issue is that some consuming applications aren't super well-behaved, and will fail to free the handles in some cases (cancellation, errors, etc). Eventually, static destructors in our library run, and crash when they try to interact with the (now dead) background thread(s).
One possibility is to not have any global objects with destructors, and so to avoid running any code in the library during static destruction. This would probably solve the crash on process exit, but it would introduce leaks and crashes in the scenario where the application simply unloads the library without freeing the handles (as opposed to exiting), as we wouldn't ensure that the background threads are actually stopped before the code they were running was unloaded.
More importantly, to my knowledge, when main() exits, all other threads will be killed, wherever they happened to be at the time, which could leave locks locked, and invariants broken (for example, within the heap manager).
Given that, does it even make sense to try and support these buggy applications?
Yes, your library should allow the process to exit without warning. Perhaps in an ideal world every program using your library would carefully track the handles and free them all when it exits for any reason, but in practice this isn't a realistic requirement. The code path that is triggering the program exit might be a shared component that isn't even aware that your library is in use!
In any case, it is likely that your current architecture has a more general problem, because it is inherently unsafe for static destructors to interact with other threads.
From DllMain entry point in MSDN:
Because DLL notifications are serialized, entry-point functions should not attempt to communicate with other threads or processes. Deadlocks may occur as a result.
and
If your DLL is linked with the C run-time library (CRT), the entry point provided by the CRT calls the constructors and destructors for global and static C++ objects. Therefore, these restrictions for DllMain also apply to constructors and destructors and any code that is called from them.
In particular, if your destructors attempt to wait for your threads to exit, that is almost certain to deadlock in the case where the library is explicitly unloaded while the threads are still running. If the destructors don't wait, the process will crash when the code the threads are running disappears. I'm not sure why you aren't seeing that problem already; perhaps you are terminating the threads? (That's not safe either, although for different reasons.)
There are a number of ways to resolve this problem. Probably the simplest is the one you already mentioned:
One possibility is to not have any global objects with destructors, and so to avoid running any code in the library during static destruction.
You go on to say:
[...] but it would introduce leaks and crashes in the scenario where the application simply unloads the library without freeing the handles [...]
That's not your problem! The library will only be unloaded if the application explicitly chooses to do so; obviously, and unlike the earlier scenario, the code in question knows your library is present, so it is perfectly reasonable for you to require that it close all your handles before doing so.
Ideally, however, you would provide an uninitialization function that closes all the handles automatically, rather than requiring the application to close each handle individually. Explicit initialization and uninitialization functions also allow you to safely set up and free global resources, which is usually more efficient than doing all of your setup and teardown on a per-handle basis and is certainly safer than using global objects.
(See the link above for a full description of all the restrictions applicable to static constructors and destructors; they are quite extensive. Constructing all your globals in an explicit initialization routine, and destroying them in an explicit uninitialization routine, avoids the whole messy business.)
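As a sketch of that idea, with hypothetical names and handle shape: the library keeps a registry of live handles so the uninitialization function can close whatever the application forgot:

```cpp
#include <mutex>
#include <unordered_set>

struct MyHandle { /* per-handle state, background thread, etc. */ };

namespace {
    std::mutex g_lock;
    std::unordered_set<MyHandle*> g_handles;   // every handle still open
}

MyHandle* MyLibOpen()
{
    auto* h = new MyHandle;
    std::lock_guard<std::mutex> guard(g_lock);
    g_handles.insert(h);
    return h;
}

void MyLibClose(MyHandle* h)
{
    {
        std::lock_guard<std::mutex> guard(g_lock);
        g_handles.erase(h);
    }
    delete h;   // stops the handle's background work, then frees it
}

void MyLibUninitialize()
{
    // Close every handle the application failed to close, then free
    // global resources. Called explicitly, never from DllMain.
    std::lock_guard<std::mutex> guard(g_lock);
    for (MyHandle* h : g_handles) delete h;
    g_handles.clear();
}
```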

How to release resources if a program crashes

I have a program that uses services from others. If the program crashes, what is the best way to close those services? On the server side, I would define some checkers that periodically monitor whether a client has become invalid. But can we do anything on the client side? I am not sure whether normal RAII still works in this case. My code is written in C and C++.
If your application experiences a hard crash, then no, your carefully crafted cleanup code will not run, whether it is part of an RAII paradigm or a method you call at the end of main. None of an application's cleanup code runs after a crash that causes the application to be terminated.
Of course, this is not true for exceptions. Although those might eventually cause the application to be terminated, they still trigger this termination in a controlled way. Generally, the runtime library will catch an unhandled exception and trigger termination. Along the way, your RAII-based cleanup code will be executed, unless it also throws an exception. Then you're back to being unceremoniously ripped out of memory.
But even if your application's cleanup code can't run, the operating system will still attempt to clean up after you. This solves the problem of unreleased memory, handles, and other system objects. In general, if you crash, you need not worry about releasing these things. Your application's state is inconsistent, so trying to execute a bunch of cleanup code will just lead to unpredictable and potentially erroneous behavior, not to mention wasting a bunch of time. Just crash and let the system deal with your mess. As Raymond Chen puts it:
The building is being demolished. Don't bother sweeping the floor and emptying the trash cans and erasing the whiteboards. And don't line up at the exit to the building so everybody can move their in/out magnet to out. All you're doing is making the demolition team wait for you to finish these pointless housecleaning tasks.
Do what you must; skip everything else.
The only problem with this approach is, as you suggest in this question, when you're managing resources that are not controlled by the operating system, such as a remote resource on another system. In that case, there is very little you can do. The best scenario is to make your application as robust as possible so that it doesn't crash, but even that is not a perfect solution. Consider what happens when the power is lost, e.g. because a user's cat pulled the cord from the wall. No cleanup code could possibly run then, so even if your application never crashes, there may be termination events that are outside of your control. Therefore, your external resources must be robust in the event of failure. Time-outs are a standard method, and a much better solution than polling.
Another possible solution, depending on the particular use case, is to run consistency-checking and cleanup code at application initialization. This might be something that you would do for a service that is intended to run continuously and will be restarted promptly after termination. The next time it restarts, it checks its data and/or external resources for consistency, releases and/or re-initializes them as necessary, and then continues on as normal. Obviously this is a bad solution for a typical application, because there is no guarantee that the user will relaunch it in a timely manner.
As the other answers make clear, hoping to clean up after an uncontrolled crash (i.e., a failure which doesn't trigger the C++ exception unwind mechanism) is probably a path to nowhere. Even if you cover some cases, there will be other cases that fail and you are building in a serious vulnerability to those cases.
You mention that the source of the crashes is that you are "us[ing] services from others". I take this to mean that you are running untrusted code in-process, which is the potential source of crashes. In this case, you might consider running the untrusted code out of process and communicating back to your main process through a pipe or shared memory or whatever. Then you isolate the crashes to this child process, and can do controlled cleanup in your main process. A separate process is really the lightest-weight thing you can do that gives you the strong isolation you need to avoid corruption in the calling code.
If forking a process per-call is performance-prohibitive, you can try to keep the child process alive for multiple calls.
One approach would be for your program to have two modes: normal operation and monitoring.
When started in the usual way, it would:
Act as a background monitor.
Launch a subprocess of itself, passing it an internal argument (something that wouldn't clash with normal arguments passed to it, if any).
When the subprocess exits, release any resources held at the server.
When started with the internal argument, it would:
Expose the user interface and "act normally", using the resources of the server.
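A rough POSIX sketch of this two-mode scheme; the flag name and the run_application/release_server_resources helpers are invented for illustration:

```cpp
#include <cstring>
#include <sys/wait.h>
#include <unistd.h>

int run_application() { /* ... normal UI / workflow ... */ return 0; }
void release_server_resources() { /* ... tell the server we're done ... */ }

int main(int argc, char** argv)
{
    if (argc > 1 && std::strcmp(argv[1], "--internal-worker") == 0)
        return run_application();          // worker mode: act normally

    // Monitor mode: relaunch ourselves as the worker and wait for it.
    pid_t child = fork();
    if (child == 0) {
        execl(argv[0], argv[0], "--internal-worker", (char*)nullptr);
        _exit(127);                        // exec failed
    }
    int status = 0;
    waitpid(child, &status, 0);            // returns even if the worker crashed
    release_server_resources();            // cleanup happens either way
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```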
You might look into atexit, which may give you the functionality you need to release resources upon program termination. I don't believe it is infallible, though.
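A minimal sketch (the cleanup function is hypothetical); note that atexit handlers run on exit() or a normal return from main, but not after a hard crash:

```cpp
#include <cstdlib>

void release_remote_services()
{
    // ... notify the server so it can free what it holds for us ...
}

int main()
{
    std::atexit(release_remote_services);  // runs on exit()/return from main,
                                           // NOT after a hard crash
    // ... program ...
    return 0;
}
```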
Having said that, however, you should really focus on making sure your program doesn't crash; even if you're hitting an error that is "unrecoverable", you should still invest in some error-handling code. If the error is caused by a segfault or some other similar OS-level error, you can either enable SEH exceptions (not sure if this is Windows-specific or not) so you can catch them with a normal try-catch block, or write some signal handlers to intercept those errors and deal with them.

C++ graceful shutdown best practices

I'm writing a multi-threaded c++ application for *nix operating systems. What are some best practices for terminating such an application gracefully? My instinct is that I'd want to install a signal handler on SIGINT (SIGTERM?) which stops/joins my threads. Also, is it possible to "guarantee" that all destructors are called (provided no other errors or exceptions are thrown while handling the signal)?
Some considerations come to mind:
designate 1 thread to be responsible for orchestrating the shutdown, eg, as Dithermaster suggested, this could be the main thread if you are writing a standalone application. Or if you are writing a library, provide an interface (eg function call) whereby a client program can terminate the objects created within the library.
you cannot guarantee destructors are called; that is up to you, and requires carefully calling delete for each new. Maybe smart pointers will help you. But, really, this is a design consideration. The major components should have start & stop semantics, which you could choose to invoke from the class constructor & destructor.
the shutdown sequence for a set of interacting objects is something that can require some effort to get correct. E.g., before you delete an object, are you sure some timer mechanism is not going to try calling it a few micro/milliseconds later? Trial and error is your friend here; develop a framework which can repeatedly and rapidly start and stop your application to tease out shutdown-related race conditions.
signals are one way to trigger an event; others might be periodically polling for a known file, or opening a socket and receiving some data on it. Either way, you want to decouple the shutdown sequence code from the trigger event.
My recommendation is that the main thread shut down all worker threads before exiting itself. Send each worker an event telling it to clean up and exit, and wait for each one to do so. This will allow all C++ destructors to run.
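A minimal sketch of that recommendation, using an atomic flag as the "clean up and exit" event (a condition variable or message queue would work just as well):

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

int main()
{
    std::atomic<bool> stop{false};
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&stop] {
            while (!stop.load()) {
                // ... do work; check the flag at safe points ...
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
            // worker-local objects are destroyed normally here
        });

    // ... run until shutdown is requested ...

    stop = true;                        // tell each worker to exit
    for (auto& t : workers) t.join();   // wait for each one
    return 0;                           // all C++ destructors have run
}
```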
Regarding signal management, the only thing you can portably and safely do inside a signal handler is to write to a variable of type sig_atomic_t (possibly volatile-qualified) and return. In general, you cannot call most functions and must not write to global memory. In other words, the handler should just set a flag to be tested inside your main routine, at some point you find appropriate, and the action resulting from the signal itself should be performed from there.
(Since there might be blocking I/O involved, consider studying POSIX Thread Cancellation. Your Unix clone (most notably Linux) might have peculiarities with respect to this and to the above.)
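A minimal sketch of the flag-setting pattern just described:

```cpp
#include <csignal>

volatile std::sig_atomic_t g_shutdown = 0;

// The handler does nothing but set the flag: that is all that is
// portably safe inside a signal handler.
extern "C" void on_signal(int) { g_shutdown = 1; }

int main()
{
    std::signal(SIGINT,  on_signal);
    std::signal(SIGTERM, on_signal);

    while (!g_shutdown) {
        // ... main loop; test the flag at points you find appropriate ...
    }
    // Perform the real shutdown here: stop/join threads, run destructors.
    return 0;
}
```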
Regarding destructors, no magic is involved. They will be executed if control leaves a given scope through any means defined in the language. Leaving a scope through other means (for example, longjmp() or even exit()) does not trigger destructors.
Regarding general shutdown practices, there are divergent opinions on the field.
Some state that a "graceful termination", in the sense of releasing every resource ever allocated, should be performed. In C++, this usually means that all destructors should be properly executed before the process terminates. This is tricky in practice and often a source of much grief, especially in multithreaded programs, for a variety of reasons. Signals further complicate things by the very nature of asynchronous signal dispatching.
Because most of this work is totally useless, some others, like me, contend that the program should just terminate immediately, possibly shortly after undoing persistent changes to the system (like removing temporary files or restoring the screen resolution) and saving configuration. An apparently tidier cleanup is not only a waste of time (because the operating system will clean up most things, like allocated memory, dangling threads and open file descriptors), but might be a serious waste of time (deallocators might touch paged-out memory, uselessly forcing the system to page it in just to release it, when the process is about to terminate anyway), not to mention the possibility of deadlocks originating from joining threads.
Just say no. When you want to leave, call exit() (or even _exit(), but watch out for unflushed I/O) and that's it. More annoying than slow starting programs are slow terminating programs.

Should I bother deleting pointers to application lifetime variables? - c++

I have a few "global" constructs that are allocated with new and are alive for the entirety of the application's life span.
Should I bother calling delete on the pointers just before the application finishes?
Doesn't all of the application's memory get reclaimed after it closes anyway?
Edit for clarity: I am only talking about not calling delete for lifetime objects that "die" right as the program is closing.
Technically, yes, the memory is reclaimed. But unless you use delete, the destructors of those objects are not run and their side effects are not applied. This might leave a temporary file undeleted or a database change uncommitted, depending on what those destructors were meant to do.
Also, don't forget Murphy. Right now the code managing those objects is used as you describe (the objects persist for the life of the program), but later you might want to reuse the code so that it runs multiple times. Unless it can deal with recreating the objects properly, it will leak them.
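To make the temporary-file example concrete (a sketch; the file name is invented):

```cpp
#include <cstdio>

class TempFile {
    std::FILE* f_;
public:
    TempFile() : f_(std::fopen("scratch.tmp", "wb")) {}
    ~TempFile() {
        if (f_) std::fclose(f_);
        std::remove("scratch.tmp");   // the side effect we care about
    }
};

TempFile* g_tmp = new TempFile;   // application-lifetime object

int main()
{
    // If we return without 'delete g_tmp', the OS reclaims the memory and
    // the open file handle, but ~TempFile() never runs, so scratch.tmp is
    // left behind on disk.
    delete g_tmp;   // with this line, the file is removed as intended
    return 0;
}
```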
It is always good practice to clean up everything. Although the memory is reclaimed, these objects might have other resources allocated (shared memory, semaphores, etc.) that should be cleaned up, probably by the objects' destructors.
If you do not want to call delete use shared pointers to hold these resources, so that they are cleaned up correctly when the application exits.
How are you testing your application? Not cleaning up might hinder development of a decent test harness. Tests of the application might want a way of spoofing a shutdown and restart.
There is more to cleaning up than simply releasing memory.
No, don't write/debug/maintain code to do something that the OS is already very good at.
Unless there are specific reasons to the contrary (eg. outstanding transactions to be committed, files to flush, connections to be closed), I don't bother writing code to do something that the OS is going to do anyway. If a dtor does nothing special, why bother calling it?
Many developers put a lot of effort into deleting/destroying/freeing/terminating stuff at app close time: a load of effort to avoid some spurious 'leak report' on app shutdown from a memory manager that is itself about to be destroyed.
I think you're probably right, but I personally would consider it poor coding and bad practice to rely on the system, and would ensure my code always tidied up properly when shutting down.
There is no one right answer. Most of the time, it probably doesn't matter, but there are destructors which do something important beyond just freeing memory (I have one which deletes temporary files), which argues in favor of cleanup; on the other hand, destructing such objects may lead to order-of-destruction issues if the objects are used by the destructors of other objects. My general rule is to not destruct unless the destructor does something more than just free memory, but others may prefer a different set of defaults.
In addition to the destructors not being executed (as sharptooth pointed out), it's also worthwhile to delete global objects to make memory checkers happy. Especially if your code is in a shared library: you don't want to clutter their memory checker (say, Valgrind) output just because you didn't delete properly.
...then there are those cases where you definitely don't want the dtors called at all before the OS terminates the process, eg:
1) When the dtor does not work properly because it tries to terminate a thread, fails, and blocks on the thread handle or other signal (the perennial 'join/waitFor' deadlock), the cause of 99% of all household 'my app will not close down cleanly' posts.
2) When the dtor does not work properly because it's just bad anyway and buried in a library.
3) Where the memory must outlive the process threads else there will be segfaults/AV on close, (eg. pools of buffer objects that threads may well be writing to at close time).
4) Any other 'special cases' where the destruction of the object/s has to be left to the OS.
There are so many 'special cases' that I regard 'cleaning up' shutdown code as the special case.