Can an unreleased COM pointer to an external process (still alive) cause that process to hang on destruction?
Even with TerminateProcess called on it?
Process A holds a COM interface pointer into Process B. Now Process B issues a TerminateProcess on A. If some COM interface pointer to Process B inside Process A is not released properly, could that cause the process to hang on termination?
I want to know because I have a project where a child process hangs on killing, even though TerminateProcess is called when the normal close procedure fails. When it hangs on killing, it doesn't just hang itself, it also hangs its parent process, which is disastrous since this is running in a production environment. So I'm trying to find the places where it could go wrong.
No. TerminateProcess does just that -- completely destroys the process. Raymond Chen has a few words to say about that....
EDIT: He also has some more detailed articles describing exactly how process shutdown occurs. However, that is not related to TerminateProcess.
Well, yes, it is technically possible for TerminateProcess not to terminate the process. If there's a kernel thread executing an I/O request that never ends, then the process cannot exit. Easy to diagnose: you'll see the process in Taskmgr.exe's Processes tab with a handle count of one. Vista had a CancelIo improvement to fix this; I think Raymond talked about that too.
Which is only very remotely associated with COM. Grasping at straws: an out-of-process COM server doesn't deal well with TerminateProcess of a client, since Windows cannot automatically call Release() on the interface pointers. It will keep running forever, until somebody calls TerminateProcess on it -- usually the Windows shutdown code or TaskMgr.exe.
Do make sure to edit your question and explain why you even asked it.
Related
I debug console multithreaded application written in C++/Qt 5.12.1. It is running on Linux Mint 18.3 x64.
This app has a SIGINT handler, a QWebSocketServer and a QWebSocket table. To handle termination it calls close() on the QWebSocketServer and abort()/deleteLater() on the items in the QWebSocket table.
If a websocket client connects to this console app, then termination fails because of some running thread (I suppose it's an internal QWebSocket thread).
Termination is successful if there were no connections.
How to fix it? So that the app gracefully exits.
To quit the socket server gracefully we can attempt the following:
The most important part is to allow the main thread event loop to run and wait on QWebSocketServer::closed() so that the slot calls QCoreApplication::quit().
That can be done even with:
connect(webSocketServer, &QWebSocketServer::closed,
QCoreApplication::instance(), &QCoreApplication::quit);
if we don't need a more detailed reaction.
Having connected that signal first, call pauseAccepting() to prevent more connections.
Call QWebSocketServer::close().
The steps below may not be needed if the above is sufficient. Try the above first, and only if you still have problems deal with the existing and pending connections. In my experience the behavior varied across platforms and with some unique websocket implementations in the server environment (which for you is likely just Qt).
As long as we have some array of QWebSocket instances, we can call QWebSocket::abort() on all of them to release them immediately. This step seems to be the one the question author already describes.
Try to iterate the pending connections with QWebSocketServer::nextPendingConnection() and call abort() on them. Call deleteLater() as well, if that works. (A sketch combining these steps follows below.)
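A minimal sketch putting those steps together, assuming the caller can pass in its QWebSocketServer and its table of known QWebSocket connections (the function and parameter names are placeholders, not the asker's actual code):

#include <QCoreApplication>
#include <QList>
#include <QWebSocket>
#include <QWebSocketServer>

// Sketch: shut the websocket server down gracefully and let the main event
// loop quit once the server reports it is closed.
void shutdownGracefully(QWebSocketServer *webSocketServer, QList<QWebSocket *> &clients)
{
    // 1. Quit the application once the server signals that it has closed.
    QObject::connect(webSocketServer, &QWebSocketServer::closed,
                     QCoreApplication::instance(), &QCoreApplication::quit);

    // 2. Stop accepting new connections, then close the listening socket.
    webSocketServer->pauseAccepting();
    webSocketServer->close();

    // 3. Abort and schedule deletion of pending connections that were never fetched.
    while (QWebSocket *pending = webSocketServer->nextPendingConnection()) {
        pending->abort();
        pending->deleteLater();
    }

    // 4. Abort the connections we already track.
    for (QWebSocket *client : clients) {
        client->abort();
        client->deleteLater();
    }
    clients.clear();
}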
There is no need to do anything. What do you mean by "graceful exit"? As soon as there's a request to terminate your application, you should terminate it immediately using exit(0) or a similar mechanism. That's what "graceful exit" should be.
Note: I got reformed. I used to think that graceful exits were a good thing. They are usually a waste of CPU resources and usually indicate problems in the architecture of the application.
A good rationale for why it should be so is written in the kj framework (a part of capnproto).
Quoting Kenton Varda:
KJ_NORETURN(virtual void exit()) = 0;
Indicates program completion. The program is considered successful unless error() was
called. Typically this exits with _Exit(), meaning that the stack is not unwound, buffers
are not flushed, etc. -- it is the responsibility of the caller to flush any buffers that
matter. However, an alternate context implementation e.g. for unit testing purposes could
choose to throw an exception instead.
At first this approach may sound crazy. Isn't it much better to shut down cleanly? What if
you lose data? However, it turns out that if you look at each common class of program, _Exit()
is almost always preferable. Let's break it down:
Commands: A typical program you might run from the command line is single-threaded and
exits quickly and deterministically. Commands often use buffered I/O and need to flush
those buffers before exit. However, most of the work performed by destructors is not
flushing buffers, but rather freeing up memory, placing objects into freelists, and closing
file descriptors. All of this is irrelevant if the process is about to exit anyway, and
for a command that runs quickly, time wasted freeing heap space may make a real difference
in the overall runtime of a script. Meanwhile, it is usually easy to determine exactly what
resources need to be flushed before exit, and easy to tell if they are not being flushed
(because the command fails to produce the expected output). Therefore, it is reasonably
easy for commands to explicitly ensure all output is flushed before exiting, and it is
probably a good idea for them to do so anyway, because write failures should be detected
and handled. For commands, a good strategy is to allocate any objects that require clean
destruction on the stack, and allow them to go out of scope before the command exits.
Meanwhile, any resources which do not need to be cleaned up should be allocated as members
of the command's main class, whose destructor normally will not be called.
Interactive apps: Programs that interact with the user (whether they be graphical apps
with windows or console-based apps like emacs) generally exit only when the user asks them
to. Such applications may store large data structures in memory which need to be synced
to disk, such as documents or user preferences. However, relying on stack unwind or global
destructors as the mechanism for ensuring such syncing occurs is probably wrong. First of
all, it's 2013, and applications ought to be actively syncing changes to non-volatile
storage the moment those changes are made. Applications can crash at any time and a crash
should never lose data that is more than half a second old. Meanwhile, if a user actually
does try to close an application while unsaved changes exist, the application UI should
prompt the user to decide what to do. Such a UI mechanism is obviously too high level to
be implemented via destructors, so KJ's use of _Exit() shouldn't make a difference here.
Servers: A good server is fault-tolerant, prepared for the possibility that at any time
it could crash, the OS could decide to kill it off, or the machine it is running on could
just die. So, using _Exit() should be no problem. In fact, servers generally never even
call exit anyway; they are killed externally.
Batch jobs: A long-running batch job is something between a command and a server. It
probably knows exactly what needs to be flushed before exiting, and it probably should be
fault-tolerant.
I use Qt 4.8.6, MS Visual Studio 2008, Windows 7. I've created a GUI program. It contains the main GUI thread and a worker thread (I have not made a QThread subclass, by the way), which makes synchronous calls to 3rd party DLL functions. These functions are rather slow. The QTcpServer instance also lives in the worker thread. My worker class contains the QTcpServer and the DLL wrapper methods.
I know that quit() is preferred over terminate(), but I don't want to wait for a minute (because of the slow DLL functions) during program shutdown. When I try to terminate() the worker thread, I get warnings about stopping the QTcpServer from another thread. What is the correct way to shut the process down?
QThread::quit tells the thread's event loop to exit. After calling it, the thread will finish as soon as control returns to the thread's event loop.
You may also force a thread to terminate right now via QThread::terminate(), but this is a very bad practice, because it may terminate the thread at an undefined position in its code, which means you may end up with resources never getting freed up and other nasty stuff. So use this only if you really can't get around it.
So I think the right approach is to first tell the thread to quit normally, and if something goes wrong, takes too much time, and you have no way to wait for it, then terminate it:
QThread *th = myWorkerObject->thread();
th->quit();
th->wait(5000);        // wait a few seconds for it to quit
if (th->isRunning())   // it took longer than usual, so I have to terminate it
    th->terminate();
You should always try to avoid killing threads from the outside by force and instead ask them nicely to finish what they're doing. This usually means that the thread checks regularly if it should terminate itself and the outside world tells it to terminate when needed (by setting a flag, signaling an event or whatever is appropriate for the situation at hand).
When a thread is asked to terminate itself, it finishes up what it's doing and exits cleanly. The application waits for the thread to terminate and then exits.
You say that in your case the thread takes a long time to finish. You can take this into consideration and still terminate the thread "the nice way" (for example you can hide the application window and give the impression that the app has exited, even if the process takes a little more time until it finally terminates; or you can show some form of progress indication to the user telling him that the application is shutting down).
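One way to sketch that idea in Qt 4 syntax (matching the asker's Qt 4.8.6): hide the UI right away, but only quit once the worker thread has finished on its own. MainWindow::onCloseRequested and the m_workerThread member are placeholder names I've assumed, not the asker's actual objects, and the worker thread is assumed to be running its own event loop.

// Sketch: the app looks closed to the user while the worker winds down nicely.
void MainWindow::onCloseRequested()
{
    hide();                                                      // window disappears immediately
    connect(m_workerThread, SIGNAL(finished()), qApp, SLOT(quit()));
    m_workerThread->quit();                                      // ask nicely; no terminate()
}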
Unless there is an overriding reason to do so, you should not attempt to terminate threads with user code at process-termination.
If there is no such reason, just call your OS process termination syscall, e.g. ExitProcess(0). The OS can, and will, stop all process threads in any state before releasing all process resources. User code cannot do that, and should not try to terminate threads, or signal them to self-terminate, unless absolutely necessary.
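A minimal sketch of that approach (Win32; OnUserRequestedExit is a hypothetical name for whatever handler fields the quit request):

#include <windows.h>

// Hypothetical handler invoked when the user (or a controlling process) asks
// the application to exit. Flush or save whatever genuinely matters first,
// then let the OS stop every thread, in any state, and reclaim every resource.
void OnUserRequestedExit()
{
    // ... flush files / save documents that matter here ...
    ExitProcess(0);
}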
Attempting to 'clean up' with user code sounds 'nice' (apparently), but it is an expensive luxury that you will pay for with extra code, extra testing and extra maintenance.
That is, if your customers don't stop buying your app because they get pissed off with it taking so long to shut down.
The OS is very good at stopping threads and cleaning up. It's had endless thousands of hours of testing during development and decades of life in the wild where problems with process termination would have become apparent and been fixed. You will not even get close to that with your flags, events etc. as you struggle to stop threads running on another core without the benefit of an interprocessor driver.
There are surely times when you will have to resort to user code to stop threads. If you need to stop them before process termination, or you need to close some DB connection, flush some file at shutdown, deal with interprocess comms or similar issues, then you will have to resort to some of the approaches already suggested in other answers.
If not, don't try to duplicate OS functionality in the name of 'niceness'. Just ask it to terminate your process. You can get your warm, fuzzy feeling when your app shuts down immediately while other developers are still struggling to implement 'Shutdown' progress bars or trying to explain to customers why they have 15 zombie apps still running.
Based on this link, I am creating Windows Process Monitoring and Windows Service Monitoring DLLs that are called by the main application and run in threads (using boost::thread) to get the data asynchronously. Consider that both of these DLLs are run by my application. For one of my apps I get the error Failed to initialize security. Error code = 0x80010119. Also, when the threads for these DLLs are stopped, CoUninitialize is called in both of them, and here I get a crash. It might be because the CoUninitialize in the latter thread attempts to clear memory that was already cleared by the former thread.
If so, how can I check whether the CoUninitialize in one thread was successful, so that I would not call it in another thread?
CoUninitialize is a part of the COM library; WMI can use the COM interface, but has nothing to do with its initialization or disposal. Don't call CoUninitialize twice within one process. Note that both of your DLLs probably share the application context and are executed within it.
"how can I check whether the CoUninitialize in one thread was successful so that I would not call it in another thread."
That tells me your error, but it's not in CoUninitialize. You are assuming that CoUninitialize is process-wide. In reality, CoUninitialize mirrors CoInitializeEx, and that should be done on every thread. So your crash is almost certainly caused by a CoInitializeEx on one thread and the CoUninitialize on another thread.
The fix is to do CoInitializeEx on both threads, and CoUninitialize also on both threads.
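A minimal per-thread sketch of that fix (the function name is a placeholder and the WMI work itself is elided; the point is only that the two calls are balanced on the same thread, e.g. when launched via boost::thread):

#include <objbase.h>

// Entry point that each monitoring thread runs. CoInitializeEx and
// CoUninitialize must be balanced per thread, not per process.
void monitoringThreadBody()
{
    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
    if (FAILED(hr))
        return;

    // ... connect to WMI, run the queries, publish the results ...

    CoUninitialize();   // balances the CoInitializeEx made on *this* thread only
}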
I know that modern Windows versions reclaim memory that was previously acquired with malloc, new and the like, after program termination, but what about COM objects? Should I call obj->Release() on them on program's exit, or will the system do this for me?
My guess is: it depends. For out-of-process COM, I should probably always call Release(), but for in-process COM, I think it really doesn't matter, because the COM objects die after program termination anyway.
If you're in the process itself then yes, you should, as you might not know where the server is and the server could be out-of-proc. If you're in a DLL it becomes more complicated.
In a DLL you should, UNLESS you receive a DLL_PROCESS_DETACH notification, in which case you should do absolutely nothing and just let the application close. This is because this notification is called during process teardown, so it is too late to clean up at that point. The kernel may have already reclaimed the blocks you would call Release() on.
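A sketch of what that looks like in DllMain. Note that the lpReserved check is my addition, not part of the answer above: by the documented DllMain contract, a non-null lpReserved on DLL_PROCESS_DETACH means the process is terminating, which is exactly the case where you should do nothing.

#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID lpReserved)
{
    if (reason == DLL_PROCESS_DETACH)
    {
        if (lpReserved != NULL)
            return TRUE;    // process teardown: do absolutely nothing, the kernel cleans up
        // FreeLibrary with the process still alive: releasing interface
        // pointers and other cleanup is still safe here.
    }
    return TRUE;
}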
Remember as a DLL writer there is nothing you can do if the process exits ungracefully, you can only do what you can within reason to clean up after yourself in a graceful exit.
One easy solution is to use smart COM pointers everywhere; the ATL and WRL have implementations that work nicely and make it so you don't have to worry about it for the most part. Even if these are stored statically, their destructors will be called before process teardown or DLL unload, releasing the pointers at a time when it is safe to do so.
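For instance, with ATL's CComPtr (a sketch only; WRL's ComPtr works the same way, and the interface used here is just an example I've picked, not something from the question):

#include <atlbase.h>     // CComPtr (ATL)
#include <shobjidl.h>    // IFileOpenDialog, used purely as an example interface

// Assumes COM has already been initialized on this thread.
void showOpenDialog()
{
    CComPtr<IFileOpenDialog> dialog;
    if (SUCCEEDED(dialog.CoCreateInstance(CLSID_FileOpenDialog)))
        dialog->Show(NULL);
    // dialog's destructor calls Release() when it goes out of scope,
    // so there is no raw pointer left to release too late (or not at all).
}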
So the short answer is: you should always call Release() if it is safe to do so. However, there are times when it is not, and then you should most definitely NOT do anything.
Depending on the implementation of the underlying object, there may or may not be a penalty. The object may have state that persists beyond process shutdown. A persistent lock in a local database is the easiest example that comes to mind.
With that in mind, I say it's better to call Release just in case.
An in-process COM object will die with the process.
An out-of-process reference will be released on timeout.
Poorly designed servers and clients might remain in a bad state, holding pointers to objects (typically proxies) that are no longer available, unable to see that they are dead and unable to get rid of them.
It is always a good idea to release the pointers gracefully.
If you don't release COM pointers properly, and the COM activity involved marshaling, you are very likely to get exceptions in CoUninitialize, which are annoying and/or can end up showing a process crash message to the user.
Unfortunately, MSDN is not clear enough about it. I'm writing a program which uses a global hook, and I'm worried about what would happen if the program terminates abnormally (crashes, is killed by the user, etc.).
Does Windows automatically unhook global hooks installed by a process when the process terminates?
If not, is it possible to call UnhookWindowsHookEx() in another process to release the hook? (I'm thinking of doing this in a hooked thread, if it detects that the installer process is dead.)
If the answers were no and no, isn't it dangerous to leave a global hook active when the installer process is terminated? What are the standard methods of dealing with this situation?
I've read in MSDN that UnhookWindowsHookEx() doesn't free the dll loaded in other processes, but it doesn't say when will the dll be freed. This article in CodeProject seems to suggest that the dll is unmapped (in the respective process) when the first message arrives at the hooked thread, so it's about right after the UnhookWindowsHookEx() call. Is it true?
Thank you.
Yes, when a process terminates the system cleans up after it -- all handles are closed implicitly.
No, it's not, and you don't need to anyway.
(It's yes and no, not no and no.)
I don't see why there's a DLL loaded in another process involved here. (EDIT: I was originally thinking of a systemwide hook such as CBTProc -- if your hook is per-process that might be different.) If you're dealing with something like the link indicated in #Hans' comment, whereby you've injected your own DLL into the target process, then you should put the functionality to unload the hook inside your DLL, not tie its correct operation to your application. (I.e. if sending the message back to your application fails inside the DLL, then your DLL should decide to unload itself.) /EDIT When a DLL is loaded inside another process it's up to that process to do the freeing.
If your process dies, UnhookWindowsHookEx is called implicitly and your hooks are removed. The .dll is unloaded by the message processing code after a new message is received. Therefore some background processes, which almost never receive any messages, may still keep the library locked long after your hook was removed. Broadcasting a WM_NULL message usually helps. I like sending it a few times after unhooking.
SendNotifyMessage(HWND_BROADCAST, WM_NULL, 0, 0);
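Putting the two pieces together might look like this (a sketch; the function name is mine, and hHook is the HHOOK returned by your original SetWindowsHookEx call):

#include <windows.h>

// Remove the hook, then broadcast WM_NULL a few times so message loops in
// other processes wake up, process a message, and let the hook DLL be
// unmapped sooner.
void removeGlobalHook(HHOOK hHook)
{
    UnhookWindowsHookEx(hHook);

    for (int i = 0; i < 3; ++i)
        SendNotifyMessage(HWND_BROADCAST, WM_NULL, 0, 0);
}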