I have a separate thread, ListenerThread, with a socket that listens to info broadcast by a remote server. It is created in the constructor of a class I need to develop.
Because of requirements, once the separate thread is started I need to avoid any blocking function on the main thread.
When it comes to the point of calling the destructor of my class I cannot perform a join on the listener thread, so the only thing I can do is to KILL it.
My questions are:
what happens to the network resources allocated by the function passed to the thread? Is the socket closed properly, or might something be left pending? (most worried about this)
is this procedure fast enough, i.e. is the thread interrupted and killed immediately?
I am working on Linux... what command or tool can I use to check that no networking resource is left pending and that nothing went wrong for the operating system?
I thank you very much for your help
Regards
MNSTN
NOTE: I am using boost::thread in C++
Network resources belong to the process, not the thread, so the socket is still open.
boost::thread does not have a kill method. You can only interrupt it. The effect is not immediate and depends on the OS scheduler.
For looking at what network resources a process holds, check out lsof and netstat(8) with the -p option.
The stop-signaling issue with blocking sockets as you describe is usually solved with the self-pipe trick.
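A minimal sketch of that trick on POSIX follows; the names (stop_pipe, listener_loop) are illustrative, not from any existing code. The listener polls its socket and the read end of a pipe; writing a single byte to the pipe from the main thread wakes it up so it can exit and close its own socket.

#include <poll.h>
#include <unistd.h>

int stop_pipe[2];   // [0] = read end (watched by the listener), [1] = write end (used to request shutdown)

void listener_loop(int listen_fd)
{
    pollfd fds[2] = {
        { listen_fd,    POLLIN, 0 },
        { stop_pipe[0], POLLIN, 0 }
    };
    for (;;) {
        if (poll(fds, 2, -1) < 0)
            break;                          // error (add EINTR handling as needed)
        if (fds[1].revents & POLLIN)
            break;                          // a byte arrived on the pipe: time to shut down
        if (fds[0].revents & POLLIN) {
            // accept()/recv() here; guaranteed not to block forever
        }
    }
    close(listen_fd);                       // the thread closes its own resources cleanly
}

// In the destructor: write(stop_pipe[1], "x", 1); the thread then exits on its own.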
When you are killing a thread, you can't be sure what resources it holds. For example, it might be holding the heap mutex; if you kill the thread, the mutex will stay locked and nobody (in your process) will be able to allocate dynamic memory, ever.
It's much better to do these things by peaceful consensus than by force.
Just add a way to signal to your thread that it's not needed anymore. It can be a boost::condition. The thread would check this condition and stop when it's signalled.
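A minimal sketch of that approach with boost (the class and member names here are purely illustrative): the worker does its work in small non-blocking steps and re-checks a flag, so the destructor can signal it and the subsequent join returns almost immediately instead of blocking.

#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

class Listener {
    boost::mutex              mtx_;
    boost::condition_variable cv_;
    bool                      stop_requested_;
    boost::thread             worker_;

    void run() {
        boost::unique_lock<boost::mutex> lock(mtx_);
        while (!stop_requested_) {
            // do one short, non-blocking unit of work here, then wait briefly or until signalled
            cv_.timed_wait(lock, boost::posix_time::milliseconds(100));
        }
        // normal return: destructors run and sockets are closed cleanly
    }

public:
    Listener() : stop_requested_(false), worker_(&Listener::run, this) {}

    ~Listener() {
        {
            boost::lock_guard<boost::mutex> lock(mtx_);
            stop_requested_ = true;
        }
        cv_.notify_one();
        worker_.join();   // returns quickly because the thread checks the flag often
    }
};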
Related
I'm building plugins for a host application using C++11/14, for now targeting Windows and MacOS. The plugins start async worker threads when the host app starts them up, and if those threads are still running when the host shuts the plugins down, they get signaled to stop. Some of these worker threads are started with std::async so I can use an std::future to get the thread's result back, while other, less involved threads are plain std::threads which I ultimately just join to see when they're done. It all works nicely this way.
Unless the host decides not to call our shutdown procedure when it shuts itself down... Yeah, I know, but it really is that bad sometimes; it often enough just crashes during shutdown. They even plan to make that into a 'feature' and call it "Fast Exit" to please their users; just pull the plug and we're done extra fast :(
For that case I have registered an std::atexit handler. It last-minute signals any still running threads to exit NOW (atomic bools and/or signals to wake them up), then it waits a second to give the threads some time to respond, and finally it detaches the regular std::thread threads and hopes for the best. This way at least the threads get a heads up to quickly write intermediate state to disk for a next round (if needed), and quit writing to probably already deceased data structures, thus avoiding crashes which would make any crash dump point the finger at my plugins.
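A rough sketch of that last-minute handler, assuming the workers poll an atomic flag (all names here are illustrative, not the actual plugin code):

#include <atomic>
#include <chrono>
#include <cstdlib>
#include <thread>
#include <vector>

std::atomic<bool> g_shutdown_requested{false};
std::vector<std::thread> g_workers;          // the plain std::thread workers

void emergency_shutdown()                    // registered with std::atexit during plugin startup
{
    g_shutdown_requested = true;                            // flag the workers poll between work items
    std::this_thread::sleep_for(std::chrono::seconds(1));   // short grace period to flush state to disk
    for (auto& t : g_workers)
        if (t.joinable())
            t.detach();                      // give up on stragglers rather than block the exit
}

// during startup: std::atexit(emergency_shutdown);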
However, atexit handlers run at OS DLL unload time, so I'm not even allowed to use thread synchronization there (right?). And under the debugger I saw that all of the worker threads had presumably already been killed by the OS, since the atexit handler's thread was the only one left. Needless to say, all remaining std::futures went into full blocking mode, hanging up the remaining corpse of the dead host app...
Is there a way to abandon an std::future? In MS Visual C++ I saw futures have an _Abandon method, but that's too platform specific (and undocumented) for my taste. Or is my only recourse to not use std::future, do all thread communication via my own data structures and synchronization, and work with simple std::threads which can just be detached?
Inside my desktop application I have created a simple thread using _beginthreadex(...). I wonder what happens if my application is closed without the thread being ended explicitly? Will all resources used by the thread be cleaned up automatically? I have my doubts.
So I would like to end the thread when my application is closed. I wonder what the best practice would be?
Using _endthreadex is only possible inside(!) the thread, and something like TerminateThread(...) does not seem to work (infinite loop). Do you have any advice?
When main exits, your other threads will be destroyed.
It's best to have main wait on your other threads, using their handles, and send them a message (using an event, perhaps) to signal them to exit. Main can then signal the event and wait for the other threads to complete what they were doing and exit cleanly. Of course this requires that the threads check the event periodically to see if they need to exit.
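For example, a minimal sketch of that pattern with _beginthreadex and a manual-reset event (function and variable names are illustrative):

#include <windows.h>
#include <process.h>

HANDLE g_stop_event;   // manual-reset event, signalled when the app wants to exit

unsigned __stdcall worker(void*)
{
    while (WaitForSingleObject(g_stop_event, 0) == WAIT_TIMEOUT) {
        // do one unit of work, checking the event periodically
    }
    return 0;          // a normal return lets the CRT clean the thread up properly
}

int main()
{
    g_stop_event = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE thread = (HANDLE)_beginthreadex(NULL, 0, worker, NULL, 0, NULL);

    // ... application runs ...

    SetEvent(g_stop_event);                   // ask the worker to finish
    WaitForSingleObject(thread, INFINITE);    // wait until it has exited cleanly
    CloseHandle(thread);                      // release the thread handle
    CloseHandle(g_stop_event);
    return 0;
}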
When the main thread exits, the app and all of its resources are cleaned up. This will include other threads and their resources.
Also, post the code you have for TerminateThread, because it works.
The tidiest way is to send your thread(s) a message (or otherwise indicate via an event) that the tread should terminate and allow it to free its resources and exit its entry point function.
To release the thread handle, you need to call CloseHandle() on the handle returned by _beginthreadex.
The thread is part of the process, so when the process terminates it will take the thread with it and the operating system will resume ownership of everything the two own, so all the resources will be released.
Bear in mind that if you have not forewarned the thread that the-end-is-nigh, it may be in the middle of some work when it ends. If it is in the middle of using any system or external resources, they will be released but may be in a funky state (e.g. a file may be partially written, etc).
See also http://www.bogotobogo.com/cplusplus/multithreading_win32A.php
Note: Using CloseHandle() is only for _beginthreadex and not if you are using _beginthread. See http://msdn.microsoft.com/en-us/library/kdzttdcb(v=vs.90).aspx
I am writing a multithreaded socket server and I need to know for sure.
Articles about threads say that I should wait for the thread to return instead of killing it. In some cases, though, the thread of the user I want to kick/ban will not be able to return properly (for example, it has started to send a big block of data and send() is blocking the thread at the moment), so I'll need to just kill it.
Why killing thread functions are dangerous and when can they crash the whole application?
Killing a thread means stopping all execution exactly where it is at that moment. In particular, it will not execute any destructors. This means sockets and files won't be closed, dynamically allocated memory will not be freed, mutexes and semaphores won't be released, etc. Killing a thread is almost guaranteed to cause resource leaks and deadlocks.
Thus, your question is kind of reversed. The real question should read:
When, and under what conditions, can I kill a thread?
So, you can kill the thread only when you're convinced that no leaks or deadlocks can occur, neither now nor after the other thread's code is modified later (which makes it pretty much impossible to guarantee).
In your specific case, the solution is to use non-blocking sockets and check some thread/user-specific flag between calls to send() and recv(). This will likely complicate your code, which is probably why you've been resisting it, but it's the proper way to go about it.
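As an illustration only (POSIX sockets assumed; the flag and function names are mine): a send loop that checks a per-client flag between partial, non-blocking sends.

#include <atomic>
#include <cerrno>
#include <cstddef>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

std::atomic<bool> kicked{false};   // set from another thread to kick/ban this client

bool send_all(int fd, const char* data, size_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);    // make the socket non-blocking
    size_t sent = 0;
    while (sent < len) {
        if (kicked.load())
            return false;                                      // bail out between partial sends
        ssize_t n = send(fd, data + sent, len - sent, 0);
        if (n > 0)
            sent += static_cast<size_t>(n);
        else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            usleep(1000);                                      // buffer full: back off briefly (or use poll())
        else
            return false;                                      // real error
    }
    return true;
}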
Moreover, you will quickly realize that a thread-per-client approach doesn't scale, so you'll change your architecture and re-write lots of it anyways.
Killing a thread can cause your program to leak resources because the thread did not get a chance to clean up after itself. Consider closing the socket handle the thread is sending on. This will cause the blocking send() to return immediately with an appropriate error code. The thread can then clean up and die peacefully.
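A sketch of that technique on POSIX; shutdown() is used here rather than close() to avoid races on file-descriptor reuse, which is my substitution, not something the answer above prescribes.

#include <sys/socket.h>

void kick_client(int client_fd)
{
    // Any send()/recv() currently blocked on client_fd returns with an error after this.
    shutdown(client_fd, SHUT_RDWR);
    // The client's own thread then notices the error, cleans up, and closes the fd itself.
}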
If you kill your thread the hard way it can leak resources.
You can avoid that by designing your thread to support cancellation.
Do not use blocking calls, or use blocking calls with a timeout. Receive or send data in smaller chunks, or asynchronously.
You really don't want to do this.
If you kill a thread while it holds a critical section, that section won't be released, which will likely break your whole application. Certain C library calls, like heap memory allocation, use critical sections; if you happen to kill your thread while it's inside a "new", then any later call to new from anywhere else in your program will block that caller forever.
You simply can't do this safely without really extreme measures which are much more restrictive than simply signalling the thread to terminate itself.
There are many reasons, but here's an easy one: there's only one heap. If a thread allocates ANYTHING on the heap and you kill it, whatever it has allocated stays around until the process ends. Each thread gets its own stack, so that MAY be freed (implementation-dependent), but you GUARANTEE leaks on the heap by not letting the thread shut itself down.
In the case of a thread blocked in I/O you never really need to kill it; instead you have the choice between non-blocking I/O, timeouts, and closing the socket from another thread. Any of these will unblock the thread.
The Windows and Solaris thread APIs both allow a thread to be created in a "suspended" state. The thread only actually starts when it is later "resumed". I'm used to POSIX threads which don't have this concept, and I'm struggling to understand the motivation for it. Can anyone suggest why it would be useful to create a "suspended" thread?
Here's a simple illustrative example. WinAPI allows me to do this:
t = CreateThread(NULL, 0, func, NULL, CREATE_SUSPENDED, NULL);
// A. Thread not running, so do... something here?
ResumeThread(t);
// B. Thread running, so do something else.
The (simpler) POSIX equivalent appears to be:
// A. Thread not running, so do... something here?
pthread_create(&t, NULL, func, NULL);
// B. Thread running, so do something else.
Does anyone have any real-world examples where they've been able to do something at point A (between CreateThread & ResumeThread) which would have been difficult on POSIX?
To preallocate resources and later start the thread almost immediately.
You have a mechanism that reuses a thread (resumes it), but you don't actually have a thread to reuse yet and must create one.
It can be useful to create a thread in a suspended state in many instances (I find): you may wish to get the handle to the thread and set some of its properties before allowing it to start using the resources you're setting up for it.
Starting it suspended is much safer than starting it and then suspending it; in the latter case you have no idea how far it's got or what it's doing.
Another example might be for when you want to use a thread pool - you create the necessary threads up front, suspended, and then when a request comes in, pick one of the threads, set the thread information for the task, and then set it as schedulable.
I dare say there are ways around not having CREATE_SUSPENDED, but it certainly has its uses.
There are some example of uses in 'Windows via C/C++' (Richter/Nasarre) if you want lots of detail!
There is an implicit race condition in CreateThread: you cannot obtain the thread ID until after the thread has started running. It is entirely unpredictable when the call returns; for all you know the thread might already have completed. If the thread causes any interaction in the rest of the process that requires the TID, then you've got a problem.
It is not an unsolvable problem if the API doesn't support starting the thread suspended, simply have the thread block on a mutex right away and release that mutex after the CreateThread call returns.
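A sketch of that workaround with pthreads (the names are illustrative): the creator holds a mutex while calling pthread_create, the new thread blocks on it as its very first action, and the creator releases it only after the thread ID has been recorded.

#include <pthread.h>

pthread_mutex_t start_gate = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*)
{
    pthread_mutex_lock(&start_gate);     // blocks until the creator is ready
    pthread_mutex_unlock(&start_gate);
    // ... real work; the creator has already published our thread ID ...
    return nullptr;
}

void start_worker(pthread_t* out_tid)
{
    pthread_mutex_lock(&start_gate);     // take the gate before the thread exists
    pthread_create(out_tid, nullptr, worker, nullptr);
    // *out_tid is valid here; publish it wherever it is needed, then open the gate.
    pthread_mutex_unlock(&start_gate);
}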
However, there's another use for CREATE_SUSPENDED in the Windows API that is very difficult to deal with if API support is lacking. The CreateProcess() call also accepts this flag; it suspends the startup thread of the process. The mechanism is identical: the process gets loaded and you'll get a PID, but no code runs until you release the startup thread. That's very useful; I've used this feature to set up a process guard that detects process failure and creates a minidump. The CREATE_SUSPENDED flag allowed me to detect and deal with initialization failures, which are normally very hard to troubleshoot.
You might want to start a thread with some other (usually lower) priority or with a specific affinity mask. If you spawn it as usual it can run with undesired priority/affinity for some time. So you start it suspended, change the parameters you want, then resume the thread.
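For instance, a minimal sketch of that sequence (the function names are illustrative):

#include <windows.h>

DWORD WINAPI worker(LPVOID) { /* ... */ return 0; }

void start_low_priority_worker()
{
    HANDLE t = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
    if (t != NULL) {
        SetThreadPriority(t, THREAD_PRIORITY_BELOW_NORMAL);   // takes effect before any user code runs
        SetThreadAffinityMask(t, 0x1);                        // e.g. pin to CPU 0
        ResumeThread(t);                                      // only now does the thread start executing
        CloseHandle(t);
    }
}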
The threads we use are able to exchange messages, and we have arbitrarily configurable priority-inherited message queues (described in the config file) that connect those threads. Until every queue has been constructed and connected to every thread, we cannot allow the threads to execute, since they will start sending messages off to nowhere and expect responses. Until every thread was constructed, we cannot construct the queues since they need to attach to something. So, no thread can be allowed to do work until the very last one was configured. We use boost.threads, and the first thing they do is wait on a boost::barrier.
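A stripped-down sketch of that start-up barrier with boost (the constant and function names are illustrative; in the real system the count comes from the config file):

#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>

const unsigned NUM_THREADS = 4;              // illustrative; read from the configuration in practice
boost::barrier start_barrier(NUM_THREADS);

void thread_body()
{
    start_barrier.wait();   // nobody proceeds until every thread and every queue has been wired up
    // ... normal message-driven work starts here ...
}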
I stumbled over a similar problem once upon a time. The reasons for a suspended initial state are treated in the other answers.
My solution with pthreads was to use a mutex and cond_wait, but I don't know if it is a good solution or whether it can cover all possible needs. Moreover, I don't know if the thread can really be considered suspended (at the time, I treated "blocked" in the manual as a synonym, but it is probably not).
I have a program which:
has a main thread (1) which starts a server thread (2) and another thread (4).
the server thread (2) does an accept(), then creates a new thread (3) to handle the connection.
At some point, thread (4) does a fork/exec to run another program which should connect to the socket that thread (2) is listening on. Occasionally this fails or takes an unreasonably long time, and it's extremely difficult to diagnose. If I strace the system, it appears that the fork/exec has worked, the accept has happened, and the new connection-handling thread (3) has been created... but nothing happens in that thread (using strace -ff, the file for the relevant pid is blank).
Any ideas?
I came to the conclusion that it was probably this phenomenon:
http://kerneltrap.org/mailarchive/linux-kernel/2008/8/15/2950234/thread
as the bug is difficult to trigger on our development systems but is generally reported by users running on large shared machines; also, the forked application starts a JVM, which itself allocates a lot of threads. The problem is also associated with the machine being heavily loaded and with extensive memory usage (we have a machine with 128 GB of RAM, and processes may be 10-100 GB in size).
I've been reading the O'Reilly pthreads book, which explains pthread_atfork(), and suggests the use of a "surrogate parent" process forked from the main process at startup from which subprocesses are run. It also suggests the use of a pre-created thread pool. Both of these seem like good ideas, so I'm going to implement at least one of them.
It looks like a deadlock condition. Look for blocking functions, like accept(); the problem should be there.
Decrease the code to the smallest possible size that still has the behavior and post it here. Either you will find the answer or we will be able to track it down.
BTW - http://lists.samba.org/archive/linux/2002-February/002171.html it seems that pthread behavior for exec is not well defined and may depend on your OS.
Do you have any code between fork and exec? This may be a problem.
Be very careful with multiple threads and fork. Most of glibc/libstdc++ is thread safe. If a thread, other than the forking thread, is holding a lock when the fork executes the forked process will inherit the mutexes in their current locked state. The new process will never see those mutexes unlocked. For more information see man pthread_atfork.
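A minimal sketch of pthread_atfork() handlers protecting one of your own mutexes (the mutex and function names are illustrative): the prepare handler locks it before the fork, and both post-fork handlers unlock it, so neither process inherits it in a locked state.

#include <pthread.h>

pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

static void prepare(void) { pthread_mutex_lock(&shared_lock);   }   // parent, just before fork()
static void parent(void)  { pthread_mutex_unlock(&shared_lock); }   // parent, just after fork()
static void child(void)   { pthread_mutex_unlock(&shared_lock); }   // child, just after fork()

void install_fork_handlers()
{
    pthread_atfork(prepare, parent, child);
}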
I've just run into the same problem, and finally found that fork() duplicates all the threads. Now imagine what your program does after a fork() with every thread running in a duplicate instance...
The following rules are from "A Mini-guide regarding fork() and Pthreads":
1. You DO NOT WANT to do that.
2. If you need to fork(), then, whenever possible, fork() all your children prior to starting any threads.
Edit: I tried it; fork() does not duplicate threads.