Interrupting threads if not joined - c++

I am looking for a way (preferably with Boost threads) to interrupt a thread if it has not joined. I start multiple threads, and would like to end any of them that have not finished within 200 milliseconds. I tried something like this:
boost::thread_group tgroup;
tgroup.create_thread(boost::bind(&print_f));
tgroup.create_thread(boost::bind(&print_g));
boost::this_thread::sleep(boost::posix_time::milliseconds(200));
tgroup.interrupt_all();
Now this works, and all threads are ended after 200 milliseconds. However, I would like to join these threads if they finish before the 200 milliseconds are up. Is there a way to join, and interrupt only those threads that have not finished by a certain amount of time?
Edit: reason why I need join to happen before timeout:
I am creating a server where speed is very important. Unfortunately I have to make requests to other servers for some information, so I would like to make these calls in parallel and finish as soon as possible. If a server is taking too long, I have to ignore the information coming from that server and continue without it. So my timeout is the maximum amount of time I can wait. It would be extremely beneficial to be able to continue with the computation as soon as all responses are received, instead of always waiting out the timeout timer. So what my program does is:
-Get a request from a client.
-Parse the information.
(Create threads)
-Send the information to multiple other servers.
-Get the information back from the servers.
-Put the information from the servers on a shared queue.
(End threads)
-Parse the information from the shared queue.
-Return the information to the client.

What you probably want is a set of scoped threads, calling terminate on all the remaining threads after the timeout. Unfortunately, thread groups and scoped threads are not usable together.
The thread group class is actually a very simple container: you cannot remove a thread from it unless you already have a pointer to it, and you cannot get a pointer to a thread that was created by the group. The class API doesn't provide much else either, which is a bit of a hindrance for management in your situation.
The remaining solutions rely on creating the threads outside the group, and having each of them do a specific task just before finishing. It could:
remove itself from the group,
then add itself to another group
The managing thread will have to call join_all on the latter group, and act as before with the former.
using namespace boost;

typedef std::map<thread::id, thread*> thread_map;

// Runs the task, then moves the current thread from the "running" group
// to the "finished" group.
void thread_end(thread_map& thmap, thread_group& t1, thread_group& t2,
                function<void()> task) {
    task();
    thread* self = thmap[this_thread::get_id()];
    t1.remove_thread(self);   // pass the thread*, not its address
    t2.add_thread(self);
}
thread_map thmap;
thread_group trunninggroup;
thread_group tfinishedgroup;
thread* th;

// Note: in real code the map (and the groups) would need a mutex, and the map
// entry must be in place before the new thread looks itself up.
th = new thread(bind(&thread_end, ref(thmap), ref(trunninggroup),
                     ref(tfinishedgroup), function<void()>(&print_f)));
thmap[th->get_id()] = th;
trunninggroup.add_thread(th);

th = new thread(bind(&thread_end, ref(thmap), ref(trunninggroup),
                     ref(tfinishedgroup), function<void()>(&print_g)));
thmap[th->get_id()] = th;
trunninggroup.add_thread(th);
boost::this_thread::sleep(boost::posix_time::milliseconds(200));
tfinishedgroup.join_all();
trunninggroup.interrupt_all();
But this is not ideal if you actually want the managing thread to be notified of a thread's end when it actually happens (and I'm not really certain the above does anything useful anyway). A solution for getting notified is perhaps to:
do the group migration as above
then trigger a condition variable on which the management thread is doing a timed_wait
but you will have to do some time computation to keep track of the remaining time after being notified, and resume the sleep with the time left. That computation depends entirely on the Duration class used for the task.
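For illustration, here is a minimal sketch of that bookkeeping (all names below are invented), assuming each worker increments a shared counter and notifies a condition variable right after migrating groups. Waiting on an absolute deadline with timed_wait means re-waiting after a notification automatically accounts for the time already spent sleeping:

#include <boost/thread.hpp>

// Called by the managing thread; returns when all workers have finished
// or the 200 ms deadline has passed, whichever comes first.
void wait_for_workers(boost::mutex& mtx, boost::condition_variable& cv,
                      unsigned& finished_count, unsigned num_workers)
{
    boost::system_time deadline =
        boost::get_system_time() + boost::posix_time::milliseconds(200);

    boost::unique_lock<boost::mutex> lock(mtx);
    while (finished_count < num_workers) {
        if (!cv.timed_wait(lock, deadline))
            break;   // deadline reached, give up waiting
    }
}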
Update
Seeing the big picture, I would try a completely different approach: I don't think that terminating a thread which has already finished is a problem, so I would leave them all in the group, and use the group to create them, as your code demonstrates.
However, I would try to wake up the managing thread as soon as all the threads are done, or after timeout. This is not doable with what the thread_group class offers alone, but it can be done with a custom made semaphore, or a patched version of boost::barrier to allow a timed wait.
Basically, you set a barrier to the number of threads in the group plus one (the main thread), and have the main thread do a timed wait on it. Each worker thread does its work and, when finished, posts its result to the queue and waits on the barrier. If all the worker threads finish their tasks, everyone will be waiting and the barrier gets triggered.
The main thread (as well as all the others, but it doesn't matter) then wakes up and can proceed by terminating the group and processing the results. Otherwise, it will be woken at the timeout and do the same anyway.
Patching boost::barrier should not be too difficult: you should only need to duplicate the wait method and replace the condition variable wait inside with a timed_wait (I didn't look at the code, so this assumption might be totally off the mark). Otherwise, I provided a sample semaphore implementation for this question, which shouldn't be difficult to patch either.
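As a rough illustration of the timed-wait idea (this is not boost::barrier itself, and every name below is invented), a small countdown latch built from a mutex and a condition variable is enough: the main thread waits on it with a deadline, and each worker counts it down after posting its result to the queue.

#include <boost/thread.hpp>

class timed_latch {
public:
    explicit timed_latch(unsigned count) : count_(count) {}

    // Called by each worker once its result is in the queue.
    void count_down() {
        boost::lock_guard<boost::mutex> lock(mtx_);
        if (count_ > 0 && --count_ == 0)
            cv_.notify_all();
    }

    // Called by the main thread: true if all workers finished in time.
    bool wait_until(const boost::system_time& deadline) {
        boost::unique_lock<boost::mutex> lock(mtx_);
        while (count_ > 0)
            if (!cv_.timed_wait(lock, deadline))
                return false;   // timed out with workers still running
        return true;
    }

private:
    boost::mutex mtx_;
    boost::condition_variable cv_;
    unsigned count_;
};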
One last consideration: terminating a thread is usually not the best approach. You should instead try to signal the threads that they have to abort and wait for them, or somehow have them pass their unfinished task to an auxiliary thread which can clean things up serially. Then your thread group would be ready to tackle the next task, and you wouldn't have to destroy and create threads all the time, which is a somewhat costly operation. It does require formalizing the idea of a task in the context of your application, and making the threads run in a loop, taking new tasks and processing them.

If you're using a very recent Boost and C++11, use try_join_for() (http://www.boost.org/doc/libs/1_53_0/doc/html/thread/thread_management.html#thread.thread_management.thread.try_join_for). Otherwise, use timed_join() (http://www.boost.org/doc/libs/1_53_0/doc/html/thread/thread_management.html#thread.thread_management.thread.timed_join).
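A hedged sketch of how that could look with the threads from the original question, using try_join_until (the absolute-deadline sibling of try_join_for) so that both threads share a single 200 ms budget:

#include <boost/thread.hpp>
#include <boost/chrono.hpp>
#include <iostream>

void print_f() { std::cout << "f done\n"; }   // stand-ins for the question's workers
void print_g() { std::cout << "g done\n"; }

int main() {
    boost::thread t1(&print_f);
    boost::thread t2(&print_g);

    boost::chrono::steady_clock::time_point deadline =
        boost::chrono::steady_clock::now() + boost::chrono::milliseconds(200);

    boost::thread* workers[] = { &t1, &t2 };
    for (int i = 0; i < 2; ++i) {
        if (!workers[i]->try_join_until(deadline)) { // joins if it finishes in time
            workers[i]->interrupt();                 // otherwise ask it to stop...
            workers[i]->join();                      // ...and wait for the interruption to land
        }
    }
}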

Related

C++ wait notify in threads with synchronized queues

I have a program structured like this: one thread receives tasks and writes them to an input queue, several threads process them and write to an output queue, and one thread responds with the results. When a queue is empty, a thread sleeps for several milliseconds. The queue has a mutex inside it; pushing does lock(), and popping does try_lock() and returns if there is nothing in the queue.
This is the processing thread, for example:
// working - atomic bool
while (working) {
    if (!inputQue_->pop(msg)) {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        continue;
    } else {
        string reply = messageHandler_->handle(msg);
        if (!reply.empty()) {
            outputQue_->push(reply);
        }
    }
}
The thing I don't like is that the time from receiving a task to responding, as I have measured with high_resolution_clock, is almost 0 when there is no sleeping. When there is sleeping, it becomes noticeably larger.
I don't want CPU resources to be wasted, so I want to do something like this: when the receiving thread gets a task, it notifies one of the processing threads, which is in a wait_for, and when processing the task is done, that thread notifies the responding thread the same way. As a result I think I will spend less time and CPU resources will not be wasted. I have some questions:
Will this work the way I expect it to, with the only difference being that threads wake up on notify?
To do this, do I have to create 2 condition variables: the first shared by the receiving thread and all processing threads, the second shared by all processing threads and the responding thread? And does the mutex in the processing threads have to be common to all of them, or unique to each?
Can I place the creation of the unique_lock(mutex) and the wait_for() in the if branch, just in place of the sleep_for?
If some processing threads are busy, is it possible that notify_one() will try to wake one of them rather than a free thread? Do I need to use notify_all()?
Is it possible that a notify will not wake up any of the threads? If yes, is that likely?
Will this work the way I expect it to, with the only difference being that threads wake up on notify?
Yes, assuming you do it correctly.
To do this, do I have to create 2 condition variables: the first shared by the receiving thread and all processing threads, the second shared by all processing threads and the responding thread? And does the mutex in the processing threads have to be common to all of them, or unique to each?
You can use a single mutex and a single condition variable, but that makes it a bit more complex. I'd suggest a single mutex, but one condition variable for each condition a thread might want to wait for.
Can I place the creation of the unique_lock(mutex) and the wait_for() in the if branch, just in place of the sleep_for?
Absolutely not. You need to hold the mutex while you check whether the queue is empty and continue to hold it until you call wait_for. Otherwise, you destroy the entire logic of the condition variable. The mutex associated with the condition variable must protect the condition that the thread is going to wait for, which in this case is the queue being non-empty.
If some processing threads are busy, is it possible that notify_one() will try to wake one of them rather than a free thread? Do I need to use notify_all()?
I don't know what you mean by the "free thread". As a general rule, you can use notify_one if it is not possible for a thread that can't handle the condition to be blocked on the condition variable. You should use notify_all if more than one thread might need to be woken, or if there is a possibility that the "wrong thread" could be woken, that is, that at least one blocked thread can't do whatever it is that needs to be done.
Is it possible that a notify will not wake up any of the threads? If yes, is that likely?
Sure, it's quite possible. But that would mean no threads were blocked on the condition variable at that moment, and no waiting thread can have missed the notification, because threads must check the condition before they wait, and they do it while holding the mutex. Providing this atomic "unlock and wait" semantic is the entire purpose of a condition variable.
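To make this concrete, here is a sketch of a blocking pop/push pair built on a condition variable; the names (queue_, mutex_, cv_) are illustrative and not taken from the question's code. Note how the emptiness check, the wait, and the removal all happen under the same mutex:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

std::mutex mutex_;
std::condition_variable cv_;
std::queue<std::string> queue_;

// Blocks until an item is available or the timeout expires.
bool pop(std::string& msg, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lock(mutex_);
    // The predicate re-checks the condition, which also handles spurious wakeups.
    if (!cv_.wait_for(lock, timeout, [&] { return !queue_.empty(); }))
        return false;                 // timed out, queue still empty
    msg = std::move(queue_.front());
    queue_.pop();
    return true;
}

void push(std::string msg) {
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(msg));
    }
    cv_.notify_one();                 // wake one waiting consumer
}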
The mechanism you have is called polling. The thread repeatedly checks (polls) if there is data available. As you mentioned, it has the drawback of wasting time. (But it is simple). What you mentioned you would like to use is called a blocking mechanism. This deschedules the thread until the moment that work becomes available.
1) Yes (although I don't know exactly what you're imagining)
2) a) Yes, 2 condition variables is one way to do it. b) A common mutex is best.
3) You would probably place those within pop, so that calling pop has the potential to block.
4) No. notify_one will only wake a thread that is currently waiting, having called wait. Also, if multiple threads are waiting, it is not guaranteed which of them will receive the notification (this is OS/library dependent).
5) No. If one or more threads are waiting, notify_one is guaranteed to wake one. BUT if no threads are waiting, the notification is consumed (and has no effect). Note that under certain edge conditions, notify_one may actually wake more than one thread. Also, a thread may wake from wait without anyone having called notify_one (a "spurious wakeup"). The fact that this can happen at all means that you always have to re-check the condition after waking.
This is called the producer/consumer problem btw.
In general, your considerations about condition variable are correct. My proposal is more connected to design and reusability of such functionality.
The main idea is to implement the ThreadPool pattern: a class whose constructor takes the number of worker threads, with methods submitTask, shutdown, and join.
Having such a class, you would use 2 instances of the pool: one multithreaded for processing, and a second one (single-threaded, by your choice) for sending the results.
The pool consists of a blocking queue of Tasks and an array of worker threads, each performing the same "pop a Task and run it" loop. The blocking queue encapsulates the mutex and condition variable. The Task is a common functor.
This also moves your design toward a task-oriented approach, which has a lot of advantages for the future of your application.
You are welcome to ask more questions about implementation details if you like this idea.
Best regards, Daniel

Terminate a thread from outside in C++11

I am running multiple threads in my C++11 code, and the thread body is defined using a lambda function as follows:
// make connection to each device in a separate child thread
std::vector<std::thread> workers;
for (int ii = 0; ii < numDev; ii++)
{
    workers.push_back(std::thread([=]() { // pass by value
        // thread body
    }));
}
// detach from all threads
std::for_each(workers.begin(), workers.end(), [](std::thread &t) {
    t.detach();
});
// killing one of the threads here?
I detach from all child threads but keep a reference to each in the workers vector. How can I kill one of the threads later on in my code?
A post here suggests using std::terminate(), but I guess it is of no use in my case.
First, don't use raw std::threads. They are rarely a good idea. It is like manually calling new and delete, or messing with raw buffers and length counters in io code -- bugs waiting to happen.
Second, instead of killing the thread, provide the thread task with a function or atomic variable that says when the worker should kill itself.
The worker periodically checks its "should I die" state, and if so, it cleans itself up and dies.
Then simply signal the worker to die, and wait for it to do so.
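A minimal sketch of that cooperative-shutdown pattern with C++11 primitives (the flag name is made up):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> stop_requested(false);

int main() {
    std::thread worker([] {
        while (!stop_requested.load()) {
            // ... do one small, interruptible chunk of work ...
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
        // the worker cleans up after itself and returns
    });

    // later, from the controlling thread:
    stop_requested.store(true);   // signal the worker to die
    worker.join();                // and wait for it to do so
}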
This does require work in your worker thread, and it doesn't help if the worker runs some long task that cannot be interrupted. Don't run long tasks that cannot be interrupted.
If you must do such a task, do it in a different process, and marshall the results back and forth. But modern OSs tend to have async APIs you can use instead of synchronous APIs for IO tasks, which lend themselves to being aborted if you are careful.
Terminating a thread while it is in an arbitrary state places your program into an unknown and undefined state of execution. It could be holding a mutex and never let it go in a standard library call, for example. But really, it can do anything at all.
Generally detaching threads is also a bad idea, because unless you magically know they are finished (difficult because you detached them), what happens after main ends is implementation defined.
Keep track of your threads, like you keep track of your memory allocations, but moreso. Use messages to tell threads to kill themselves. Join threads to clean up their resources, possibly using condition variables in a wrapper to make sure you don't join prior to the thread basically being done. Consider using std::async instead of raw threads, and wrap std::async itself up in a further abstraction.

how to make a threadpool with boost::thread

A boost::thread object is not the thread itself: a new thread is created to run the functor passed to it, and that thread exits when the functor returns.
We use a thread pool to minimize thread creation and destruction cost, but each thread in the thread pool is also destroyed when the supplied functor returns.
So what is the basic concept behind building a thread pool? Is there any permanent thread to which I can assign functors?
A thread pool is just a bunch of threads that are already running, all executing the same function. This function basically just waits on a queue, and when there is a "function" in the queue, it extracts and executes it.
Pseudo-code:
void thread_pool_function()
{
    while (true)
    {
        wait_for_signal_that_queue_is_not_empty();
        function_to_call = queue.remove_top();
        unlock_queue_semaphore();
        function_to_call();
    }
}
create_thread(thread_pool_function);
create_thread(thread_pool_function);
create_thread(thread_pool_function);
create_thread(thread_pool_function);
In the "code" above there are now four threads, all initially waiting for something to be put in a "queue". When there is something in the queue, it extracts it, and calls it as a function.
This is probably the simplest way to implement a thread pool.
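For reference, here is one way the pseudo-code might translate into real Boost code; this is only a sketch, and the class and member names are invented:

#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread.hpp>
#include <cstddef>
#include <queue>

class simple_pool {
public:
    explicit simple_pool(std::size_t n) : stop_(false) {
        for (std::size_t i = 0; i < n; ++i)
            threads_.create_thread(boost::bind(&simple_pool::worker, this));
    }

    ~simple_pool() {
        {
            boost::lock_guard<boost::mutex> lock(mtx_);
            stop_ = true;
        }
        cv_.notify_all();
        threads_.join_all();
    }

    void submit(boost::function<void()> task) {
        {
            boost::lock_guard<boost::mutex> lock(mtx_);
            queue_.push(task);
        }
        cv_.notify_one();                    // wake one sleeping worker
    }

private:
    void worker() {
        for (;;) {
            boost::function<void()> task;
            {
                boost::unique_lock<boost::mutex> lock(mtx_);
                while (queue_.empty() && !stop_)
                    cv_.wait(lock);          // the permanent thread sleeps here
                if (stop_ && queue_.empty())
                    return;
                task = queue_.front();
                queue_.pop();
            }
            task();                          // run the submitted functor
        }
    }

    boost::thread_group threads_;
    boost::mutex mtx_;
    boost::condition_variable cv_;
    std::queue<boost::function<void()> > queue_;
    bool stop_;
};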
In addition to what @Joachim posted:
One way to flow-control such a system (and one I use a lot) is to use a 'pool queue' (a blocking producer-consumer queue) of tasks, created and filled at startup with a fixed number of task objects. Any thread that wants to issue a task has to get one from the pool first, and tasks are returned to the pool after completion handling. This limits the number of tasks in the system and, if the pool empties, requesting threads just have to wait, blocked on the empty pool, until some 'used' tasks come back.
This works well, provides flow-control, prevents memory-runaway and eliminates continual task create/destroy. It's also easy to periodically display/write the pool queue depth on a timer, so you can see how 'busy' your app is, (and detect any leaks:).
Edit: Also, it removes the need for any bounded queues in the system. Unbounded queues are simpler and tend to need fewer system calls.
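A hedged sketch of such a pool queue, assuming Boost.Thread for the locking (every name here is invented): producers acquire a task object and block if none are free; consumers return objects after completion handling.

#include <boost/thread.hpp>
#include <cstddef>
#include <queue>

template <typename T>
class object_pool {
public:
    explicit object_pool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            free_.push(new T());
    }

    ~object_pool() {
        while (!free_.empty()) { delete free_.front(); free_.pop(); }
    }

    T* acquire() {                       // blocks while the pool is empty
        boost::unique_lock<boost::mutex> lock(mtx_);
        while (free_.empty())
            cv_.wait(lock);
        T* obj = free_.front();
        free_.pop();
        return obj;
    }

    void release(T* obj) {               // give the object back after completion handling
        {
            boost::lock_guard<boost::mutex> lock(mtx_);
            free_.push(obj);
        }
        cv_.notify_one();
    }

    std::size_t depth() {                // handy for the periodic 'how busy are we' display
        boost::lock_guard<boost::mutex> lock(mtx_);
        return free_.size();
    }

private:
    boost::mutex mtx_;
    boost::condition_variable cv_;
    std::queue<T*> free_;
};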

Can't unblock/"wake up" thread with pthread_kill & sigwait

I'm working on a C/C++ networking project and am having difficulties synchronizing/signaling my threads. Here is what I am trying to accomplish:
Poll a bunch of sockets using the poll function
If any sockets are ready from the POLLIN event then send a signal to a reader thread and a writer thread to "wake up"
I have a class called MessageHandler that sets the signals mask and spawns the reader and writer threads. Inside them I then wait on the signal(s) that ought to wake them up.
The problem is that I am testing all this functionality by sending a signal to a thread yet it never wakes up.
Here is the problem code with further explanation. Note that I have only shown how it works with the reader thread, as the writer thread is essentially the same.
// Called once if allowedSignalsMask == 0 in constructor
// STATIC
void MessageHandler::setAllowedSignalsMask() {
    allowedSignalsMask = (sigset_t*)std::malloc(sizeof(sigset_t));
    sigemptyset(allowedSignalsMask);
    sigaddset(allowedSignalsMask, SIGCONT);
}

// STATIC
sigset_t *MessageHandler::allowedSignalsMask = 0;

// STATIC
void* MessageHandler::run(void *arg) {
    // Apply the signals mask to any new threads created after this point
    pthread_sigmask(SIG_BLOCK, allowedSignalsMask, 0);
    MessageHandler *mh = (MessageHandler*)arg;
    pthread_create(&(mh->readerThread), 0, &runReaderThread, arg);
    sleep(1); // Just sleep for testing purposes, let reader thread execute first
    pthread_kill(mh->readerThread, SIGCONT);
    sleep(1); // Just sleep for testing, to let reader thread print without the process terminating
    return 0;
}

// STATIC
void* MessageHandler::runReaderThread(void *arg) {
    int signo;
    for (;;) {
        sigwait(allowedSignalsMask, &signo);
        fprintf(stdout, "Reader thread signaled\n");
    }
    return 0;
}
I took out all the error handling I had in the code to condense it but do know for a fact that the thread starts properly and gets to the sigwait call.
The error may be obvious (it's not a syntax error - the above code is condensed from compilable code, and I may have screwed it up while editing) but I just can't seem to find/see it, since I have spent far too much time on this problem and confused myself.
Let me explain what I think I am doing and if it makes sense.
Upon creating an object of type MessageHandler it will set allowedSignalsMask to the set of the one signal (for the time being) that I am interested in using to wake up my threads.
I add the signal to the blocked signals of the current thread with pthread_sigmask. All further threads created after this point ought to have the same signal mask now.
I then create the reader thread with pthread_create where arg is a pointer to an object of type MessageHandler.
I call sleep as a cheap way to ensure that my readerThread executes all the way to sigwait()
I send the signal SIGCONT to the readerThread, as I want sigwait to wake up/unblock once it receives it.
Again I call sleep as a cheap way to ensure that my readerThread can execute all the way after it woke up/unblocked from sigwait()
Other helpful notes that may be useful but I don't think affect the problem:
MessageHandler is constructed and then a different thread is created given the function pointer that points to run. This thread will be responsible for creating the reader and writer threads, polling the sockets with the poll function, and then possibly sending signals to both the reader and writer threads.
I know it's a long post but I do appreciate you reading it and any help you can offer. If I wasn't clear enough or you feel I didn't provide enough information, please let me know and I will correct the post.
Thanks again.
POSIX threads have condition variables for a reason; use them. You're not supposed to need signal hackery to accomplish basic synchronization tasks when programming with threads.
Here is a good pthread tutorial with information on using condition variables:
https://computing.llnl.gov/tutorials/pthreads/
Or, if you're more comfortable with semaphores, you could use POSIX semaphores (sem_init, sem_post, and sem_wait) instead. But once you figure out why the condition variable and mutex pairing makes sense, I think you'll find condition variables are a much more convenient primitive.
Also, note that your current approach incurs several syscalls (user-space/kernel-space transitions) per synchronization. With a good pthreads implementation, using condition variables should drop that to at most one syscall, and possibly none at all if your threads keep up with each other well enough that the waited-for event occurs while they're still spinning in user-space.
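For comparison, here is a small sketch of what the sigwait loop might look like rewritten around a condition variable (the flag and function names are made up): the poller sets a flag and signals instead of calling pthread_kill.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;

static void* reader_thread(void* arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&mtx);
        while (!data_ready)              // the loop also guards against spurious wakeups
            pthread_cond_wait(&cond, &mtx);
        data_ready = 0;
        pthread_mutex_unlock(&mtx);
        fprintf(stdout, "Reader thread signaled\n");
    }
    return 0;
}

// Called from the polling thread instead of pthread_kill():
static void wake_reader(void) {
    pthread_mutex_lock(&mtx);
    data_ready = 1;
    pthread_mutex_unlock(&mtx);
    pthread_cond_signal(&cond);
}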
This pattern seems a bit odd, and most likely error prone. The pthread library is rich in synchronization methods, the one most likely to serve your need being in the pthread_cond_* family. These methods handle condition variables, which implement the Wait and Signal approach.
Use SIGUSR1 instead of SIGCONT. SIGCONT doesn't work. Maybe a signal expert knows why.
By the way, we use this pattern because condition variables and mutexes are too slow for our particular application. We need to sleep and wake individual threads very rapidly.
R. points out there is extra overhead due to additional kernel-space calls. Perhaps if you sleep > N threads, then a single condition variable would beat multiple sigwaits and pthread_kills. In our application, we only want to wake one thread when work arrives. You have to have a condition variable and mutex per thread to do this, otherwise you get the stampede. In a test where we slept and woke N threads M times, signals beat mutexes and condition variables by a factor of 5 (it could have been a factor of 40, but I can't remember anymore... argh). We didn't test futexes, which can wake one thread at a time and are specifically coded to limit trips to kernel space. I suspect futexes would be faster than mutexes.

Waiting win32 threads

I have a completely thread-safe FIFO structure (TaskList) to store task classes, and multiple threads, some of which create and store tasks while the others process them. The TaskList class has a pop_front() method which returns the first task if there is at least one; otherwise it returns NULL.
Here is an example of processing function:
TaskList tlist;

unsigned _stdcall ThreadFunction(void * qwe)
{
    Task * task;
    while (!WorkIsOver) // a global bool to end all threads.
    {
        while (task = tlist.pop_front())
        {
            // process Task
        }
    }
    return 0;
}
My problem is that sometimes there is no new task in the task list, so the processing threads spin in an endless loop (while (!WorkIsOver)) and the CPU load increases. Somehow I have to make the threads wait until a new task is stored in the list. I have thought about suspending and resuming, but then I need extra information about which threads are suspended or running, which brings greater complexity to the coding.
Any ideas?
PS. I am using the WinAPI, not Boost or TBB, for threading, because sometimes I have to terminate threads that process for too long and create new ones immediately. This is critical for me, so please do not suggest either of those two.
Thanks
Assuming you are developing this in DevStudio, you can get the control you want using IO Completion Ports. Scary name, for a simple tool.
First, create an IO completion port: CreateIoCompletionPort
Create your pool of worker threads using _beginthreadex / CreateThread
In each worker thread, implement a loop that calls GetQueuedCompletionStatus - The returned lpCompletionKey will be pointing to a work item to process.
Now, whenever you get a work item to process: call PostQueuedCompletionStatus from any thread - passing in the pointer to your work item as the completion key parameter.
That's it. 3 API calls and you have implemented a thread-pooling mechanism based on a kernel-implemented queue object. Each call to PostQueuedCompletionStatus will automatically be dispatched to a thread-pool thread that is blocking on GetQueuedCompletionStatus. The pool of worker threads is created, and maintained, by you, so you can call TerminateThread on any worker threads that are taking too long. Even better, depending on how it is set up, the kernel will only wake as many threads as needed to keep each CPU core running at ~100% load.
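A rough sketch of those calls, with error handling omitted and WorkItem standing in for whatever your task type is:

#include <windows.h>
#include <process.h>

struct WorkItem { int id; /* whatever a task needs */ };

HANDLE g_port;

unsigned __stdcall worker(void* arg) {
    (void)arg;
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED* ov;
    while (GetQueuedCompletionStatus(g_port, &bytes, &key, &ov, INFINITE)) {
        WorkItem* item = reinterpret_cast<WorkItem*>(key);
        // ... process *item ...
        delete item;
    }
    return 0;
}

void setup_and_post() {
    // A port not associated with any file handle works as a plain kernel queue.
    g_port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    for (int i = 0; i < 4; ++i)
        _beginthreadex(NULL, 0, worker, NULL, 0, NULL);

    // From any thread, hand a work item to the pool:
    PostQueuedCompletionStatus(g_port, 0, reinterpret_cast<ULONG_PTR>(new WorkItem()), NULL);
}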
NB. TerminateThread is really not an appropriate API to use. Unless you really know what you are doing the threads are going to leak their stacks, none of the memory allocated by code on the thread will be deallocated and so on. TerminateThread is really only useful during process shutdown. There are some articles on the net detailing how to release the known OS resources that are leaked each time TerminateThread is called - if you persist in this approach you really need to find and read them if you haven't already.
Use a semaphore in your queue to indicate whether there are elements ready to be processed.
Every time you add an item, call ::ReleaseSemaphore to increment the count associated with the semaphore
In the loop in your thread process, call ::WaitForSingleObject() on the handle of your semaphore object -- you can give that wait a timeout so that you have an opportunity to know that your thread should exit. Otherwise, your thread will be woken up whenever there's one or more items for it to process, and also has the nice side effect of decrementing the semaphore count for you.
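As a sketch (names invented, Task being the question's task type), the semaphore count mirrors the number of queued items, so WaitForSingleObject only returns when a pop is guaranteed to succeed:

#include <windows.h>
#include <climits>
#include <queue>

class Task;                     // the question's task type

CRITICAL_SECTION g_lock;        // protects g_tasks
std::queue<Task*> g_tasks;
HANDLE g_sem;                   // its count tracks the number of queued tasks

void init_queue() {
    InitializeCriticalSection(&g_lock);
    g_sem = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
}

void push_task(Task* t) {
    EnterCriticalSection(&g_lock);
    g_tasks.push(t);
    LeaveCriticalSection(&g_lock);
    ReleaseSemaphore(g_sem, 1, NULL);        // one more item available
}

Task* pop_task(DWORD timeout_ms) {
    // Blocks until an item is available or the timeout expires.
    if (WaitForSingleObject(g_sem, timeout_ms) != WAIT_OBJECT_0)
        return NULL;                         // timeout: caller can check WorkIsOver
    EnterCriticalSection(&g_lock);
    Task* t = g_tasks.front();
    g_tasks.pop();
    LeaveCriticalSection(&g_lock);
    return t;
}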
If you haven't read it, you should devour Herb Sutter's Effective Concurrency series which covers this topic and many many more.
Use condition variables to implement a producer/consumer queue - example code here.
If you need to support earlier versions of Windows you can use the condition variable in Boost. Or you could build your own by copying the Windows-specific code out of the Boost headers, they use the same Win32 APIs under the covers as you would if you build your own.
Why not just use the existing thread pool? Let Windows manage all of this.
You can use the Windows thread pool! Or you can use the API calls WaitForSingleObject or WaitForMultipleObjects. At the very least, use the SwitchToThread API call when a thread has no work.
If TaskList has some kind of wait_until_not_empty method, then use it. If it does not, then a Sleep(1000) (or some other value) may just do the trick. The proper solution would be to create a wrapper around TaskList that uses an auto-reset event handle to indicate whether the list is non-empty. You would need to reimplement the current pop/push methods, with the task list being a member of the new class:
WaitableTaskList::WaitableTaskList()
{
    // task list is empty upon creation
    non_empty_event = CreateEvent(NULL, FALSE, FALSE, NULL);
}

Task* WaitableTaskList::wait_and_pop_front(DWORD timeout)
{
    WaitForSingleObject(non_empty_event, timeout);
    // .. handle error, return NULL on timeout
    Task* result = task_list.pop_front();
    if (!task_list.empty())
        SetEvent(non_empty_event);
    return result;
}

void WaitableTaskList::push_back(Task* item)
{
    task_list.push_back(item);
    SetEvent(non_empty_event);
}
You must pop items in task list only through methods such as this wait_and_pop_front().
EDIT: actually this is not a good solution. There is a way for non_empty_event to be raised even though the list is empty. It takes 2 threads trying to pop while the list holds 2 items: if the list becomes empty between the if and the SetEvent, we end up in the wrong state. Obviously we need to implement proper synchronization as well. At this point I would reconsider the simple Sleep again :-)