Is it possible to use pthreads without pthread_join()? - c++

What I've noticed recently in attempting to add some multithreaded functionality to some code of mine for a project at work is that pthreads are a huge pain in the ass to deal with logistically...
Here's the scenario...
I have an infinite loop in my main method (a server) that spawns threads to deal with data whenever it receives a packet from a client. The problem is that I can't get the threads to execute concurrently at all. They refuse to begin execution until a call to pthread_join() from the main method, which completely defeats the purpose of using threads in the first place (the server has to STOP its execution flow and wait for the thread to finish processing its data before receiving any more packets! ridiculous.)
So is there a way to use pthreads and have them actually be multithreaded? Or am I better off not using threads at all, and saving the extra resources by stopping execution in my server to call a function to process data?
I'm thinking I may have to resort to forking every time...
This is frustrating....
Some sample code I wrote is below:
// gcc threads.c -lpthread
#include <stdio.h>
#include <pthread.h>
struct point {
    int x, y;
};

static void *print_point(void *point_p);

int main() {
    pthread_t tid;
    struct point pt = {3, 5};

    printf("enter main\n");
    pthread_create(&tid, NULL, print_point, &pt);

    while (1) { continue; }

    return 0;
}

static void *print_point(void *point_p) {
    struct point arg = * (struct point *) point_p;
    printf("Point: (%d, %d)\n", arg.x, arg.y);
    return NULL;
}
When I compile and run that (yes, I compile with the -lpthread switch), it prints "enter main" and never executes the thread... I even let it run for a while (got up, went to the bathroom, ate some food), and still nothing.
So since the main method spawns a thread and then loops infinitely, the thread should eventually execute... right? From what I can tell from my tests, the main method never gives up execution to the thread it spawned. The only way I can get it to give up execution is by calling join (but that defeats the purpose of having threads, since main will wait around until the thread is done).

You're never giving the thread a chance to execute with that while(1){continue;}. One of two things will happen here.
You've compiled with high enough optimization that the compiler makes that entire loop vanish. The thread never gets a chance to execute because main starts the thread and then immediately returns zero.
The compiler doesn't optimize the loop away. With this busy loop you once again are not giving the thread mechanism a chance to slip in.
Add a sleep(0); call to the body of that busy loop.

Actually your code works fine for me, but I think your problem is that the main thread is sitting in that while() loop, hogging all the CPU usage, so the second thread never gets a chance. The fact that pthread_join makes it work is a bit of a red herring: it's just stopping the main thread so the other threads get a chance.
Obviously the right fix for this is to make the main thread sleep properly when it has nothing to do. For your test code, try putting sleep(1) in your while loop.
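For the test program in the question, a minimal sketch of that fix might look like this: the same program with the spin loop replaced by a sleep, still compiled with -lpthread.

// gcc threads.c -lpthread
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

struct point {
    int x, y;
};

static void *print_point(void *point_p) {
    struct point arg = * (struct point *) point_p;
    printf("Point: (%d, %d)\n", arg.x, arg.y);
    return NULL;
}

int main() {
    pthread_t tid;
    struct point pt = {3, 5};

    printf("enter main\n");
    pthread_create(&tid, NULL, print_point, &pt);

    /* Sleeping instead of spinning lets the new thread run right away
       and keeps main's CPU usage near zero while it idles. */
    while (1) {
        sleep(1);
    }
    return 0;
}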

Related

What happens to a thread, waiting on condition variable, that is getting joined?

I've got a class named TThreadpool, which holds a member pool of type std::vector<std::thread>, with the following destructor:
~TThreadpool() {
    for (size_t i = 0; i < pool.size(); i++) {
        assert(pool[i].joinable());
        pool[i].join();
    }
}
I'm confident that when the destructor is called, all of the threads are waiting on a single condition variable (spurious wakeups controlled with an always-false predicate), and joinable outputs true.
A reduced example of a running thread would be:
void my_thread() {
    std::unique_lock<std::mutex> lg(mutex);
    while (true) {
        my_cond_variable.wait(lg, [] {
            return false;
        });
        // do some work and possibly break, but never gets further than wait
        // so this probably should not matter
    }
}
To check what threads are running, I'm launching top -H. At the start of the program, there are pool.size() threads + 1 thread where TThreadpool itself lives. And to my surprise, joining these alive threads does not remove them from the list of threads that top is showing. Is this expected behaviour?
(Originally, my program was a bit different - I made a simple UI application using Qt that used a threadpool running in the UI thread and other threads controlled by the threadpool, and on closing the UI window the joining of threads was called, but Qt Creator said my application was still running after I closed the window, forcing me to kill it. That made me check the state of my threads, and it turned out it had nothing to do with Qt. Although I'm adding this in case I missed some obvious detail with Qt.)
A bit later, I tried not asserting joinable but printing it, and found out the loop inside the TThreadpool destructor never moved further than the first join - behaviour I did not expect and cannot explain.
join() doesn't do anything to the child thread -- all it does is block until the child thread has exited. It only has an effect on the calling thread (i.e. by blocking its progress). The child thread can keep running for as long as it wants (although typically you'd prefer it to exit quickly, so that the thread calling join() doesn't get blocked for a long time -- but that's up to you to implement)
And to my surprise, joining these alive threads does not remove them from list of threads that top is giving. Is this expected behaviour?
That suggests the thread(s) are still running. Calling join() on a thread doesn't have any impact on that running thread; the calling thread simply waits for the called-on thread to exit.
found out the loop inside Threadpool destructor never moved further than first join
That means the first thread hasn't completed yet. So none of the other threads have been joined yet either (even if they have exited).
However, if the thread function is implemented correctly, the first thread (and all the other threads in the pool) should eventually complete and the join() calls should return (assuming the threads in the pool are supposed to exit - this doesn't need to be true in general; depending on the application, you could simply make the threads run forever too).
So it appears there's some sort of deadlock or wait for some resource that's holding up one or more threads. So you need to run it through a debugger.
Helgrind would be very useful.
You could also try to reduce the number of threads (say, 2) to see if the problem becomes reproducible/obvious, and then increase the number of threads.
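For the reduced example in the question specifically, the threads can never complete because the wait predicate always returns false. A hedged sketch of the usual fix - a stop flag checked by the predicate, set and notified before joining - might look like this (the globals here stand in for whatever members the real TThreadpool has):

#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable my_cond_variable;
bool stop = false;                       // guarded by mutex

void my_thread() {
    std::unique_lock<std::mutex> lg(mutex);
    // Spurious wakeups keep looping inside wait(); the thread only
    // proceeds (and can then return) once stop has been set.
    my_cond_variable.wait(lg, [] { return stop; });
    // clean up and return, so the destructor's join() can complete
}

// In ~TThreadpool(), before the join loop:
//     { std::lock_guard<std::mutex> lk(mutex); stop = true; }
//     my_cond_variable.notify_all();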

How to cleanly exit a threaded C++ program?

I am creating multiple threads in my program. On pressing Ctrl-C, a signal handler is called. Inside the signal handler, I have put exit(0) at the end. The thing is that sometimes the program terminates safely, but at other times I get a runtime error stating
abort() has been called
So what would be the possible solution to avoid the error?
The usual way is to set an atomic flag (like std::atomic<bool>) which is checked by all threads (including the main thread). If set, then the sub-threads exit, and the main thread starts to join the sub-threads. Then you can exit cleanly.
If you use std::thread for the threads, that's a possible reason for the crashes you have. You must join the thread before the std::thread object is destructed.
Others have mentioned having the signal-handler set a std::atomic<bool> and having all the other threads periodically check that value to know when to exit.
That approach works well as long as all of your other threads are periodically waking up anyway, at a reasonable frequency.
It's not entirely satisfactory if one or more of your threads is purely event-driven, however -- in an event-driven program, threads are only supposed to wake up when there is some work for them to do, which means that they might well be asleep for days or weeks at a time. If they are forced to wake up every (so many) milliseconds simply to poll an atomic-boolean-flag, that makes an otherwise extremely CPU-efficient program much less CPU-efficient, since now every thread is waking up at short regular intervals, 24/7/365. This can be particularly problematic if you are trying to conserve battery life, as it can prevent the CPU from going into power-saving mode.
An alternative approach that avoids polling would be this one:
On startup, have your main thread create an fd-pipe or socket-pair (by calling pipe() or socketpair())
Have your main thread (or possibly some other responsible thread) include the receiving-socket in its read-ready select() fd_set (or take a similar action for poll() or whatever wait-for-IO function that thread blocks in)
When the signal-handler is executed, have it write a byte (any byte, doesn't matter what) into the sending-socket.
That will cause the main thread's select() call to immediately return, with FD_ISSET(receivingSocket) indicating true because of the received byte
At that point, your main thread knows it is time for the process to exit, so it can start directing all of its child threads to start shutting down (via whatever mechanism is convenient; atomic booleans or pipes or something else)
After telling all the child threads to start shutting down, the main thread should then call join() on each child thread, so that it can be guaranteed that all of the child threads are actually gone before main() returns. (This is necessary because otherwise there is a risk of a race condition -- e.g. the post-main() cleanup code might occasionally free a resource while a still-executing child thread was still using it, leading to a crash)
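A rough sketch of that approach (POSIX-specific; error handling omitted, and where exactly you block in select() depends on your program's structure):

#include <csignal>
#include <sys/select.h>
#include <unistd.h>

static int wakeup_pipe[2];   // [0] = read end, [1] = write end

extern "C" void on_sigint(int) {
    // write() is async-signal-safe, so this is all the handler does
    const char byte = 'x';
    (void)write(wakeup_pipe[1], &byte, 1);
}

int main() {
    pipe(wakeup_pipe);
    std::signal(SIGINT, on_sigint);

    // ... create the worker threads here ...

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(wakeup_pipe[0], &readfds);
    select(wakeup_pipe[0] + 1, &readfds, NULL, NULL, NULL);   // blocks until the handler writes

    // Time to exit: tell each worker thread to shut down, then join() them all,
    // and only then return from main().
    return 0;
}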
The first thing you must accept is that threading is hard.
A "program using threading" is about as generic as a "program using memory", and your question is similar to "how do I not corrupt memory in a program using memory?"
The way you handle threading problem is to restrict how you use threads and the behavior of the threads.
If your threading system is a bunch of small operations composed into a data flow network, with an implicit guarantee that if an operation is too big it is broken down into smaller operations and/or does checkpoints with the system, then shutting down looks very different than if you have a thread that loads an external DLL that then runs it for somewhere from 1 second to 10 hours to infinite length.
Like most things in C++, solving your problem is going to be about ownership, control and (as a last resort) hacks.
Like data in C++, every thread should be owned. The owner of a thread should have significant control over that thread, and be able to tell it that the application is shutting down. The shut down mechanism should be robust and tested, and ideally connected to other mechanisms (like early-abort of speculative tasks).
The fact you are calling exit(0) is a bad sign. It implies your main thread of execution doesn't have a clean shutdown path. Start there; the interrupt handler should signal the main thread that shutdown should begin, and then your main thread should shut down gracefully. All stack frames should unwind, data should be cleaned up, etc.
Then the same kind of logic that permits that clean and fast shutdown should also be applied to your threaded off code.
Anyone telling you it is as simple as a condition variable/atomic boolean and polling is selling you a bill of goods. That will only work in simple cases if you are lucky, and determining if it works reliably is going to be quite hard.
In addition to Some programmer dude's answer, and related to the discussion in the comment section: you need to make the flag that controls termination of your threads an atomic type.
Consider the following case:
bool done = false;

void pending_thread()
{
    while (!done)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // do something that depends on the worker thread's results
}

void worker_thread()
{
    // do something for the pending thread
    done = true;
}
Here the worker thread can also be your main thread, and done is the terminating flag of your thread, but the pending thread needs to do something with the data produced by the worker thread before exiting.
This example has a race condition and undefined behaviour along with it, and it's really hard to find the actual problem in the real world.
Now the corrected version using std::atomic:
std::atomic<bool> done(false);

void pending_thread()
{
    while (!done.load())
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // do something that depends on the worker thread's results
}

void worker_thread()
{
    // do something for the pending thread
    done = true;
}
You can now exit the thread without being concerned about race conditions or UB.
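For completeness, a minimal self-contained usage sketch of the corrected version above (assuming the two functions are free functions and both threads are joined before exit):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> done(false);

void pending_thread() {
    while (!done.load()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // safe to use the worker thread's results here
}

void worker_thread() {
    // produce results for the pending thread
    done = true;
}

int main() {
    std::thread pending(pending_thread);
    std::thread worker(worker_thread);
    worker.join();
    pending.join();   // returns once pending_thread has observed done == true
    return 0;
}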

kill boost thread after n seconds

I am looking for the best way to solve the following (C++) problem. I have a function given by some framework which returns an object. Sometimes it takes just milliseconds, but on some occasions it takes minutes. So I want to stop the execution if it takes longer than, let's say, 2 seconds.
I was thinking about doing it with boost threads. Important side note: if the function returns faster than the 2 seconds, the program should not wait.
So I was thinking about 2 threads:
Thread 1: execute function a
Thread 2: run a timer
if (thread 2 exited before thread 1) kill thread 1
else do nothing
I am struggling a bit with the practical implementation. In particular:
how do I return an object from a child boost thread to the main thread?
how do I kill a thread in boost?
is my idea even a good one, or is there a better way to solve the problem in C++ (with or without boost)?
As for waiting, just use thread::timed_join() inside your main thread; this will return false if the thread didn't complete within the given time.
Killing the thread is not feasible if your third-party library is not aware of boost::thread. Also, you almost certainly don't want to 'kill' the thread without giving the function the possibility to clean up.
I'd suggest that you wait for, say, 2 seconds and then continue with some kind of error message, letting the framework function finish its work and just ignoring the result if it came too late.
As for returning a value, I'd suggest something like
struct myfunction {
    MyObj returnValue;
    void operator()() {
        // ...
        returnValue = theComputedReturnValue;
    }
};
// ...
myfunction f;
boost::thread t = boost::thread(boost::ref(f));
t.join(); // or t.timed_join()...
use(f.returnValue);
// ...
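If you go the timed_join() route with the two-second limit, it might look roughly like the sketch below; run_with_timeout is just an illustrative wrapper name, and note that newer Boost versions deprecate timed_join() in favour of try_join_for(boost::chrono::seconds(2)), so check your version.

#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void run_with_timeout() {
    myfunction f;
    boost::thread t(boost::ref(f));

    if (t.timed_join(boost::posix_time::seconds(2))) {
        use(f.returnValue);   // the framework call finished in time
    } else {
        // Give up: report a timeout and carry on. The framework call keeps
        // running in its thread and its late result is simply ignored.
    }
}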
I have done something similar in the past and it works (even though it's not ideal).
To get the return value, just "share" a variable (that could be just a pointer (initially null) to the returned value, or a full object with a state, etc.) and make your thread read/update it. Don't forget to mutex it if needed. That should be quite straightforward.
Expanding on what James said above: "kill a thread" is such a harsh term! :) But interruption is not so easy either. Typically with boost threads there needs to be an interruption point where the running thread can be interrupted. There is a set of these interruptible functions (unfortunately they are boost-specific), such as wait/sleep etc. One option you have is to liberally scatter interruption_point() calls in the first thread, so that when you call interrupt() once thread 2 dies, thread 1 will throw an exception at its next interruption_point().
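A rough sketch of that pattern, with illustrative function names (and assuming the long-running work is code you can actually edit, which is exactly the catch with a third-party framework call):

#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void long_running_work() {
    for (int step = 0; step < 1000000; ++step) {
        // Throws boost::thread_interrupted here if interrupt() was called
        boost::this_thread::interruption_point();
        // ... do one small chunk of work ...
    }
}

int main() {
    boost::thread worker(long_running_work);
    boost::this_thread::sleep(boost::posix_time::seconds(2));   // the "timer"
    worker.interrupt();   // takes effect at the worker's next interruption point
    worker.join();        // the interruption unwinds the worker, so join() returns
    return 0;
}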
Threads are in the same process space, thus you can have shared state between multiple threads as long as there is synchronized access to that shared state.
EDIT: just noticed that the OP has already looked into this... will leave the answer up anyway I guess...

Wait for a detached thread to finish in C++

How can I wait for a detached thread to finish in C++?
I don't care about an exit status, I just want to know whether or not the thread has finished.
I'm trying to provide a synchronous wrapper around an asynchronous third-party tool. The problem is a weird race-condition crash involving a callback. The progression is:
I call the thirdparty, and register a callback
when the thirdparty finishes, it notifies me using the callback -- in a detached thread I have no real control over.
I want the thread from (1) to wait until (2) is called.
I want to wrap this in a mechanism that provides a blocking call. So far, I have:
class Wait {
public:
    void callback() {
        pthread_mutex_lock(&m_mutex);
        m_done = true;
        pthread_cond_broadcast(&m_cond);
        pthread_mutex_unlock(&m_mutex);
    }

    void wait() {
        pthread_mutex_lock(&m_mutex);
        while (!m_done) {
            pthread_cond_wait(&m_cond, &m_mutex);
        }
        pthread_mutex_unlock(&m_mutex);
    }

private:
    pthread_mutex_t m_mutex;
    pthread_cond_t m_cond;
    bool m_done;
};
// elsewhere...
Wait waiter;
thirdparty_utility(&waiter);
waiter.wait();
As far as I can tell, this should work, and it usually does, but sometimes it crashes. As far as I can determine from the corefile, my guess as to the problem is this:
When the callback broadcasts the end of m_done, the wait thread wakes up
The wait thread is now done here, and Wait is destroyed. All of Wait's members are destroyed, including the mutex and cond.
The callback thread tries to continue from the broadcast point, but is now using memory that's been released, which results in memory corruption.
When the callback thread tries to return (above the level of my poor callback method), the program crashes (usually with a SIGSEGV, but I've seen SIGILL a couple of times).
I've tried a lot of different mechanisms to try to fix this, but none of them solve the problem. I still see occasional crashes.
EDIT: More details:
This is part of a massively multithreaded application, so creating a static Wait isn't practical.
I ran a test, creating Wait on the heap, and deliberately leaking the memory (i.e. the Wait objects are never deallocated), and that resulted in no crashes. So I'm sure it's a problem of Wait being deallocated too soon.
I've also tried a test with a sleep(5) after the unlock in wait, and that also produced no crashes. I hate to rely on a kludge like that though.
EDIT: ThirdParty details:
I didn't think this was relevant at first, but the more I think about it, the more I think it's the real problem:
The thirdparty stuff I mentioned, and why I have no control over the thread: this is using CORBA.
So, it's possible that CORBA is holding onto a reference to my object longer than intended.
Yes, I believe that what you're describing is happening (race condition on deallocate). One quick way to fix this is to create a static instance of Wait, one that won't get destroyed. This will work as long as you don't need to have more than one waiter at the same time.
You will also permanently use that memory; it will not be deallocated. But it doesn't look like that's too bad.
The main issue is that it's hard to coordinate lifetimes of your thread communication constructs between threads: you will always need at least one leftover communication construct to communicate when it is safe to destroy (at least in languages without garbage collection, like C++).
EDIT:
See comments for some ideas about refcounting with a global mutex.
To the best of my knowledge there's no portable way to directly ask a thread if it's done running (i.e. no pthread_ function for it). What you are doing is the right way to do it, at least as far as having a condition that you signal. If you are seeing crashes that you are sure are due to the Wait object being deallocated when the thread that creates it quits (and not some other subtle locking issue -- all too common), then the issue is that you need to make sure the Wait isn't being deallocated, by managing it from a thread other than the one that does the notification. Put it in global memory or dynamically allocate it and share it with that thread. Most simply, don't have the thread being waited on own the memory for the Wait; have the thread doing the waiting own it.
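One hedged sketch of the "dynamically allocate it and share it" idea, using shared ownership so the Wait only goes away after the callback thread has also let go of it. This assumes the third-party/CORBA side can be made to hold a shared_ptr (std::shared_ptr here; boost::shared_ptr would do pre-C++11) rather than the raw pointer it takes now, which may not be possible with your callback interface:

#include <memory>

// elsewhere... (replacing the stack-allocated waiter)
std::shared_ptr<Wait> waiter = std::make_shared<Wait>();
thirdparty_utility(waiter);   // hypothetical overload taking a shared_ptr copy
waiter->wait();
// Dropping `waiter` here does not destroy the Wait while the callback
// thread is still inside callback(); destruction happens only when the
// last shared_ptr copy (held by the callback side) goes away.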
Are you initializing and destroying the mutex and condition var properly?
Wait::Wait()
{
    pthread_mutex_init(&m_mutex, NULL);
    pthread_cond_init(&m_cond, NULL);
    m_done = false;
}

Wait::~Wait()
{
    assert(m_done);
    pthread_mutex_destroy(&m_mutex);
    pthread_cond_destroy(&m_cond);
}
Make sure that you aren't prematurely destroying the Wait object -- if it gets destroyed in one thread while the other thread still needs it, you'll get a race condition that will likely result in a segfault. I'd recommend making it a global static variable that gets constructed on program initialization (before main()) and gets destroyed on program exit.
If your assumption is correct, then the third-party module appears to be buggy and you need to come up with some kind of hack to make your application work.
A static Wait is not feasible. How about a Wait pool (it may even grow on demand)? Is your application using a thread pool to run?
There will still be a chance that the same Wait gets reused while the third-party module is still using it, but you can minimize that chance by properly queuing vacant Waits in your pool.
Disclaimer: I am in no way an expert in thread safety, so consider this post as a suggestion from a layman.

C++ Thread question - setting a value to indicate the thread has finished

Is the following safe?
I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program.
Using the boost libraries I have written code something like this:
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value.
I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread)
So is this okay, or am I missing something and need to use locks, mutexes, etc.?
You never mentioned the type of finished_flag...
If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.
If, on the other hand, it's a thread-safe type, like a CEvent in MFC (or an equivalent in boost), then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.
Instead of using a member variable to signal that the thread is done, why not use a condition? You are already are using the boost libraries, and condition is part of the thread library.
Check it out. It allows the worker thread to 'signal' that it has finished, and the main thread can check during execution if the condition has been signaled and then do whatever it needs to do with the completed work. There are examples in the link.
As a general rule I would never make the assumption that a resource will only be modified by the thread. You might know what it is for, but someone else might not - causing no end of grief, as the main thread thinks that the work is done and tries to access data that is not correct! It might even delete it while the worker thread is still using it, causing the app to crash. Using a condition will help with this.
Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait a specified amount of time for the thread to 'join' (join means that the thread has finished).
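A minimal sketch of the condition approach (names are illustrative, not from the asker's code):

#include <boost/thread.hpp>

boost::mutex mtx;
boost::condition_variable cond;
bool finished = false;   // guarded by mtx

void worker() {
    // ... the time-consuming work ...
    {
        boost::lock_guard<boost::mutex> lock(mtx);
        finished = true;
    }
    cond.notify_one();   // wake the main thread
}

void wait_for_worker() {   // called from the main thread
    boost::unique_lock<boost::mutex> lock(mtx);
    while (!finished) {
        cond.wait(lock);   // releases mtx while sleeping, reacquires on wakeup
    }
}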
I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed.
The easiest way to do this is to use boost::thread::join
// launch the thread...
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
// ... do other things maybe ...
// wait for the thread to complete
thrd->join();
If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of data after checking the flag. The CPU can issue reads and writes out of order (x86 usually doesn't, but PPC definitely does), and there is nothing in C++98/03 that allows the compiler to generate code to order memory accesses appropriately.
Herb Sutter's Effective Concurrency series has an extremely in-depth look at how the C++ world intersects the multicore/multiprocessor world.
Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing.
For example, consider a program that loads a dynamic library (pseudocode):
lib = loadLibrary("someLibrary");
fun = getFunction("someFunction");
fun();
unloadLibrary(lib);
And let's suppose that this library uses your thread:
void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    while (!finished_flag) { // ignore the polling loop, it's beside the point
        sleep();
    }
    delete thrd;
}

void myclass::mymethod() {
    // do stuff
    finished_flag = true;
}
When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes.
The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.
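In terms of the pseudocode above, the fix is roughly this sketch (the polling loop becomes unnecessary once you join):

void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    thrd->join();    // blocks until mymethod() has fully returned to the OS,
                     // so unloading the library afterwards is safe
    delete thrd;
}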