Boost thread - Out of scope possibility - c++

I was curious about the accuracy of the following code
for (int i = 0; i < 5; i++)
{
    SomeClass* ptrinst = new SomeClass();
    boost::thread t(boost::bind(&SomeClass::SomeMethod, ptrinst));
    // ......
}
What would happen to the running thread when t goes out of scope?

Since the main thread does not call t.join(), the main thread will continue to run its loop, spawning additional threads, and then continue onwards. So the answer is: under your current code, the child threads will not interact with your parent thread (at least not directly).
Also note that the thread class is a strange beast: the only thing that happens when t falls out of scope is that your main thread no longer has a handle on which to call t.join(). The fact that it falls out of scope in the parent thread has zero impact on the child thread. Once you spawn your child thread by instantiating it, the child is essentially decoupled from the parent (well, the globals and dynamically allocated memory that were visible in the parent are also visible to the child, but you will need mutexes if you want to modify those globals). As I mention later in this post, you need to gain a solid understanding of memory visibility and ownership within a threading context; just reading my comments here probably will not get you there.
If you want the main thread to wait on the completion of the child threads, you need to store those threads in a std::vector<boost::thread> v; outside of your loop and then in a second loop, call join on all those instances.
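A sketch of that (assuming the SomeClass/SomeMethod names from the question and a C++11 compiler, since boost::thread is move-only; the explicit cleanup at the end is my addition):
#include <vector>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

std::vector<SomeClass*> instances;
std::vector<boost::thread> v;

for (int i = 0; i < 5; i++)
{
    SomeClass* ptrinst = new SomeClass();
    instances.push_back(ptrinst);
    v.push_back(boost::thread(boost::bind(&SomeClass::SomeMethod, ptrinst)));
}

for (std::size_t i = 0; i < v.size(); ++i)
{
    v[i].join();          // wait for each child thread to finish
    delete instances[i];  // only now is it safe to free the instance
}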
Your current code looks a bit suspect as you are invoking an instance method through bind. That's fine, but I wouldn't normally expect that instance method to call delete this;, which means it's up to the parent thread to clean up (and the parent thread shouldn't clean up until the child threads are done). However, there is no way for it to clean up at the right time without some kind of thread synchronization, so a memory leak or some kind of nasty race condition is almost assured. (Suppose you put a delete ptrinst; in the ... portion of your main thread in an attempt to clean up: without some kind of synchronization, you may delete the pointer before the child thread is done using it.)
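One common way to sidestep that ownership problem (a sketch, not from the original answer, reusing the question's SomeClass/SomeMethod names) is to bind a shared_ptr instead of a raw pointer, so whichever side finishes last releases the object:
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

for (int i = 0; i < 5; i++)
{
    boost::shared_ptr<SomeClass> inst(new SomeClass());
    // the bound copy of the shared_ptr keeps *inst alive until SomeMethod
    // returns, after which the instance is deleted automatically
    boost::thread(boost::bind(&SomeClass::SomeMethod, inst)).detach();
}
Note that this only fixes the lifetime of the instances; it still gives you no way to wait for the threads, which is what the vector-of-threads approach above provides.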
Also, you may want to use std::thread and std::bind in place of the boost versions.
One last note: I suspect you are still experimenting with the use of threads. If so, it may be a good idea to read up and experiment a lot more with simpler examples before you try to fix this code. Otherwise, you may be setting yourself up for a world of hurt (debugging hell, including race conditions, weird memory synchronization issues, etc.).
Try to build a more solid understanding of what happens with memory and threads: what memory is visible to what threads and what memory can and cannot be shared.

Related

Why must one call join() or detach() before thread destruction?

I don't understand why, when an std::thread is destructed, it must already have been joined or detached.
Join waits for the thread to finish, and detach doesn't.
It seems that there is some middle state which I'm not understanding.
Because my understanding is that join and detach are complementary: if I don't call join(), then detach() is the default.
Put it this way: let's say you're writing a program that creates a thread and only later in the life of this thread do you call join(). Up until you call join(), the thread was basically running as if it were detached, no?
Logically, detach() should be the default behavior for threads, because that is the definition of what threads are: they execute in parallel, independently of other threads.
So when the thread object gets destructed why is terminate() called? Why can't the standard simply treat the thread as being detached?
I'm not understanding the rationale behind terminating a program when neither join() nor detach() was called before the thread was destructed. What is the purpose of this?
UPDATE:
I recently came across this. Anthony Williams states in his book, Concurrency In Action, "One of the proposals for C++17 was for a joining_thread class that would be similar to std::thread, except that it would automatically join in the destructor much like scoped_thread does. This didn’t get consensus in the committee, so it wasn’t accepted into the standard (though it’s still on track for C++20 as std::jthread)..."
Technically the answer is "because the spec says so" but that is an obtuse answer. We can't read the designers' minds, but here are some issues that may have contributed:
With POSIX pthreads, child threads must be joined after they have exited, or else they continue to occupy system resources (like a process table entry in the kernel). This is done via pthread_join().
Windows has a somewhat analogous issue if the process holds a HANDLE to the child thread; although Windows doesn't require a full join, the process must still call CloseHandle() to release its refcount on the thread.
Since std::thread is a cross-platform abstraction, it's constrained by the POSIX model, which requires the join.
In theory the std::thread destructor could have called pthread_join() instead of calling std::terminate(), but that (subjectively) may increase the risk of deadlock, whereas a properly written program knows where it can safely insert the join.
See also:
https://en.wikipedia.org/wiki/Zombie_process
https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessa
https://learn.microsoft.com/en-us/windows/win32/procthread/terminating-a-process
You're getting confused because you're conflating the std::thread object with the thread of execution it refers to. A std::thread object is a C++ object (a bunch of bytes in memory) that acts as a reference to a thread of execution. When you call std::thread::detach what happens is that the std::thread object is "detached" from the thread of execution -- it no longer refers to (any) thread of execution, and the thread of execution continues running independently. But the std::thread object still exists, until it is destroyed.
When a thread of execution completes, it stores its exit info into the std::thread object that refers to it, if there is one (If it was detached, then there isn't one, so the exit info is just thrown away.) It has no other effect on the std::thread object -- in particular the std::thread object is not destroyed and continues to exist until someone else destroys it.
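A tiny sketch of that distinction (my example, not from the original answer):
#include <cassert>
#include <thread>

int main() {
    std::thread t([] { /* some work */ });
    assert(t.joinable());   // t currently refers to a thread of execution

    t.detach();             // the execution carries on independently
    assert(!t.joinable());  // but the std::thread object still exists,
                            // until it goes out of scope at the end of main
}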
You might want a thread to completely clean up after itself when it's done leaving no traces. This would mean that you could start a thread and then forget about it.
But you might also want to be able to manage a thread while it was running and get any return value it had provided when it was done. In this case, if a thread cleaned up after itself when it was done, your attempt to manage it could cause a crash because you would be accessing a handle that might be invalid. And to check for the return value when the thread finishes, the return value has to be stored somewhere, which means the thread can't be fully cleaned up because the place where the return value is stored has to be left around.
In most frameworks, by default, you get the second option. You can manage the thread (by interrupting it, sending signals to it, joining it, or whatever) but it can't clean up after itself. If you prefer the first option, there's a function to get that behavior (detach) but that means that you may not be able to access the thread because it may or may not continue to exist.
When a thread handle for an active thread goes out of scope you have a few options:
join
detach
kill thread
kill program
Each one of these options is terrible. No matter which one you pick it will be surprising, confusing and not what you wanted in most situations.
Arguably the joining thread you mentioned already exists in the form of std::async, which gives you a std::future that blocks until the created thread is done, effectively doing an implicit join. But the many questions about why
std::async(std::launch::async, f);
g();
does not run f and g concurrently indicate how confusing that is. The best approach I'm aware of is to define it to be a programming error and have the programmer fix it, so an assert would be most appropriate. Unfortunately the standard went with std::terminate instead.
If you really want a detaching thread just write a little wrapper around std::thread that does if (thread.joinable()) thread.detach(); in its destructor or whichever handler you want.
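For example (a minimal sketch, not a standard class; the name is illustrative):
#include <thread>
#include <utility>

// A std::thread wrapper that detaches on destruction instead of letting
// ~thread() call std::terminate().
class detaching_thread {
public:
    template <typename F, typename... Args>
    explicit detaching_thread(F&& f, Args&&... args)
        : t_(std::forward<F>(f), std::forward<Args>(args)...) {}

    ~detaching_thread() {
        if (t_.joinable())
            t_.detach();   // or join(), if that is the policy you prefer
    }

    std::thread& get() { return t_; }

private:
    std::thread t_;
};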
Question: "So when the thread object gets destructed why is terminate() called? Why can't the standard simply treat the thread as being detached?"
Answer: Yes, I agree that it terminates the program badly, but such a design has its reasons. Without the std::terminate() mechanism in the destructor std::thread::~thread, if the user really wanted to join() but for some reason the join didn't execute (e.g. an exception was thrown first), the new thread would keep running in the background, just like the detach() behavior. This might cause undefined behavior, because a detached thread was not what the user originally intended.
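That exception case is exactly what a joining RAII wrapper, like the scoped_thread/jthread idea mentioned in the question's update, guards against. A minimal sketch (my code, modeled on that idea):
#include <stdexcept>
#include <thread>
#include <utility>

class scoped_thread {
public:
    explicit scoped_thread(std::thread t) : t_(std::move(t)) {
        if (!t_.joinable())
            throw std::logic_error("no thread");
    }
    ~scoped_thread() {
        t_.join();   // runs on normal return and during stack unwinding alike
    }
    scoped_thread(const scoped_thread&) = delete;
    scoped_thread& operator=(const scoped_thread&) = delete;

private:
    std::thread t_;
};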

Why do I need to explicitly detach a short term variable?

Let's say I have a small operation which I want to perform in a separate thread. I do not need to know when it completes, nor do I need to wait for its completion, but I do not want the operation blocking my current thread. When I write the following code, I will get a crash:
void myFunction() {
    // do other stuff
    std::thread([]()
    {
        // do thread stuff
    });
}
This crash is solved by assigning the thread to a variable, and detaching it:
void myFunction() {
    // do other stuff
    std::thread t([]()
    {
        // do thread stuff
    });
    t.detach();
}
Why is this step necessary? Or is there a better way to create a small single-use thread?
Because the std::thread::~thread() specification says so:
A thread object does not have an associated thread (and is safe to destroy) after
it was default-constructed
it was moved from
join() has been called
detach() has been called
It looks like detach() is the only one of these that makes sense in your case, unless you want to return the thread object (by moving) to the caller.
Why is this step necessary?
Consider that the thread object represents a long-running "thread" of execution (a lightweight process or kernel schedulable entity or similar).
Allowing you to destroy the object while the thread is still executing, leaves you no way to subsequently join (and find the result of) that thread. This may be a logical error, but it can also make it hard even to correctly exit your program.
Or is there a better way to create a small single-use thread?
Not obviously, but it's frequently better to use a thread pool for running tasks in the background, instead of starting and stopping lots of short-lived threads.
You might be able to use std::async() instead, but the future it returns may block in the destructor in some circumstances, if you try to discard it.
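For example (a sketch of the caveat just mentioned):
#include <future>

void myFunction() {
    // The temporary std::future returned by std::async is destroyed at the
    // end of the full expression, and a future obtained from std::async
    // blocks in its destructor until the task finishes -- so this is not
    // fire-and-forget.
    std::async(std::launch::async, []()
    {
        // do thread stuff
    });
    // execution only gets here after the task has completed
}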
See the documentation of the destructor of std::thread:
If *this has an associated thread (joinable() == true), std::terminate() is called.
You should explicitly say that you don't care what's going to happen with the thread, and that you're OK with losing any control over it. And that is what detach is for.
In general, this looks like a design problem so crashing makes sense: it's hard to propose a general and not surprising rule about what should happen in such a case (e.g. your program might as well normally end its execution - what should happen with the thread?).
Basically, your use case requires a call to detach() because your use case is pretty weird, and not what C++ is trying to make easy.
While Java and .Net blithely let you toss away a Thread object whose associated thread is still running, in the C++ model the Thread is closer to being the thread, in the sense that the existence of the Thread object coincides with the lifetime, or at least joinability, of the execution it refers to. Note how it's not possible to create a Thread without starting it (except in the case of the default constructor, which is really just there in the service of move semantics), or to copy it or to make one from a thread id. C++ wants Thread to outlive the thread.
Maintaining that condition has various benefits. Final cleanup of a thread's control data doesn't have to be done automagically by the OS, because once a Thread goes away, nothing can ever try to join it. It's easier to ensure that variables with thread storage get destroyed in time, since the main thread is the last to exit (barring some move shenanigans). And a missing join -- which is an extremely common type of bug -- gets properly flagged at runtime.
Letting some thread wander off into the distance, in contrast, is allowed, but it's an unusual thing to do. Unless it's interacting with your other threads through sync objects, there's no way to ensure it's done whatever it was meant to do. A detached thread is on the level of reinterpret_cast: You're allowed to tell the compiler that you know something it doesn't, but that has to be explicit, not just the consequence of the function you didn't call.
Consider this: thread A creates thread B, and thread A leaves its scope of execution. The handle for thread B is about to be lost. What should happen now? There are several possibilities, the most obvious being:
Thread B is detached and continues its execution independently
Thread A waits for (joins) thread B before quitting its own scope
Now you can argue which is better: 1 or 2? How should we (the compiler) decide on which one of these is better?
So what the designers did was something different: terminate the program so that the developer picks one of these solutions explicitly, in order to avoid implicit (perhaps unwanted) behavior. It's a signal for you: "hey, pay attention now, this piece of code is important and I (the compiler) don't want to decide for you".

Concurrent code without waiting

I'm thinking about a certain kind of synchronisation primitive, but I don't know what this kind of synchronisation is called or whether something like this would even work.
So there is one variable (a boolean) which basically signals whether one thread is still working on a block of memory or not. At the beginning the bool is set to false, meaning the worker thread is not working on the block of memory. Now the main thread gives the worker thread a "todo list" describing how it should work on that block of memory. After that, it sets the boolean to true, so the worker thread knows it is now allowed to do its work. The main thread can then continue its own work and checks at certain points whether the worker thread is done, i.e. whether the boolean has been set back to false. If it is still true, the main thread just continues its own work and doesn't wait for the worker thread. If the boolean is false, the main thread knows the worker thread is done and starts processing the block of memory.
So the boolean just transfers the ownership of a block of memory between two threads. If one thread does not currently have ownership of that memory, it just continues with its own work and checks repeatedly whether it has the ownership again. This way, neither thread waits for the other, and each can continue its own work.
What is this called and how is such a behavior implemented?
EDIT: Basically it's a mutex. But instead of waiting for the mutex to be unlocked again, it continues/skips the critical code.
EDIT: Basically it's a mutex. But instead of waiting for the mutex to be unlocked again, it continues/skips the critical code.
It's still a mutex, just with "try" methods.
In standard C++, that is std::mutex::try_lock, which tries to lock the mutex; if it fails, it returns false and moves on:
#include <mutex>

// small RAII helper that unlocks an already-locked mutex on scope exit
class unlocker {
    std::mutex& m_Parent;
public:
    unlocker(std::mutex& parent) : m_Parent(parent) {}
    ~unlocker() { m_Parent.unlock(); }
};

std::mutex mtx;

if (mtx.try_lock()) {
    unlocker unlock(mtx); // or std::lock_guard<std::mutex> with std::adopt_lock
    // success, mtx is ours until the end of this block
} else {
    // do something else
}
With native OS APIs you have similar functions, depending on the operating system you are on: pthread_mutex_trylock on Unix and TryEnterCriticalSection on Windows. Needless to say, std::mutex probably uses these functions behind the scenes.
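A rough POSIX equivalent (a sketch; error handling omitted):
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void try_do_work()
{
    if (pthread_mutex_trylock(&m) == 0) {
        // got the lock: work on the shared data
        pthread_mutex_unlock(&m);
    } else {
        // typically EBUSY: someone else holds it, do something else
    }
}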
What will you do if the main thread runs out of work?
Suppose you keep checking and you keep reading true. Eventually you reach a point where the main thread cannot continue without the result from the worker thread. Since you have no more work to do, the only thing left is now keep checking the value of the flag over and over, wasting CPU resources that other threads could use to do useful work.
In general, this is not what you want. Instead, you would like the operating system to put your main thread to sleep and only wake it up once the worker thread has finished processing. All kinds of locks and semaphores that ship with modern operating systems work this way. Underneath there is some flag in memory that indicates who owns the lock, but there is also a bunch of logic around it that ensure the operating system won't schedule threads that have nothing to do but wait for a lock to become ready.
That being said, there are some situations where this is not what you want. If you are sufficiently sure that you won't run into the situation where one thread just spins on a lock, and you want to save the overhead that comes with the OS locks, just checking a flag like you described might be a viable option.
Note though that low-level stuff like this should be reserved for special circumstances, not be the first tool in your toolbox. It's just too easy to end up with an algorithm that is incorrect or an implementation that is not as efficient as you thought. If you decide to go down this road, be prepared to do some serious work to get it working as expected.
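If you do go down that road, the flag at least needs to be a std::atomic so the handoff between the two threads is well defined. A minimal sketch of the ownership handoff described in the question (the names are mine):
#include <atomic>

std::atomic<bool> worker_owns_buffer{false};
char buffer[1024];   // the block of memory whose ownership is handed back and forth

// main thread: hand the buffer over to the worker
void hand_off_to_worker() {
    // ... fill in the "todo list" / buffer contents ...
    worker_owns_buffer.store(true, std::memory_order_release);
}

// main thread: poll without blocking; true means the buffer is ours again
bool worker_is_done() {
    return !worker_owns_buffer.load(std::memory_order_acquire);
}

// worker thread: do the work, then hand the buffer back
void worker_step() {
    if (worker_owns_buffer.load(std::memory_order_acquire)) {
        // ... work on buffer ...
        worker_owns_buffer.store(false, std::memory_order_release);
    }
}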

Wait for a detached thread to finish in C++

How can I wait for a detached thread to finish in C++?
I don't care about an exit status, I just want to know whether or not the thread has finished.
I'm trying to provide a synchronous wrapper around an asynchronous third-party tool. The problem is a weird race-condition crash involving a callback. The progression is:
I call the third-party tool and register a callback
When the third-party tool finishes, it notifies me using the callback -- in a detached thread I have no real control over.
I want the thread from (1) to wait until (2) is called.
I want to wrap this in a mechanism that provides a blocking call. So far, I have:
class Wait {
public:
    void callback() {
        pthread_mutex_lock(&m_mutex);
        m_done = true;
        pthread_cond_broadcast(&m_cond);
        pthread_mutex_unlock(&m_mutex);
    }

    void wait() {
        pthread_mutex_lock(&m_mutex);
        while (!m_done) {
            pthread_cond_wait(&m_cond, &m_mutex);
        }
        pthread_mutex_unlock(&m_mutex);
    }

private:
    pthread_mutex_t m_mutex;
    pthread_cond_t m_cond;
    bool m_done;
};
// elsewhere...
Wait waiter;
thirdparty_utility(&waiter);
waiter.wait();
As far as I can tell, this should work, and it usually does, but sometimes it crashes. As far as I can determine from the corefile, my guess as to the problem is this:
When the callback broadcasts the end of m_done, the wait thread wakes up
The wait thread is now done here, and Wait is destroyed. All of Wait's members are destroyed, including the mutex and cond.
The callback thread tries to continue from the broadcast point, but is now using memory that's been released, which results in memory corruption.
When the callback thread tries to return (above the level of my poor callback method), the program crashes (usually with a SIGSEGV, but I've seen SIGILL a couple of times).
I've tried a lot of different mechanisms to try to fix this, but none of them solve the problem. I still see occasional crashes.
EDIT: More details:
This is part of a massively multithreaded application, so creating a static Wait isn't practical.
I ran a test, creating Wait on the heap, and deliberately leaking the memory (i.e. the Wait objects are never deallocated), and that resulted in no crashes. So I'm sure it's a problem of Wait being deallocated too soon.
I've also tried a test with a sleep(5) after the unlock in wait, and that also produced no crashes. I hate to rely on a kludge like that though.
EDIT: ThirdParty details:
I didn't think this was relevant at first, but the more I think about it, the more I think it's the real problem:
The thirdparty stuff I mentioned, and why I have no control over the thread: this is using CORBA.
So, it's possible that CORBA is holding onto a reference to my object longer than intended.
Yes, I believe that what you're describing is happening (race condition on deallocate). One quick way to fix this is to create a static instance of Wait, one that won't get destroyed. This will work as long as you don't need to have more than one waiter at the same time.
You will also permanently use that memory (it will never be deallocated), but that doesn't look too bad here.
The main issue is that it's hard to coordinate lifetimes of your thread communication constructs between threads: you will always need at least one leftover communication construct to communicate when it is safe to destroy (at least in languages without garbage collection, like C++).
EDIT:
See comments for some ideas about refcounting with a global mutex.
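Since those comments aren't reproduced here, here is a minimal sketch of the refcounting-with-a-global-mutex idea (the SharedWait name and the details are mine, not from the post): both threads share one heap-allocated object, and whichever thread releases it last deletes it, so neither side can free the mutex and condition variable while the other is still using them.
#include <pthread.h>

static pthread_mutex_t g_ref_mutex = PTHREAD_MUTEX_INITIALIZER;

class SharedWait {
public:
    SharedWait() : m_done(false), m_refs(2) {        // one reference per thread
        pthread_mutex_init(&m_mutex, NULL);
        pthread_cond_init(&m_cond, NULL);
    }
    void callback() {                                // runs on the third-party thread
        pthread_mutex_lock(&m_mutex);
        m_done = true;
        pthread_cond_broadcast(&m_cond);
        pthread_mutex_unlock(&m_mutex);
        release();                                   // drop the callback's reference
    }
    void wait() {                                    // runs on the waiting thread
        pthread_mutex_lock(&m_mutex);
        while (!m_done)
            pthread_cond_wait(&m_cond, &m_mutex);
        pthread_mutex_unlock(&m_mutex);
        release();                                   // drop the waiter's reference
    }
private:
    void release() {
        pthread_mutex_lock(&g_ref_mutex);
        bool last = (--m_refs == 0);
        pthread_mutex_unlock(&g_ref_mutex);
        if (last) delete this;                       // only the last user frees it
    }
    ~SharedWait() {                                  // heap-only: created with new
        pthread_mutex_destroy(&m_mutex);
        pthread_cond_destroy(&m_cond);
    }
    pthread_mutex_t m_mutex;
    pthread_cond_t m_cond;
    bool m_done;
    int m_refs;
};

// usage sketch:
//   SharedWait* w = new SharedWait;
//   thirdparty_utility(w);
//   w->wait();   // do not touch w afterwards; release() may already have freed it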
To the best of my knowledge there's no portable way to directly ask a thread whether it's done running (i.e. no pthread_ function for it). What you are doing is the right way to do it, at least as far as having a condition that you signal. If you are seeing crashes that you are sure are due to the Wait object being deallocated when the thread that creates it quits (and not some other subtle locking issue, which is all too common), then you need to make sure the Wait isn't deallocated too early, by managing it from a thread other than the one that does the notification. Put it in global memory, or dynamically allocate it and share it with that thread. Most simply, don't have the thread being waited on own the memory for the Wait; have the thread doing the waiting own it.
Are you initializing and destroying the mutex and condition var properly?
Wait::Wait()
{
    pthread_mutex_init(&m_mutex, NULL);
    pthread_cond_init(&m_cond, NULL);
    m_done = false;
}

Wait::~Wait()
{
    assert(m_done);
    pthread_mutex_destroy(&m_mutex);
    pthread_cond_destroy(&m_cond);
}
Make sure that you aren't prematurely destroying the Wait object -- if it gets destroyed in one thread while the other thread still needs it, you'll get a race condition that will likely result in a segfault. I'd recommend making it a global static variable that gets constructed on program initialization (before main()) and gets destroyed on program exit.
If your assumption is correct, then the third-party module appears to be buggy and you need to come up with some kind of hack to make your application work.
A static Wait is not feasible. How about a pool of Wait objects (it may even grow on demand)? Is your application using a thread pool to run?
There will still be a chance that the same Wait is reused while the third-party module is still using it, but you can minimize that chance by properly queueing vacant Waits in your pool.
Disclaimer: I am in no way an expert in thread safety, so consider this post as a suggestion from a layman.

C++ Thread question - setting a value to indicate the thread has finished

Is the following safe?
I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program.
Using the boost libraries I have written code something like this:
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value.
I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread)
So is this okay, or am I missing something and need to use locks, mutexes, etc.?
You never mentioned the type of finished_flag...
If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.
If, on the other hand, it's a thread-safe type, like a CEvent in MFC (or its equivalent in boost), then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.
Instead of using a member variable to signal that the thread is done, why not use a condition? You are already using the boost libraries, and condition is part of the thread library.
Check it out. It allows the worker thread to 'signal' that it has finished, and the main thread can check during execution if the condition has been signaled and then do whatever it needs to do with the completed work. There are examples in the link.
As a general rule I would never assume that a resource will only be modified by one thread. You might know what it is for, but someone else might not, causing no end of grief as the main thread thinks the work is done and tries to access data that is not correct! It might even delete it while the worker thread is still using it, causing the app to crash. Using a condition will help with this.
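A minimal sketch of that idea using the std:: equivalents (which mirror the boost API; the names here are illustrative):
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool done = false;

void worker() {
    // ... time-consuming work ...
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();                          // signal that the work has finished
}

void run() {
    std::thread t(worker);
    // ... do other things ...
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return done; });   // block until the worker signals
    }
    t.join();
}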
Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait a specified amount of time for the thread to 'join' (join meaning that the thread has finished).
I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed.
The easiest way to do this is to use boost::thread::join
// launch the thread...
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
// ... do other things maybe ...
// wait for the thread to complete
thrd->join();
If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of data after checking the flag. The CPU can issue reads and writes out of order (x86 usually doesn't, but PPC definitely does), and there is nothing in pre-C++11 C++ (C++98/03) that allows the compiler to generate code to order memory accesses appropriately.
Herb Sutter's Effective Concurrency series has an extremely in depth look at how the C++ world intersects the multicore/multiprocessor world.
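(Since C++11 the language does define a memory model, and std::atomic provides exactly this ordering. A minimal sketch of the finished flag with well-defined semantics; the free-function form is mine, for brevity:)
#include <atomic>
#include <thread>

std::atomic<bool> finished_flag{false};

void mymethod() {
    // ... time-consuming work ...
    finished_flag.store(true, std::memory_order_release);   // publish completion
}

int main() {
    std::thread t(mymethod);
    while (!finished_flag.load(std::memory_order_acquire)) {
        // ... keep doing main-loop work ...
    }
    // the worker's writes are guaranteed to be visible here
    t.join();
}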
Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing.
For example, consider a program that loads a dynamic library (pseudocode):
lib = loadLibrary("someLibrary");
fun = getFunction("someFunction");
fun();
unloadLibrary(lib);
And let's suppose that this library uses your thread:
void someFunction() {
    volatile bool finished_flag = false;
    boost::thread* thrd = new boost::thread(
        boost::bind(&myclass::mymethod, this, &finished_flag));
    while (!finished_flag) { // ignore the polling loop, it's beside the point
        sleep();
    }
    delete thrd;
}

void myclass::mymethod(volatile bool* finished_flag) {
    // do stuff
    *finished_flag = true;
}
When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes.
The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.