I have a question about multithreading in C++. I have a scenario as follows:
void ThreadedRead(int32_t thread_num, BinReader reader) {
    while (!reader.endOfData) {
        thread_buckets[thread_num].clear();
        thread_buckets[thread_num] = reader.readnextbatch();
        thread_flags[thread_num] = THREAD_WAITING;
        while (thread_flags[thread_num] != THREAD_RUNNING) {
            // wait until awakened
            if (thread_flags[thread_num] != THREAD_RUNNING) {
                // go back to sleep
            }
        }
    }
    thread_flags[thread_num] = THREAD_FINISHED;
}
No section of the above code writes to or accesses memory shared between threads. Each thread is assigned a thread_num and a unique reader object that it may use to read data.
I want the main thread to be able to notify a thread in the THREAD_WAITING state that its state has been changed back to THREAD_RUNNING and that it needs to do some work. I don't want it to keep polling its state.
I understand condition variables and mutexes can help me, but I'm not sure how to use them, because I don't want to acquire a lock and don't need one. How can the main thread blanket-notify all waiting threads that they are now free to read more data?
EDIT:
Just in case anyone needs more details:
1) reader reads some files
2) thread_buckets is a vector of vectors of uint16
3) thread_flags is an int vector
They have all been resized appropriately.
I realize that you wrote that you wanted to avoid condition variables and locks. On the other hand, you mentioned that this was because you were not sure how to use them. Please consider the following example to get the job done without polling:
The trick with condition variables is that a single condition_variable object together with a single mutex object will do the management for you, including the handling of the unique_lock objects in the worker threads. Since you tagged your question as C++ I assume you mean C++11 (or higher) multithreading (C pthreads work similarly). Your code could look as follows:
// compile for C++11 or higher
#include <thread>
#include <condition_variable>
#include <mutex>

// objects visible to both master and workers:
std::condition_variable cvr;
std::mutex mtx;

void ThreadedRead(int32_t thread_num, BinReader reader) {
    while (!reader.endOfData) {
        thread_buckets[thread_num].clear();
        thread_buckets[thread_num] = reader.readnextbatch();
        std::unique_lock<std::mutex> myLock(mtx);
        // This lock will be managed by the condition variable!
        thread_flags[thread_num] = THREAD_WAITING;
        while (thread_flags[thread_num] == THREAD_WAITING) {
            cvr.wait(myLock);
            // ...must be in a loop as shown because of potential spurious wake-ups
        }
    }
    thread_flags[thread_num] = THREAD_FINISHED;
}
To (re-)activate the workers from a master thread:
{ // block...
    // step 1: usually make sure that no worker is still preparing itself at the moment
    std::unique_lock<std::mutex> someLock(mtx);
    // (in your case this would not cover workers currently busy with reader.readnextbatch();
    // these would not be re-started this time...)
    // step 2: set all worker threads that should work now to THREAD_RUNNING
    for (auto& flag : thread_flags) {
        if (flag == THREAD_WAITING) { // or whatever criterion selects the workers to run
            flag = THREAD_RUNNING;
        }
    }
    // step 3: signal the workers to run now
    cvr.notify_all();
} // ...end of block, releasing someLock
Notice:
If you just want to trigger all sleeping workers, you could control them with a single flag instead of a container of flags (see the sketch after this list).
If you want to trigger single sleeping workers but it doesn't matter which one, consider the .notify_one() member function instead of .notify_all(). Note that in this case, too, a single mutex/condition_variable pair is sufficient.
The flags would be better placed in atomic objects, such as a global std::atomic<int> or, for finer control, a std::vector<std::atomic<int>>.
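A minimal sketch of that single-flag variant, reusing mtx and cvr from above (the flag name go is illustrative):

bool go = false; // guarded by mtx

// worker: sleep until the master raises the flag
std::unique_lock<std::mutex> lk(mtx);
cvr.wait(lk, []{ return go; });

// master: raise the flag under the lock, then wake everyone
{
    std::lock_guard<std::mutex> lk(mtx);
    go = true;
}
cvr.notify_all();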
A good introduction to std::condition_variable which also inspired the suggested solution is given in: cplusplus website
It looks like there are a few issues. For one thing, you do not need the conditional inside your loop:
while (thread_flags[thread_num] != THREAD_RUNNING);
works by itself: as soon as the condition is false, the loop exits.
If all you want is to avoid checking thread_flags as fast as the CPU allows, just put a short sleep in the loop:
while (thread_flags[thread_num] != THREAD_RUNNING)
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
This causes the thread to give up the CPU so that it can do other things while this thread waits for its state to change, and it makes the overhead of polling close to negligible. You can experiment with the sleep duration to find a good value; 100 ms is probably on the long side.
Depending on what causes the thread state to change, you could have the thread poll that condition/value directly (still with a sleep in the loop) and not bother with states at all.
There are a lot of options here. If you look up reader threads you can probably find just what you want; having a separate reader thread is very common.
Related
I apologise in advance if my question is a duplicate, but I was not able to find a satisfying answer to it.
I am dealing with the following (maybe silly) issue: I am trying to synchronise two threads (A and B), and I want to block thread A until a condition is set to true by thread B.
The "special" thing is that the condition is checked on a thread-safe object (for instance, let's say it is a std::atomic_bool).
My naive approach was the following:
// Shared atomic object
std::atomic_bool condition{false};
// Thread A
// ... does something
while(!condition.load()) ; // Do nothing
// Condition is met, proceed with the job
// Thread B
// ... does something
condition.store(true); // Unlock Thread A
but, as far as I understand, the while loop implies an active wait, which is undesirable.
So I thought about having a small sleep_for as the body of the while to reduce the frequency of the active wait. But then the issue becomes finding the right waiting time: one that does not waste time in case the condition unlocks while thread A is sleeping, and at the same time does not make the loop execute too often.
My feeling is that this depends very much on how long thread B takes before setting the condition to true, which may not be predictable.
Another solution I found in other SO topics is to use a condition variable, but that would require introducing a mutex that is not really needed.
I am perhaps overthinking the problem, but I'd like to know whether there are alternative "standard" solutions to follow (bearing in mind that I am limited to C++11), and what would be the best approach in general.
Many thanks in advance for the help.
Your use case is simple and there are many ways to implement it.
The first recommendation would be to use a condition variable, but it seems from your question that you would like to avoid that because of the mutex.
I don't have any profiling data for your use case, but a mutex isn't costly here.
In a multi-threaded environment you will, at some point, need techniques to protect shared access and modification of data. You will probably need mutexes for that anyway.
So you could go for the condition variable approach. It is standard, and it also provides a function to notify all threads, should your use case scale in the future.
Also, since you mentioned "time", condition_variable comes with timed variants of its wait functions: it can wait_for a duration or wait_until a point in time.
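For illustration, a minimal sketch of such a timed wait (the flag name and the 100 ms timeout are just examples):

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false; // guarded by m

bool wait_up_to_100ms() {
    std::unique_lock<std::mutex> lk(m);
    // returns true if the predicate became true within the timeout,
    // false if the wait timed out
    return cv.wait_for(lk, std::chrono::milliseconds(100), []{ return ready; });
}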
About the while loop with a sleep_for body: blocking a thread from execution and then rescheduling it isn't that cheap if we are counting in milliseconds. The condition variable approach is better suited here than a while loop with an explicit sleep_for.
Sorry, condition variables are the way to go here.
The mutex is being used as a part of the condition variable, not as a traditional mutex. And barring some strange priority inversion situation, it shouldn't have much cost.
Here is a simple "farm gate". It starts shut, and can be opened. Once opened, it can never be shut again.
#include <condition_variable>
#include <mutex>

struct gate {
    void open_gate() {
        auto l = lock();
        gate_is_open = true;
        cv.notify_all();
    }
    void wait_on_gate() const {
        auto l = lock();
        cv.wait(l, [&]{ return gate_is_open; });
    }
private:
    std::unique_lock<std::mutex> lock() const { return std::unique_lock<std::mutex>(m); }
    mutable std::mutex m; // mutable, so the const wait_on_gate() can lock it
    bool gate_is_open = false;
    mutable std::condition_variable cv; // mutable for the same reason
};
which you'd use like this:
// Shared gate
gate condition;
// Thread A
// ... does something
condition.wait_on_gate(); // Do nothing
// Condition is met, proceed with the job
// Thread B
// ... does something
condition.open_gate(); // Unlock Thread A
and there we have it.
In C++20 there is std::latch. Start the counter at 1, decrement it when the gate opens, and the other thread waits on the latch.
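A minimal sketch of that C++20 alternative (out of reach under a C++11 constraint, but shown for completeness):

#include <latch>

std::latch gate_latch(1); // counter starts at 1

// Thread A
gate_latch.wait();        // blocks until the counter reaches zero

// Thread B
gate_latch.count_down();  // "opens the gate"; a latch cannot be reset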
How about using some sort of sentinel value to check whether thread B's condition has become true, unlocking thread A and synchronizing the two once the condition is met?
I am trying to incorporate threads into my project, but have a problem where using merely 1 worker thread makes it "fall asleep" permanently. Perhaps I have a race condition, but just can't notice it.
My PeriodicThreads object maintains a collection of threads. Once PeriodicThreads::exec_threads() has been invoked, the threads are notified, wake up and perform their task. Afterwards, they fall back to sleep.
Function of such a worker-thread:
void PeriodicThreads::threadWork(size_t threadId){
    // not really used, but needed to declare a condition_variable wait:
    std::mutex mutex;
    std::unique_lock<std::mutex> lck(mutex);
    while (true){
        // wait until told to start working on a task:
        while (_thread_shouldWork[threadId] == false){
            _threads_startSignal.wait(lck);
        }
        thread_iteration(threadId); // virtual function
        _thread_shouldWork[threadId] = false; // vector of flags
        _thread_doneSignal.notify_all();
    } // end while(true) - run until terminated externally or this whole obj is deleted
}
As you can see, each thread monitors its own entry in a vector of flags; once it sees that its flag is true, it performs the task and then resets the flag.
Here is the function that can awaken all the threads:
std::atomic_bool _threadsWorking = false;

// blocks the current thread until all worker threads have completed:
void PeriodicThreads::exec_threads(){
    if (_threadsWorking){
        throw std::runtime_error("you requested exec_threads(), but threads haven't yet finished executing the previous task!");
    }
    _threadsWorking = true; // NOTICE: doing this after the exception check.

    // tell all threads to unpause by setting their flags to 'true'
    std::fill(_thread_shouldWork.begin(), _thread_shouldWork.end(), true);
    _threads_startSignal.notify_all();

    // wait for threads to complete:
    std::mutex mutex;
    std::unique_lock<std::mutex> lck(mutex); // lock & mutex are not really used.
    auto isContinueWaiting = [&]() -> bool {
        bool threadsWorking = false;
        for (size_t i = 0; i < _thread_shouldWork.size(); ++i){
            threadsWorking |= _thread_shouldWork[i];
        }
        return threadsWorking;
    };
    while (isContinueWaiting()){
        _thread_doneSignal.wait(lck);
    }
    _threadsWorking = false; // set atomic to false
}
Invoking exec_threads() works fine for several hundred, or in rare cases several thousand, consecutive iterations. Invocations occur from the main thread's while loop. The worker thread processes the task, resets its flag, and goes back to sleep until the next exec_threads(), and so on.
However, some time after that, the program snaps into a "hibernation": it seems to pause, but doesn't crash.
During such a "hibernation", putting a breakpoint at any while loop around my condition_variables never actually causes that breakpoint to trigger.
Being sneaky, I've created my own verify-thread (parallel to main) to monitor my PeriodicThreads object. As the program falls into hibernation, my verify-thread keeps telling me via the console that no threads are currently running (the _threadsWorking atomic of PeriodicThreads is permanently set to false). In other tests, however, the atomic remains true once the "hibernation" issue begins.
The strange thing is that if I force the worker thread to sleep for at least 10 microseconds before resetting its flag, things work as normal and no "hibernation" occurs. Otherwise, letting a thread complete its task very quickly seems to cause the whole issue.
I've wrapped each condition_variable wait inside a while loop to prevent spurious wakeups from triggering the transition, and to handle the situation where notify_all is called before .wait() is called. Link
Notice that this occurs even when I have only 1 worker thread.
What could be the cause?
Edit
Abandoning the vector of flags and testing with just a single atomic_bool and 1 worker thread still shows the same issue.
All shared data should be protected by a mutex. The mutex should have (at least) the same scope as the shared data.
Your _thread_shouldWork container is shared data. You can make a global array of mutexes, each one protecting its own _thread_shouldWork element (see the note below). You should also have at least as many condition variables as you have mutexes. (You can use one mutex with several different condition variables, but you should not use several different mutexes with one condition variable.)
A condition_variable should protect an actual condition (in this case, the state of an individual element of _thread_shouldWork at any given point) and the mutex is used to protect the variables that encompass that condition.
If you're just using a random local mutex (as you are in your thread code), or not using a mutex at all (as in the main code), then all bets are off: it's undefined behavior. Although I could see it working (by luck) most of the time. What I suspect is happening is that a worker thread is missing the signal from the main thread. It could also be that your main thread is missing the signal from a worker thread. (Thread A reads the state and decides to wait, then Thread B changes the state and sends the notification, then Thread A goes to sleep... waiting for a notification that was already sent.)
Mutexes with local scope are a red flag!
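A minimal sketch of the corrected structure, assuming a single mutex member _mtx guards the whole _thread_shouldWork vector (per-element mutexes would work too, as described above):

std::mutex _mtx; // shared; guards _thread_shouldWork
std::condition_variable _threads_startSignal;
std::condition_variable _thread_doneSignal;

void PeriodicThreads::threadWork(size_t threadId){
    while (true){
        std::unique_lock<std::mutex> lck(_mtx);
        _threads_startSignal.wait(lck, [&]{ return _thread_shouldWork[threadId]; });
        lck.unlock(); // run the task without holding the lock
        thread_iteration(threadId);
        lck.lock();
        _thread_shouldWork[threadId] = false;
        _thread_doneSignal.notify_all(); // exec_threads() re-checks under the same mutex
    }
}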
Note: If you're using a vector, you have to watch out, because adding or removing items can trigger a resize, which touches elements without grabbing the mutex first (the vector, of course, doesn't know about your mutex).
You also have to watch out for false sharing when using arrays
Edit: Here's a video that #Kari found useful for explaining false sharing
https://www.youtube.com/watch?v=dznxqe1Uk3E
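A common mitigation for false sharing is to pad each flag so it sits on its own cache line; a sketch (64 bytes is a typical x86 cache-line size, but not universal):

#include <atomic>

// Each flag gets its own cache line, so a write by one thread does not
// invalidate the line holding another thread's flag.
struct alignas(64) PaddedFlag {
    std::atomic<bool> value{false};
};

PaddedFlag flags[10];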
From a multithreading perspective, is the following correct or incorrect?
I have an app which has 2 threads: the main thread, and a worker thread.
The main thread has a MainUpdate() function that gets called in a continuous loop. As part of its job, that MainUpdate() function might call a ToggleActive() method on the worker objects running on the worker thread. That ToggleActive() method is used to turn the worker objects on/off.
The flow is something like this.
// MainThread
while (true) {
    MainUpdate(...);
}

void MainUpdate(...) {
    for (auto& obj : objectsInWorkerThread) {
        if (foo())
            obj.ToggleActive(getBool());
    }
}

// Worker thread example worker ------------------------------
struct SomeWorkerObject {
    void Execute(...) {
        if (mIsActive == false) // %%%%%%% THIS!
            return;
        Update(...);
    }
    void ToggleActive(bool active) {
        mIsActiveAtom = active;    // %%%%%%% THIS!
        mIsActive = mIsActiveAtom; // %%%%%%% THIS!
    }
private:
    void Update(...) {...}
    std::atomic_bool mIsActiveAtom = true;
    volatile bool mIsActive = true;
};
I'm trying to avoid checking the atomic field on every invocation of Execute(), which is called on every iteration of the worker thread. Many worker objects are running at any one time, so there would be many atomic-field checks.
As you can see, I'm using the non-atomic field to check for activeness. The value of the non-atomic field gets its value from the atomic field in ToggleActive().
From my tests, this seems to be working, but I have a feeling that it is incorrect.
volatile only guarantees that accesses are not optimized out or reordered by the compiler; it has nothing to do with multi-threaded execution. Therefore, your program does have a race condition, since ToggleActive and Execute can modify/read mIsActive at the same time.
About performance: check whether your platform supports lock-free atomic bools. If so, checking the atomic value can be very fast. I remember seeing a benchmark somewhere showing std::atomic<bool> at the same speed as volatile bool.
#hgminh is right, your code is not safe.
Synchronization is a two-way road: if one thread performs a thread-safe write, another thread must perform a thread-safe read. If one thread uses a lock, the other thread must use the same lock.
Think about inter-thread communication as message passing (incidentally, that is exactly how it works in modern CPUs). If both sides don't share a messaging channel (mIsActiveAtom), the message might not be delivered properly.
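A minimal corrected sketch (dropping the volatile mirror entirely; the acquire/release orders shown are one reasonable choice, not the only one):

#include <atomic>

struct SomeWorkerObject {
    void Execute() {
        // A lock-free atomic load is typically just a plain load; on x86
        // an acquire load carries no extra hardware cost.
        if (!mIsActive.load(std::memory_order_acquire))
            return;
        Update();
    }
    void ToggleActive(bool active) {
        mIsActive.store(active, std::memory_order_release);
    }
private:
    void Update() { /* ... */ }
    std::atomic<bool> mIsActive{true};
};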
I am trying to execute a piece of code at fixed time intervals. I have something based on naked pthreads, and now I want to do the same using std::thread.
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <iostream>

bool running;
std::mutex mutex;
std::condition_variable cond;

void timer(){
    while (running) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::lock_guard<std::mutex> guard(mutex);
        cond.notify_one();
    }
    cond.notify_one();
}

void worker(){
    while (running){
        std::unique_lock<std::mutex> mlock(mutex);
        cond.wait(mlock);
        std::cout << "Hello World" << std::endl;
        //... do something that takes a variable amount of time ...//
    }
}

int main(){
    running = true;
    auto t_work = std::thread(worker);
    auto t_time = std::thread(timer);
    std::this_thread::sleep_for(std::chrono::milliseconds(10000));
    running = false;
    t_time.join();
    t_work.join();
}
The worker in reality does something that takes a variable amount of time, but it should be scheduled at fixed intervals. It seems to work, but I am pretty new to this, so some things aren't clear to me...
Why do I need a mutex at all? I do not really use a condition, but whenever the timer sends a signal, the worker should do its job.
Does the timer really need to call cond.notify_one() again after the loop? This was taken from the older code, and if I recall correctly the reasoning was to prevent the worker from waiting forever in case the timer finishes while the worker is still waiting.
Do I need the running flag, or is there a nicer way to break out of the loops?
PS: I know that there are other ways to ensure a fixed time interval, and I know that there are some problems with my current approach (e.g. if the worker needs more time than the interval used by the timer). However, I would like to first understand this piece of code before changing it too much.
Why do I need a mutex at all? I do not really use a condition, but whenever the timer sends a signal, the worker should do its job.
The reason you need a mutex is that a thread waiting on a condition variable can be subject to spurious wakeups. To make sure your thread actually received a notification and the condition really is satisfied, you need to check the condition, and should do so with a predicate (e.g. a lambda) inside the wait call. And to guarantee that the variable is not modified between a wakeup and your check, you must hold the mutex, so that your thread is the only one that can observe or modify the condition at that point. In your case that means you need to give the worker thread an actual way to verify that the timer ran out.
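For example, a minimal sketch of such a condition, reusing mutex and cond from the question (a tick counter is just one illustrative choice):

// shared state, all guarded by `mutex`:
bool running = true;
unsigned ticks = 0; // incremented once per timer interval

void timer(){
    while (true) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::lock_guard<std::mutex> guard(mutex);
        if (!running) break;
        ++ticks;
        cond.notify_one();
    }
}

void worker(){
    std::unique_lock<std::mutex> mlock(mutex);
    unsigned seen = ticks;
    while (true) {
        // the predicate makes spurious wakeups harmless:
        cond.wait(mlock, [&]{ return ticks != seen || !running; });
        if (!running) break;
        seen = ticks;
        std::cout << "Hello World" << std::endl;
    }
}

// to stop: lock `mutex`, set running = false, then call cond.notify_all()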
Does the timer really need to call cond.notify_one() again after the loop? This was taken from the older code and iirc the reasoning is to prevent the worker to wait forever, in case the timer finishes while the worker is still waiting.
If you don't call notify after the loop, the worker thread will wait indefinitely. To cleanly exit your program you should actually call notify_all(), to make sure every thread waiting on the condition variable wakes up and can terminate cleanly.
Do I need the running flag, or is there a nicer way to break out of the loops?
A running flag is the cleanest way to accomplish what you want.
Let's first check the background concepts.
Critical Section
First of all, a mutex is needed to mutually exclude access to a critical section. Usually a critical section is considered to be a shared resource, e.g. a queue, some I/O (e.g. a socket), etc. In plain words, a mutex is used to guard a shared resource against a race condition, which could bring the resource into an undefined state.
Example: Producer / Consumer Problem
A queue contains work items to be done. There might be multiple threads that put work items into the queue (i.e. produce items => producer threads) and multiple threads that consume these items and do something useful with them (=> consumer threads).
Put and consume operations modify the queue (especially its storage and internal representation). Thus, when running either a put or a consume operation, we want to exclude other operations from doing the same. This is where the mutex comes into play: in a very basic constellation only one thread (no matter whether producer or consumer) can acquire the mutex, i.e. lock it. There exist higher-level locking primitives to increase throughput depending on the usage scenario (e.g. reader/writer locks).
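A minimal sketch of such a mutex-guarded queue in C++11 (a blocking consume is one of several possible designs):

#include <condition_variable>
#include <mutex>
#include <queue>

std::queue<int> items;
std::mutex items_mtx;
std::condition_variable items_cv;

void produce(int item) {
    {
        std::lock_guard<std::mutex> lk(items_mtx);
        items.push(item);
    } // unlock before notifying, so the consumer doesn't wake up just to block
    items_cv.notify_one();
}

int consume() {
    std::unique_lock<std::mutex> lk(items_mtx);
    items_cv.wait(lk, []{ return !items.empty(); });
    int item = items.front();
    items.pop();
    return item;
}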
Concept of Condition Variables
condition_variable::notify_one wakes up one currently waiting thread. At least one thread has to be waiting on this variable:
If no thread is waiting on the variable, the posted event is lost.
If there was a waiting thread, it wakes up and starts running as soon as it can lock the mutex associated with the condition variable. So if the thread that issued the notify_one or notify_all call does not give up the mutex (e.g. via mutex::unlock() or condition_variable::wait()), the woken thread(s) will not run.
In the timer() thread the mutex is unlocked after the notify_one() call, because the scope ends and the guard object is destroyed (its destructor implicitly calls mutex::unlock()).
Problems with this approach
Cancellation and Variable Caching
Compilers are allowed to cache the values of variables. Thus setting running to false in main might have no visible effect, because the worker and timer threads may keep using a cached value. To avoid that, declare running as volatile or, better, std::atomic<bool>.
worker Thread
You point out that worker needs to run at some time interval and that a run may take a varying amount of time. The timer thread can only run after the worker thread has finished an iteration: these two threads always run as one linear chunk and have no critical section. So why do you need another thread just to measure time? Why not simply sleep in the worker after the task execution until the desired time has elapsed? As it turns out, only std::cout is a shared resource, and currently it is used from one thread only. Otherwise, you'd need a mutex (without a condition variable) to guard the writes to cout.
#include <thread>
#include <atomic>
#include <iostream>
#include <chrono>

std::atomic<bool> running{false};

void worker(){
    while (running){
        auto start_point = std::chrono::system_clock::now();
        std::cout << "Hello World" << std::endl;
        //... do something that takes a variable amount of time ...//
        std::this_thread::sleep_until(start_point + std::chrono::milliseconds(1000));
    }
}

int main(){
    running = true;
    auto t_work = std::thread(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(10000));
    running = false;
    t_work.join();
}
Note: with sleep_until, if the task itself takes longer than 1000 ms past start_point, the call returns immediately and the next iteration starts right away; the sleep compensates for the task's duration instead of adding to it.
I have 10 threads that are supposed to be waiting for a signal.
Until now I've simply done sleep(3), and that has been working fine, but is there a more reliable way to make sure that all threads have been created and are indeed waiting?
I made the following construction where, in a critical region before the wait, I increment a counter telling how many threads are waiting. But then I need an additional mutex and condition variable for signalling back to main that all threads have been created, which seems overly complex.
Am I missing some basic thread design pattern?
Thanks
edit: fixed types
edit: clarifying information below
A barrier won't work in this case, because I'm not interested in letting my threads wait until all threads are ready; that already happens with the cond_wait.
I'm interested in letting the main function know when all threads are ready and waiting.
// mutex and condition variable to signal from main to threads to do work
mutex_t mutex_for_cond;
cond_t cond;

// mutex and condition variable to signal back from threads to main that threads are ready
mutex_t mutex_for_back_cond;
cond_t back_cond;

int nThreads = 0; // thread-safe by using mutex_for_cond

void *thread(){
    mutex_lock(mutex_for_cond);
    nThreads++;
    if (nThreads == 10){
        mutex_lock(mutex_for_back_cond);
        cond_signal(back_cond);
        mutex_unlock(mutex_for_back_cond);
    }
    while(1){
        cond_wait(cond, mutex_for_cond);
        if (spurious)
            continue;
        else
            break;
    }
    mutex_unlock(mutex_for_cond);
    // do work on non-critical-region data
}
int main(){
    for (int i = 0; i < 10; i++)
        create_threads;
    while(1){
        mutex_lock(mutex_for_back_cond);
        cond_wait(back_cond, mutex_for_back_cond);
        mutex_unlock(mutex_for_back_cond);
        mutex_lock(mutex_for_cond);
        if (nThreads == 10){
            break;
        } else {
            // spurious wakeup
            mutex_unlock(mutex_for_cond);
        }
    }
    // now all threads are waiting
    // mutex_for_cond is still locked, so broadcast
    cond_broadcast(cond); // was a typo here
}
Am I missing some basic thread design pattern?
Yes. For every condition, there should be a variable that is protected by the accompanying mutex. Only changes of this variable are signalled on the condition variable.
You check the variable in a loop, waiting on the condition:
mutex_lock(mutex_for_back_cond);
while (ready_threads < 10)
    cond_wait(back_cond, mutex_for_back_cond);
mutex_unlock(mutex_for_back_cond);
Additionally, what you are trying to build is a thread barrier. It is often pre-implemented in threading libraries, like pthread_barrier_wait.
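A sketch with POSIX barriers (assuming all 10 workers plus the main thread rendezvous; error checking omitted):

#include <pthread.h>

pthread_barrier_t bar;

void *thread(void *arg){
    pthread_barrier_wait(&bar); // blocks until all 11 participants have arrived
    // do work on non-critical-region data
    return NULL;
}

int main(){
    pthread_barrier_init(&bar, NULL, 11); // 10 workers + main
    // ... create the 10 threads ...
    pthread_barrier_wait(&bar); // returns once every thread is ready
    // now all threads have checked in
    pthread_barrier_destroy(&bar);
}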
Sensible threading APIs have a barrier construct which does precisely this.
For example, with boost::thread, you would create a barrier like this:
boost::barrier bar(10); // a barrier for 10 threads
and then each thread would wait on the barrier:
bar.wait();
the barrier waits until the specified number of threads are waiting for it, and then releases them all at once. In other words, once all ten threads have been created and are ready, it'll allow them all to proceed.
That's the simple, and sane, way of doing it. Threading APIs which do not have a barrier construct require you to do it the hard way, not unlike what you're doing now.
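If your API has no barrier, a hand-rolled single-use barrier in C++11 is fairly small; a sketch:

#include <condition_variable>
#include <mutex>

class OneShotBarrier {
public:
    explicit OneShotBarrier(unsigned count) : remaining(count) {}
    void wait() {
        std::unique_lock<std::mutex> lk(m);
        if (--remaining == 0)
            cv.notify_all(); // the last arrival releases everyone
        else
            cv.wait(lk, [&]{ return remaining == 0; });
    }
private:
    std::mutex m;
    std::condition_variable cv;
    unsigned remaining;
};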
You should associate some variable that contains the 'event state' with the condition variable. The main thread sets the event state variable appropriately just before issuing the broadcast. The threads that are interested in the event check the event state variable regardless of whether they've blocked on the condition variable or not.
With this pattern, the main thread doesn't need to know the precise state of the worker threads: it just sets the event when it needs to, then broadcasts the condition. Any waiting threads are unblocked, and any threads not yet waiting will never block on the condition variable, because they'll see that the event has already occurred before waiting on it. Something like the following pseudocode:
// mutex and condition variable to signal from main to threads to do work
pthread_mutex_t mutex_for_cond;
pthread_cond_t cond;
int event_occurred = 0;

void *thread()
{
    pthread_mutex_lock(&mutex_for_cond);
    while (!event_occurred) {
        pthread_cond_wait(&cond, &mutex_for_cond);
    }
    pthread_mutex_unlock(&mutex_for_cond);
    // do work on non-critical-region data
}

int main()
{
    pthread_mutex_init(&mutex_for_cond, ...);
    pthread_cond_init(&cond, ...);
    for (int i = 0; i < 10; i++)
        create_threads(...);

    // do whatever needs to be done to set up the work for the threads

    // now let the threads know they can do their work (whether or not
    // they've gotten to the "wait point" yet)
    pthread_mutex_lock(&mutex_for_cond);
    event_occurred = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&mutex_for_cond);
}