How to fix a race condition in condition variable wait/notify - C++

The answer to this question is wrong, as it has a chance to deadlock: Condition Variable - Wait/Notify Race Condition
I have found no solution that avoids both the race condition and the deadlock.
Imagine we have two threads. The goal is as follows.
First scenario:
Thread 1 waits.
Thread 2 notifies.
Second scenario:
Thread 2 notifies.
Thread 1 should not wait and should continue normal execution.
How can this be implemented correctly without having a queue of notifies? I want this part of the code to run as fast as possible and to use a Boolean value instead of adding items to a queue. Also, there are only 2 threads, so a queue seems like overkill to me.
Pseudocode with the race condition:
Thread 1:
lock(x);
if (!signaled)
{
    unlock(x); // ********
    // still a small gap here; how to avoid it?
    cv.wait(); // forget spurious wakeups for the sake of simplicity
    signaled = false;
}
else // ********
    unlock(x);
Thread 2:
lock(x);
signaled = true;
cv.notify();
unlock(x);
Now, if you remove the two lines commented with ********, the race condition is solved, but a chance of deadlock is introduced: Thread 1 waits while owning the lock, and Thread 2 is stuck trying to lock x.

The source of your confusion is a misunderstanding of condition variables. Their inherent feature is that releasing the lock and entering wait mode is done atomically, i.e. nothing happens in between.
So no modification of the variable can happen before the thread enters wait mode, because the lock will not be released until then. This is a basic guarantee of any conforming condition_variable implementation; for example, here is an extract from the reference page for std::condition_variable:
http://en.cppreference.com/w/cpp/thread/condition_variable/wait
Atomically releases lock, blocks the current executing thread, and
adds it to the list of threads waiting on *this. The thread will be
unblocked when notify_all() or notify_one() is executed. It may also
be unblocked spuriously. When unblocked, regardless of the reason,
lock is reacquired and wait exits. If this function exits via
exception, lock is also reacquired. (until C++14)
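A minimal sketch of the corrected pattern, under the assumption that the question's lock/unlock/wait pseudocode maps onto std::mutex and std::condition_variable (the names x, signaled and cv are kept from the question). The predicate overload of wait both closes the gap and skips waiting when the notify already happened:

#include <condition_variable>
#include <mutex>

std::mutex x;
std::condition_variable cv;
bool signaled = false;

// Thread 1: waits only if the signal has not arrived yet.
void thread1() {
    std::unique_lock<std::mutex> lock(x);   // lock(x)
    cv.wait(lock, [] { return signaled; }); // atomically unlocks x and waits;
                                            // returns immediately if already signaled
    signaled = false;                       // consume the signal
}                                           // unlock(x) on scope exit

// Thread 2: sets the flag and notifies while holding the lock.
void thread2() {
    std::lock_guard<std::mutex> lock(x);
    signaled = true;
    cv.notify_one();
}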

Related

Worker thread permanently hibernates after executing too fast

I am trying to incorporate threads into my project, but have a problem where using merely 1 worker thread makes it "fall asleep" permanently. Perhaps I have a race condition, but I just can't spot it.
My PeriodicThreads object maintains a collection of threads. Once PeriodicThreads::exec_threads() has been invoked, the threads are notified, wake up and perform their task. Afterwards, they fall back to sleep.
Function of such a worker-thread:
void PeriodicThreads::threadWork(size_t threadId){
    // not really used, but needed in order to declare a lock for the condition_variable:
    std::mutex mutex;
    std::unique_lock<std::mutex> lck(mutex);
    while (true){
        // wait until told to start working on a task:
        while (_thread_shouldWork[threadId] == false){
            _threads_startSignal.wait(lck);
        }
        thread_iteration(threadId); // virtual function
        _thread_shouldWork[threadId] = false; // vector of flags
        _thread_doneSignal.notify_all();
    } // end while(true) - run until terminated externally or this whole obj is deleted
}
As you can see, each thread is monitoring its own entry in a vector of flags, and once it sees that its flag is true, it performs the task and then resets its flag.
Here is the function that can awaken all the threads:
std::atomic_bool _threadsWorking = false;

// blocks the current thread until all worker threads have completed:
void PeriodicThreads::exec_threads(){
    if (_threadsWorking){
        throw std::runtime_error("you requested exec_threads(), but threads haven't yet finished executing the previous task!");
    }
    _threadsWorking = true; // NOTICE: doing this after the exception check.
    // tell all threads to unpause by setting their flags to 'true'
    std::fill(_thread_shouldWork.begin(), _thread_shouldWork.end(), true);
    _threads_startSignal.notify_all();
    // wait for threads to complete:
    std::mutex mutex;
    std::unique_lock<std::mutex> lck(mutex); // lock & mutex are not really used.
    auto isContinueWaiting = [&]() -> bool {
        bool threadsWorking = false;
        for (size_t i = 0; i < _thread_shouldWork.size(); ++i){
            threadsWorking |= _thread_shouldWork[i];
        }
        return threadsWorking;
    };
    while (isContinueWaiting()){
        _thread_doneSignal.wait(lck);
    }
    _threadsWorking = false; // set atomic to false
}
Invoking exec_threads() works fine for several hundred, or in rare cases several thousand, consecutive iterations. Invocations occur from the main thread's while loop. The worker thread processes the task, resets its flag and goes back to sleep until the next exec_threads(), and so on.
However, some time after that, the program snaps into a "hibernation": it seems to pause, but doesn't crash.
During such a "hibernation", putting a breakpoint inside any of the while loops around my condition_variables never actually causes that breakpoint to trigger.
Being sneaky, I've created my own verify-thread (parallel to main) to monitor my PeriodicThreads object. As it falls into hibernation, my verify-thread keeps reporting to the console that no threads are currently running (the _threadsWorking atomic of PeriodicThreads is permanently set to false). In other tests, however, the atomic remains true once the "hibernation" issue begins.
The strange thing is that if I force the PeriodicThreads::run_thread to sleep for at least 10 microseconds before resetting its flag, things work as normal and no "hibernation" occurs. Otherwise, if we allow the thread to complete its task very quickly, it seems to cause this whole issue.
I've wrapped each condition_variable wait inside a while loop to prevent spurious wakeups from triggering the transition, and to handle the situation where notify_all is called before .wait() is called. Link
Notice that this occurs even when I have only 1 worker thread.
What could be the cause?
Edit
Abandoning these vector flags and just testing on a single atomic_bool with 1 worker thread still shows the same issue.
All shared data should be protected by a mutex. The mutex should have (at least) the same scope as the shared data.
Your _thread_shouldWork container is shared data. You can make a global array of mutexes and each one can protect its own _thread_shouldWork element. (see note below). You should also have at least as many condition variables as you have mutexes. (You can use 1 mutex with several different condition variables, but you should not use several different mutexes with 1 condition variable.)
A condition_variable should protect an actual condition (in this case, the state of an individual element of _thread_shouldWork at any given point) and the mutex is used to protect the variables that encompass that condition.
If you're just using a random local mutex (as you are in your thread code) or just not using a mutex at all (in the main code), then all bets are off. It's undefined behavior. Although I could see it working (by luck) most of the time. What I suspect is happening is that a worker thread is missing the signal from the main thread. It could also be that your main thread is missing the signal from a worker thread. (Thread A reads the state and enters the while loop, then Thread B changes the state and sends the notification, then Thread A goes to sleep... waiting for a notification that was already sent)
Mutexes with local scope are a red flag!
Note: If you're using a vector, you have to watch out because adding or removing items can trigger a resize which will touch elements without grabbing the mutex first (because of course the vector doesn't know about your mutex).
You also have to watch out for false sharing when using arrays.
Edit: Here's a video that @Kari found useful for explaining false sharing:
https://www.youtube.com/watch?v=dznxqe1Uk3E
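A minimal sketch of the fix, assuming a single shared mutex member (called _mutex here; it is an assumed addition, while the other names come from the question). The point is that the flag check and the sleep happen under the same mutex that the notifier holds when changing the flags:

// Worker: waits on the SAME mutex that guards _thread_shouldWork.
void PeriodicThreads::threadWork(size_t threadId){
    std::unique_lock<std::mutex> lck(_mutex);
    while (true){
        // wait() atomically unlocks _mutex, so a notify can no longer slip
        // between the flag check and the sleep:
        _threads_startSignal.wait(lck, [&]{ return _thread_shouldWork[threadId]; });
        thread_iteration(threadId);
        _thread_shouldWork[threadId] = false;
        _thread_doneSignal.notify_all();
    }
}

// exec_threads(): set the flags and wait under the same mutex.
void PeriodicThreads::exec_threads(){
    std::unique_lock<std::mutex> lck(_mutex);
    std::fill(_thread_shouldWork.begin(), _thread_shouldWork.end(), true);
    _threads_startSignal.notify_all();
    _thread_doneSignal.wait(lck, [&]{
        for (bool flag : _thread_shouldWork)
            if (flag) return false;
        return true; // all workers have reset their flags
    });
}

Holding the lock during thread_iteration serializes the workers; a real implementation would copy the flag, unlock, work and relock, but this is enough to show why the shared mutex cures the lost wakeup.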

unlock the mutex after condition_variable::notify_all() or before?

Looking at several videos and the documentation example, we unlock the mutex before calling notify_all(). Would it be better to instead unlock after the notify_all()?
The common way:
Inside the Notifier thread:
// prepare data for several worker-threads;
// and now, awaken the threads:
std::unique_lock<std::mutex> lock2(sharedMutex);
_threadsCanAwaken = true;
lock2.unlock();
_conditionVar.notify_all(); // awaken all the worker threads;
// wait until all threads completed;
// cleanup:
_threadsCanAwaken = false;
// prepare new batches once again, etc, etc
Inside one of the worker threads:
while (true){
    // wait for the next batch:
    std::unique_lock<std::mutex> lock1(sharedMutex);
    _conditionVar.wait(lock1, [](){ return _threadsCanAwaken; });
    lock1.unlock(); // let sibling worker-threads work on their part as well
    // perform the final task
    // signal the notifier that one more thread has completed;
    // loop back and wait until the next task
}
Notice how lock2 is unlocked before we notify the condition variable. Should we instead unlock it after the notify_all()?
Edit
From my comment below: my concern is, what if the worker spuriously awakes, sees that the mutex is unlocked, super-quickly completes the task and loops back to the start of the while? Now the slow-poke Notifier finally calls notify_all(), causing the worker to loop an additional time (excessive and undesired).
There are no advantages to unlocking the mutex before signaling the condition variable unless your implementation is unusual. There are two disadvantages to unlocking before signaling:
If you unlock before you signal, the signal may wake a thread that chose to block on the condition variable after you unlocked. This can lead to a deadlock if you use the same condition variable to signal more than one logical condition. This kind of bug is hard to create, hard to diagnose, and hard to understand. It is trivially avoided by always signaling before unlocking. This ensures that the change of shared state and the signal are an atomic operation and that race conditions and deadlocks are impossible.
There is a performance penalty for unlocking before signaling that is avoided by unlocking after signaling. If you signal before you unlock, a good implementation will know that your signal cannot possibly render any thread ready-to-run, because the mutex is held by the calling thread and any thread affected by the condition variable necessarily cannot make forward progress without the mutex. This permits a significant optimization (often called "wait morphing") that is not possible if you unlock first.
So signal while holding the lock unless you have some unusual reason to do otherwise.
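A minimal sketch of this recommendation, reusing the names from the question (sharedMutex, _threadsCanAwaken, _conditionVar):

{
    std::lock_guard<std::mutex> lock(sharedMutex);
    _threadsCanAwaken = true;
    _conditionVar.notify_all(); // signal while still holding the lock
}                               // unlock happens here, after the signal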
should we instead unlock it after the notify_all() ?
It is correct to do it either way, but you may see different behavior in different situations. It is quite difficult to predict how it will affect the performance of your program; I've seen both positive and negative effects in different applications. So it is better to profile your program and base the decision on your particular situation.
As mentioned here: cppreference.com
The notifying thread does not need to hold the lock on the same mutex
as the one held by the waiting thread(s); in fact doing so is a
pessimization, since the notified thread would immediately block
again, waiting for the notifying thread to release the lock.
That said, the documentation for wait says:
At the moment of blocking the thread, the function automatically calls
lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function
unblocks and calls lck.lock(), leaving lck in the same state as when
the function was called. Then the function returns (notice that this
last mutex locking may block again the thread before returning).
So, when notified, wait will re-attempt to acquire the lock, and in that process it can get blocked again until the notifying thread releases the lock.
So I suggest releasing the lock before calling notify, as done in the example on cppreference.com. And most importantly:
Don't be Pessimistic.
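A minimal sketch of this suggestion, again reusing the names from the question:

{
    std::lock_guard<std::mutex> lock(sharedMutex);
    _threadsCanAwaken = true;
}                           // release the lock first...
_conditionVar.notify_all(); // ...so a woken worker can grab it immediately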
David's answer seems wrong to me.
First, assuming the simple case of two threads, one waiting for the other on a condition variable, unlocking first in the notifier will not wake the waiting thread, as the signal has not yet arrived. The subsequent notify call will then immediately wake the waiting thread. You do not need any special optimizations.
On the other hand, signalling first has the potential of waking up a thread and putting it back to sleep immediately, as it cannot acquire the lock, unless wait morphing is implemented.
Wait morphing does not exist in Linux at least, according to the answer under this StackOverflow question: Which OS / platforms implement wait morphing optimization?
The cppreference example also unlocks first before signalling: https://en.cppreference.com/w/cpp/thread/condition_variable/notify_all
It explicitly says:
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s). Doing so may be a pessimization, since the notified thread would immediately block again, waiting for the notifying thread to release the lock, though some implementations recognize the pattern and do not attempt to wake up the thread that is notified under lock.
should we instead unlock it after the notify_all() ?
After reading several related posts, I've formed the opinion that it's purely a performance issue. If the OS supports "wait morphing", unlock after; otherwise, unlock before.
I'm adding an answer here to augment @DavidSchwartz's. Particularly, I'd like to clarify his point 1.
If you unlock before you signal, the signal may wake a thread that choose to block on the condition variable after you unlocked. This can lead to a deadlock if you use the same condition variable to signal more than one logical condition. This kind of bug is hard to create, hard to diagnose, and hard to understand. It is trivially avoided by always signaling before unlocking. This ensures that the change of shared state and the signal are an atomic operation and that race conditions and deadlocks are impossible.
The 1st thing I said is that, because it's a CV and not a mutex, a better term for the so-called "deadlock" might be "sleep paralysis": a mistake some programs make is that a thread that's supposed to wake went back to sleep, due to not rechecking the condition it had been waiting for before waiting again.
The 2nd thing is that, when waking some other thread(s), the default choice should be broadcast/notify_all (broadcast is the POSIX term, which is equivalent to its C++ counterpart); signal/notify is an optimized special case used when only 1 other thread is waiting.
Finally 3rd, David is adamant that it's better to unlock after notify, because it can avoid the "deadlock" which I've been referring to as "sleep paralysis".
If it's unlock then notify, there's a window where another thread (let's call it the "wrong" thread) may i) acquire the mutex, ii) go into wait, and iii) wake up. Steps i, ii and iii happen too quickly, consuming the signal and leaving the intended ("correct") thread asleep.
I discussed this extensively with David; he clarified that the "sleep paralysis" occurs only when all 3 points are violated (1. the condvar is associated with several separate conditions and/or the condition isn't rechecked before waiting again; 2. signal/notify is used when more than 1 other thread is using the condvar; 3. unlocking before notify creates a window for the race condition).
Finally, my recommendation is that points 1 and 2 are essential for the correctness of the program, and fixing issues associated with them should be prioritized over 3, which should only be an augmentative "last resort".
For reference, the manpages for signal/broadcast and wait contain some material from version 3 of the Single Unix Specification that explains points 1 and 2, and partly 3. Although specified for POSIX/Unix/Linux in C, its concepts are applicable to C++.
As of this writing (2023-01-31), the 2018 edition of version 4 of the Single Unix Specification has been released, and the drafting of version 5 is underway.

Use condition variable notify without locking mutex [duplicate]

I’m reading up on pthread.h; the condition variable related functions (like pthread_cond_wait(3)) require a mutex as an argument. Why? As far as I can tell, I’m going to be creating a mutex just to use as that argument? What is that mutex supposed to do?
It's just the way that condition variables are (or were originally) implemented.
The mutex is used to protect the condition variable itself. That's why you need it locked before you do a wait.
The wait will "atomically" unlock the mutex, allowing others access to the condition variable (for signalling). Then when the condition variable is signalled or broadcast to, one or more of the threads on the waiting list will be woken up and the mutex will be magically locked again for that thread.
You typically see the following pattern with condition variables, illustrating how they work. The example is a worker thread which is given work via a signal to a condition variable.
thread:
    initialise.
    lock mutex.
    while thread not told to stop working:
        wait on condvar using mutex.
        if work is available to be done:
            do the work.
    unlock mutex.
    clean up.
    exit thread.
The work is done within this loop provided that there is some available when the wait returns. When the thread has been flagged to stop doing work (usually by another thread setting the exit condition then kicking the condition variable to wake this thread up), the loop will exit, the mutex will be unlocked and this thread will exit.
The code above is a single-consumer model as the mutex remains locked while the work is being done. For a multi-consumer variation, you can use, as an example:
thread:
    initialise.
    lock mutex.
    while thread not told to stop working:
        wait on condvar using mutex.
        if work is available to be done:
            copy work to thread local storage.
            unlock mutex.
            do the work.
            lock mutex.
    unlock mutex.
    clean up.
    exit thread.
which allows other consumers to receive work while this one is doing work.
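A rough C-style rendition of that multi-consumer loop, under the assumption of a condvar/mutex pair guarding the work state (stop_working, work_t, work_available(), take_work() and do_work() are hypothetical names for illustration):

pthread_mutex_lock(&mutex);
while (!stop_working) {
    pthread_cond_wait(&condvar, &mutex); // releases mutex while sleeping
    if (work_available()) {
        work_t w = take_work();          // copy work to local storage
        pthread_mutex_unlock(&mutex);
        do_work(w);                      // work while unlocked, so siblings can run
        pthread_mutex_lock(&mutex);
    }
}
pthread_mutex_unlock(&mutex);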
The condition variable relieves you of the burden of polling some condition, instead allowing another thread to notify you when something needs to happen. Another thread can tell that thread that work is available as follows:
lock mutex.
flag work as available.
signal condition variable.
unlock mutex.
The vast majority of what are often erroneously called spurious wakeups was generally because multiple threads had been signalled within their pthread_cond_wait call (broadcast): one would return with the mutex, do the work, then re-wait.
Then the second signalled thread could come out when there was no work to be done. So you had to have an extra variable indicating that work should be done (this was inherently mutex-protected with the condvar/mutex pair here; other threads needed to lock the mutex before changing it, however).
It was technically possible for a thread to return from a condition wait without being kicked by another process (this is a genuine spurious wakeup) but, in all my many years working on pthreads, both in development/service of the code and as a user of them, I never once received one of these. Maybe that was just because HP had a decent implementation :-)
In any case, the same code that handled the erroneous case also handled genuine spurious wakeups as well since the work-available flag would not be set for those.
A condition variable would be quite limited if you could only signal a condition; usually you need to handle some data that's related to the condition that was signalled. Signalling and waking up have to be done atomically to achieve that without introducing race conditions or becoming overly complex.
pthreads can also give you, for rather technical reasons, a spurious wakeup. That means you need to check a predicate so you can be sure the condition actually was signalled, and distinguish that from a spurious wakeup. Checking such a condition in order to wait for it needs to be guarded, so a condition variable needs a way to atomically wait/wake up while locking/unlocking the mutex guarding that condition.
Consider a simple example where you're notified that some data has been produced. Maybe another thread made some data that you want and set a pointer to it. Imagine a producer thread giving some data to a consumer thread through a 'some_data' pointer.
while(1) {
    pthread_cond_wait(&cond); // imagine cond_wait did not have a mutex
    char *data = some_data;
    some_data = NULL;
    handle(data);
}
you'd naturally get a lot of race conditions. What if the other thread did some_data = new_data right after you got woken up, but before you did data = some_data?
You cannot really create your own mutex to guard this case either, e.g.
while(1) {
    pthread_cond_wait(&cond); // imagine cond_wait did not have a mutex
    pthread_mutex_lock(&mutex);
    char *data = some_data;
    some_data = NULL;
    pthread_mutex_unlock(&mutex);
    handle(data);
}
This will not work: there's still a chance of a race condition between waking up and grabbing the mutex. Placing the lock before the pthread_cond_wait doesn't help you either, as you would then hold the mutex while waiting, i.e. the producer would never be able to grab the mutex.
(note, in this case you could create a second condition variable to signal the producer that you're done with some_data - though this will become complex, especially so if you want many producers/consumers.)
Thus you need a way to atomically release/grab the mutex when waiting/waking up from the condition. That's what pthread condition variables do, and here's what you'd do:
while(1) {
    pthread_mutex_lock(&mutex);
    while (some_data == NULL) { // predicate to account for spurious wakeups; would also
                                // make it robust if there were several consumers
        pthread_cond_wait(&cond, &mutex); // atomically unlocks/relocks the mutex
    }
    char *data = some_data;
    some_data = NULL;
    pthread_mutex_unlock(&mutex);
    handle(data);
}
(The producer would naturally need to take the same precautions, always guarding 'some_data' with the same mutex, and making sure it doesn't overwrite some_data if it is currently != NULL. A sketch follows.)
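A minimal producer sketch under the same assumptions (some_data, mutex and cond come from the code above; cond_consumed is the second condition variable suggested earlier, and make_data() is a hypothetical helper):

pthread_mutex_lock(&mutex);
while (some_data != NULL)                      // don't overwrite unconsumed data
    pthread_cond_wait(&cond_consumed, &mutex); // consumer signals this after taking it
some_data = make_data();
pthread_cond_signal(&cond);                    // wake the consumer
pthread_mutex_unlock(&mutex);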
POSIX condition variables are stateless. So it is your responsibility to maintain the state. Since the state will be accessed by both threads that wait and threads that tell other threads to stop waiting, it must be protected by a mutex. If you think you can use condition variables without a mutex, then you haven't grasped that condition variables are stateless.
Condition variables are built around a condition. Threads that wait on a condition variable are waiting for some condition. Threads that signal condition variables change that condition. For example, a thread might be waiting for some data to arrive. Some other thread might notice that the data has arrived. "The data has arrived" is the condition.
Here's the classic use of a condition variable, simplified:
while(1)
{
    pthread_mutex_lock(&work_mutex);
    while (work_queue_empty()) // wait for work
        pthread_cond_wait(&work_cv, &work_mutex);
    work = get_work_from_queue(); // get work
    pthread_mutex_unlock(&work_mutex);
    do_work(work); // do that work
}
See how the thread is waiting for work. The work is protected by a mutex. The wait releases the mutex so that another thread can give this thread some work. Here's how it would be signalled:
void AssignWork(WorkItem work)
{
    pthread_mutex_lock(&work_mutex);
    add_work_to_queue(work); // put work item on queue
    pthread_cond_signal(&work_cv); // wake worker thread
    pthread_mutex_unlock(&work_mutex);
}
Notice that you need the mutex to protect the work queue. Notice that the condition variable itself has no idea whether there's work or not. That is, a condition variable must be associated with a condition, that condition must be maintained by your code, and since it's shared among threads, it must be protected by a mutex.
Not all condition variable functions require a mutex: only the waiting operations do. The signal and broadcast operations do not require a mutex. A condition variable also is not permanently associated with a specific mutex; the external mutex does not protect the condition variable. If a condition variable has internal state, such as a queue of waiting threads, this must be protected by an internal lock inside the condition variable.
The wait operations bring together a condition variable and a mutex, because:
a thread has locked the mutex, evaluated some expression over shared variables and found it to be false, such that it needs to wait.
the thread must atomically move from owning the mutex, to waiting on the condition.
For this reason, the wait operation takes as arguments both the mutex and condition: so that it can manage the atomic transfer of a thread from owning the mutex to waiting, so that the thread does not fall victim to the lost wake up race condition.
A lost wakeup race condition will occur if a thread gives up a mutex and then waits on a stateless synchronization object, but in a way which is not atomic: there exists a window of time when the thread no longer has the lock and has not yet begun waiting on the object. During this window, another thread can come in, make the awaited condition true, signal the stateless synchronization object and then disappear. The stateless object doesn't remember that it was signaled (it is stateless). So then the original thread goes to sleep on the stateless synchronization object and does not wake up, even though the condition it needs has already become true: a lost wakeup.
The condition variable wait functions avoid the lost wake up by making sure that the calling thread is registered to reliably catch the wakeup before it gives up the mutex. This would be impossible if the condition variable wait function did not take the mutex as an argument.
I do not find the other answers to be as concise and readable as this page. Normally the waiting code looks something like this:
mutex.lock()
while (!check())
    condition.wait(mutex) # atomically unlocks mutex and sleeps. Calls
                          # mutex.lock() once the thread wakes up.
mutex.unlock()
There are three reasons to wrap the wait() in a mutex:
without a mutex another thread could signal() before the wait() and we'd miss this wake up.
normally check() is dependent on modification from another thread, so you need mutual exclusion on it anyway.
to ensure that the highest priority thread proceeds first (the queue for the mutex allows the scheduler to decide who goes next).
The third point is not always a concern - historical context is linked from the article to this conversation.
Spurious wake-ups are often mentioned with regard to this mechanism (i.e. the waiting thread is awoken without signal() being called). However, such events are handled by the looped check().
Condition variables are associated with a mutex because that is the only way they can avoid the race they are designed to avoid.
// incorrect usage:
// thread 1:
while (notDone) {
    pthread_mutex_lock(&mutex);
    bool ready = protectedReadyToRunVariable;
    pthread_mutex_unlock(&mutex);
    if (ready) {
        doWork();
    } else {
        pthread_cond_wait(&cond1); // invalid syntax: this SHOULD have a mutex
    }
}

// signalling thread
// thread 2:
prepareToRunThread1();
pthread_mutex_lock(&mutex);
protectedReadyToRunVariable = true;
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond1);
Now, let's look at a particularly nasty interleaving of these operations:
// thread 1:
pthread_mutex_lock(&mutex);
bool ready = protectedReadyToRunVariable; // reads false
pthread_mutex_unlock(&mutex);

// thread 2 runs in the gap:
pthread_mutex_lock(&mutex);
protectedReadyToRunVariable = true;
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond1); // nobody is waiting yet!

// thread 1 resumes; ready is false, so it takes the else branch:
pthread_cond_wait(&cond1); // uh oh!

At this point, there is no thread which is going to signal the condition variable, so thread 1 will wait forever, even though protectedReadyToRunVariable says it's ready to go!
The only way around this is for condition variables to atomically release the mutex while simultaneously starting to wait on the condition variable. This is why the cond_wait function requires a mutex:
// correct usage:
// thread 1:
while (notDone) {
    pthread_mutex_lock(&mutex);
    bool ready = protectedReadyToRunVariable;
    if (ready) {
        pthread_mutex_unlock(&mutex);
        doWork();
    } else {
        pthread_cond_wait(&cond1, &mutex); // atomically unlocks the mutex and waits
        pthread_mutex_unlock(&mutex);      // wait() returned with the mutex re-locked
    }
}

// signalling thread
// thread 2:
prepareToRunThread1();
pthread_mutex_lock(&mutex);
protectedReadyToRunVariable = true;
pthread_cond_signal(&cond1); // signal while holding the mutex
pthread_mutex_unlock(&mutex);
The mutex is supposed to be locked when you call pthread_cond_wait; when you call it, it atomically unlocks the mutex and blocks on the condition. Once the condition is signaled, it atomically locks the mutex again and returns.
This allows the implementation of predictable scheduling if desired, in that the thread that would be doing the signalling can wait until the mutex is released to do its processing and then signal the condition.
It appears to be a specific design decision rather than a conceptual need.
Per the pthreads docs, the reason the mutex was not separated is that combining them yields a significant performance improvement, and they expect that, because of common race conditions, if you don't use a mutex it's almost always going to be done anyway:
https://linux.die.net/man/3/pthread_cond_wait
Features of Mutexes and Condition Variables
It had been suggested that the mutex acquisition and release be
decoupled from condition wait. This was rejected because it is the
combined nature of the operation that, in fact, facilitates realtime
implementations. Those implementations can atomically move a
high-priority thread between the condition variable and the mutex in a
manner that is transparent to the caller. This can prevent extra
context switches and provide more deterministic acquisition of a mutex
when the waiting thread is signaled. Thus, fairness and priority
issues can be dealt with directly by the scheduling discipline.
Furthermore, the current condition wait operation matches existing
practice.
There are tons of exegeses about that, yet I want to epitomize it with the following example.
1 void thr_child() {
2     done = 1;
3     pthread_cond_signal(&c);
4 }
5 void thr_parent() {
6     if (done == 0)
7         pthread_cond_wait(&c);
8 }
What's wrong with the code snippet? Just ponder somewhat before going ahead.
The issue is genuinely subtle. If the parent invokes thr_parent() and then checks the value of done, it will see that it is 0 and thus try to go to sleep. But just before it calls wait to go to sleep, the parent is interrupted between lines 6 and 7, and the child runs. The child changes the state variable done to 1 and signals, but no thread is waiting and thus no thread is woken. When the parent runs again, it sleeps forever, which is really egregious.
What if the two functions instead carried out their operations while each held a lock? A corrected sketch follows.
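A minimal corrected sketch, assuming a mutex m guarding done (m is an assumed addition; done and c come from the snippet above):

void thr_child() {
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_signal(&c);       // signalled while holding the lock
    pthread_mutex_unlock(&m);
}
void thr_parent() {
    pthread_mutex_lock(&m);
    while (done == 0)              // while, not if: also handles spurious wakeups
        pthread_cond_wait(&c, &m); // atomically releases m and sleeps
    pthread_mutex_unlock(&m);
}

Now the window between checking done and going to sleep is closed: the child cannot set done and signal until the parent has either seen done == 1 or is already registered on the condition variable.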
Here is an exercise I did in class, if you want a real example of a condition variable:
#include "stdio.h"
#include "stdlib.h"
#include "pthread.h"
#include "unistd.h"
int compteur = 0;
pthread_cond_t varCond = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mutex_compteur;
void attenteSeuil(arg)
{
pthread_mutex_lock(&mutex_compteur);
while(compteur < 10)
{
printf("Compteur : %d<10 so i am waiting...\n", compteur);
pthread_cond_wait(&varCond, &mutex_compteur);
}
printf("I waited nicely and now the compteur = %d\n", compteur);
pthread_mutex_unlock(&mutex_compteur);
pthread_exit(NULL);
}
void incrementCompteur(arg)
{
while(1)
{
pthread_mutex_lock(&mutex_compteur);
if(compteur == 10)
{
printf("Compteur = 10\n");
pthread_cond_signal(&varCond);
pthread_mutex_unlock(&mutex_compteur);
pthread_exit(NULL);
}
else
{
printf("Compteur ++\n");
compteur++;
}
pthread_mutex_unlock(&mutex_compteur);
}
}
int main(int argc, char const *argv[])
{
int i;
pthread_t threads[2];
pthread_mutex_init(&mutex_compteur, NULL);
pthread_create(&threads[0], NULL, incrementCompteur, NULL);
pthread_create(&threads[1], NULL, attenteSeuil, NULL);
pthread_exit(NULL);
}

Why do both the notify and wait functions of a std::condition_variable need a locked mutex?

On my never-ending quest to understand std::condition_variable I've run into the following. On this page it says the following:
void print_id (int id) {
    std::unique_lock<std::mutex> lck(mtx);
    while (!ready) cv.wait(lck);
    // ...
    std::cout << "thread " << id << '\n';
}
And after that it says this:
void go() {
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}
Now, as I understand it, both of these functions will halt on the std::unique_lock line until a lock is acquired, that is, until no other thread holds the lock.
So say the print_id function is executed first. The unique lock will be acquired and the function will halt on the wait line.
If the go function is then executed (on a separate thread), the code there will halt on the unique lock line, since the mutex is already locked by the print_id function.
Obviously this wouldn't work if the code was like that. But I really don't see what I'm not getting here. So please enlighten me.
What you're missing is that wait unlocks the mutex and then waits for the signal on cv.
It locks the mutex again before returning.
You could have found this out by clicking on wait on the page where you found the example:
At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called.
There's one point you've missed: calling wait() unlocks the mutex. The thread atomically (releases the mutex + goes to sleep). Then, when woken by the signal, it tries to re-acquire the mutex (possibly blocking); once it acquires it, it can proceed.
Notice that it's not necessary to have the mutex locked for calling notify_*, only for wait*.
To answer the question as posed, which seems necessary regarding claims that you should not acquire a lock on notification for performance reasons (isn't correctness more important than performance?): the necessity to lock on "wait" and the recommendation to always lock around "notify" are there to protect the user from himself and to protect his program from data races and logical races. Without the lock in "go", the program you posted would immediately have a data race on "ready".
However, even if ready were itself synchronized (e.g. atomic), you would have a logical race with a missed notification, because without the lock in "go" it is possible for the notify to occur just after the check for "ready" and just before the actual wait, and the waiting thread may then remain blocked indefinitely. The synchronization on the atomic variable itself is not enough to prevent this. This is why helgrind will warn when a notification is done without holding the lock.
There are some fringe cases where the mutex lock is really not required around the notify. In all of these cases, there needs to be bidirectional synchronization beforehand so that the producing thread can know for sure that the other thread is already waiting. IMO these cases are for experts only. Actually, I have seen an expert giving a talk about multi-threading get this wrong: he thought an atomic counter would suffice. That said, the lock around the wait is always necessary for correctness (or, at least, an operation that is atomic with the wait), and this is why the standard library enforces it and atomically unlocks the mutex on entering the wait.
POSIX condition variables are, unlike Windows events, not "idiot-proof" because they are stateless (apart from being aware of waiting threads). The recommendation to use a lock on the notify is there to protect you from the worst and most common screwups. You can build a Windows-like stateful event using a mutex + condition var + bool variable if you like, of course; a sketch follows.
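A minimal sketch of such a stateful event, assuming C++11 primitives (the class name ManualResetEvent is an illustrative choice, mirroring the Windows manual-reset event):

#include <condition_variable>
#include <mutex>

class ManualResetEvent {
    std::mutex m;
    std::condition_variable cv;
    bool signaled = false; // the state that a bare condvar lacks
public:
    void set() {
        std::lock_guard<std::mutex> lock(m);
        signaled = true;   // state survives even if nobody is waiting yet
        cv.notify_all();   // notify while holding the lock
    }
    void reset() {
        std::lock_guard<std::mutex> lock(m);
        signaled = false;
    }
    void wait() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return signaled; }); // immune to lost and spurious wakeups
    }
};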
