In their example usage of std::condition_variable they have essentially:
std::mutex m;
std::condition_variable cv;
std::string data;
bool ready = false;
void worker_thread()
{
// Wait until main() sends data
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, []{return ready;});
// more ...
}
int main()
{
std::thread worker(worker_thread);
data = "Example data";
// send data to the worker thread
{
std::lock_guard<std::mutex> lk(m);
ready = true;
}
cv.notify_one();
// more...
}
My problem now is the variable ready, which is not declared as any std::atomic type.
Why doesn't this introduce a race condition if a spurious wakeup occurs?
No, there is no race condition.
Even if the condition variable wakes up spuriously, it must re-acquire the lock before the predicate is evaluated again. Hence, two things hold:
no thread can touch ready while the lock is held, because every access to ready happens under the same mutex.
by re-acquiring the lock, the waiting thread synchronizes with the thread that wrote ready: unlocking a mutex is a release operation and locking it is an acquire operation, so the waiter is guaranteed to see the latest value of ready.
So there is no way for a race condition to happen.
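To see why, it helps to spell out what the predicate overload does: cv.wait(lk, pred) is specified to behave like the loop below, so the predicate is only ever evaluated while the mutex is held. A minimal sketch reusing m, cv and ready from the question (worker_thread_expanded is just an illustrative name):
void worker_thread_expanded()
{
    std::unique_lock<std::mutex> lk(m);
    // Equivalent to cv.wait(lk, []{ return ready; }):
    while (!ready)      // ready is only ever read while m is held
        cv.wait(lk);    // atomically releases m, blocks, re-acquires m on wake-up
    // more ...
}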
Related
Consider the following quote about std::condition_variable from cppreference:
The condition_variable class is a synchronization primitive that can be used to block a thread, or multiple threads at the same time, until another thread both modifies a shared variable (the condition), and notifies the condition_variable.
The thread that intends to modify the variable has to
acquire a std::mutex (typically via std::lock_guard)
perform the modification while the lock is held
execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
A typical scenario of this approach is as follows:
// shared (e.g., global) variables:
bool proceed = false;
std::mutex m;
std::condition_variable cv;
// thread #1:
{
std::unique_lock<std::mutex> l(m);
while (!proceed) cv.wait(l); // or, cv.wait(l, []{ return proceed; });
}
// thread #2:
{
std::lock_guard<std::mutex> l(m);
proceed = true;
}
cv.notify_one();
However, @Tim in this question came up with a (kind of academic) alternative:
std::atomic<bool> proceed {false}; // atomic to avoid data race
std::mutex m;
std::condition_variable cv;
// thread #1:
{
std::unique_lock<std::mutex> l(m);
while (!proceed) cv.wait(l);
}
// thread #2:
proceed = true; // not protected with mutex
{ std::lock_guard<std::mutex> l(m); }
cv.notify_one();
It obviously does not meet the requirements of the above cppreference quote, since the shared variable proceed (atomic in this case) is not modified under the mutex.
The question is whether this code is correct. In my opinion, it is, since:
First option: !proceed in while (!proceed) is evaluated as false. In that case, cv.wait(l); is not invoked at all, and thread #1 does not need any notification.
Second option: !proceed in while (!proceed) is evaluated as true. In that case, thread #1 still holds the mutex, so thread #2 cannot complete its empty std::lock_guard block, and therefore cannot reach cv.notify_one(), until thread #1 releases the mutex inside cv.wait(l). In other words, the notification cannot happen until thread #1 is already waiting.
Am I missing something or is cppreference wrong in this regard? (For instance, are some reordering issues involved?)
And, what if proceed = true; would be changed to proceed.store(true, std::memory_order_relaxed);?
Let's say I have something like this:
std::mutex mutex;
bool signalled = false;
std::condition_variable cv;
void thread1() {
while (true) {
std::unique_lock l(mutex);
cv.wait(l, [] { return signalled; });
return;
}
}
void thread2...N() {
signalled = true;
cv.notify_all();
}
Is that considered thread-safe? The boolean may be set to true in many threads to interrupt thread1.
Edit: If not thread-safe I’m looking for a description of what the race condition is so I can better understand the underlying issue and fill in a knowledge gap.
Writing to a non-atomic variable in one thread while another thread reads it, without any synchronization, is a data race and therefore undefined behaviour. However, making signalled an std::atomic<bool> will not solve the problem on its own.
The cppreference page on std::condition_variable reads:
Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
You should do this:
bool signalled = false;
void thread2...N()
{
std::unique_lock l(mutex);
signalled = true;
cv.notify_all();
}
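To see concretely what can go wrong when the flag is only made atomic, here is a sketch of the lost-wakeup interleaving the quote warns about (waiter and notifier are illustrative names; the numbered comments give one possible ordering):
#include <atomic>
#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable cv;
std::atomic<bool> signalled{false};   // atomic, but not written under the mutex

void waiter()                         // plays the role of thread1
{
    std::unique_lock<std::mutex> l(mutex);
    // cv.wait(l, pred) behaves like: while (!pred()) cv.wait(l);
    // (1) the predicate is evaluated under the lock: signalled is still false
    // (3) only now does the waiter release the mutex and block inside cv.wait;
    //     the notification from step (2) found nobody waiting, so this thread
    //     may sleep forever
    cv.wait(l, [] { return signalled.load(); });
}

void notifier()                       // plays the role of thread2...N
{
    // (2) can run between (1) and (3): the store does not take the mutex,
    //     so it is not forced to wait until the waiter is actually blocked
    signalled = true;
    cv.notify_all();                  // nobody is waiting yet; the notification is lost
}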
Related questions:
Mutex protecting std::condition_variable
Shared atomic variable is not properly published if it is not modified under mutex
Why do I need to acquire a lock to modify a shared "atomic" variable before notifying condition_variable
I have some code like this:
std::queue<myData*> _qDatas;
std::mutex _qDatasMtx;
std::condition_variable cv;
Generator function
void generator_thread()
{
std::this_thread::sleep_for(std::chrono::milliseconds(1));
{
std::lock_guard<std::mutex> lck(_qDatasMtx);
_qDatas.push(new myData);
}
//std::lock_guard<std::mutex> lck(cvMtx); //need lock here?
cv.notify_one();
}
Consumer function
void consumer_thread()
{
for(;;)
{
std::unique_lock lck(_qDatasMtx);
if(_qDatas.size() > 0)
{
delete _qDatas.front();
_qDatas.pop();
}
else
cv.wait(lck);
}
}
So if I have dozens of generator threads and one consumer thread, do I need a mutex lock when calling cv.notify_one() in each thread?
And is std::condition_variable thread-safe?
do I need a mutex lock when calling cv.notify_one() in each thread?
No
is std::condition_variable thread-safe?
Yes
When calling wait, you pass a locked mutex, which wait immediately unlocks while it blocks, so other threads can use it. When calling notify you do not need to hold that same mutex, since what happens is (detailed in the links below):
notify is issued
a waiting thread wakes up
the woken thread re-locks the mutex before it continues
From std::condition_variable:
execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
and from std::condition_variable::notify_all:
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s);
With regards to your code snippet:
//std::lock_guard<std::mutex> lck(cvMtx); //need lock here?
cv.notify_one();
No you do not need a lock there.
notify_one can be called from multiple threads without a lock.
However, in order for normal operation to work with no chance of a notification "slipping through", you must hold the mutex that guards the data your wait predicate checks at some point between modifying that data and calling notify_one.
Your code appears to pass that test. However, this version is clearer:
std::unique_lock lck(_qDatasMtx);
for(;;) {
cv.wait(lck,[&]{ return _qDatas.size()>0; });
delete _qDatas.front();
_qDatas.pop();
}
and it reduces spurious unlocking/relocking.
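For completeness, the matching generator side under the same scheme, reusing _qDatas, _qDatasMtx, cv and myData from the question (shown here looping, which the question only implies): modify the queue under the mutex, then notify without holding it.
void generator_thread()
{
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        {
            std::lock_guard<std::mutex> lck(_qDatasMtx);   // the queue is modified under the mutex
            _qDatas.push(new myData);
        }
        cv.notify_one();                                   // no lock needed for the notification
    }
}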
I found the following example for condition_variable on www.cppreference.com, http://en.cppreference.com/w/cpp/thread/condition_variable. The call to cv.notify_one() is placed outside the lock. My question is whether the call should be made while holding the lock, to guarantee that waiting threads are actually in the waiting state and will receive the notify signal.
#include <iostream>
#include <string>
#include <thread>
#include <mutex>
#include <condition_variable>
std::mutex m;
std::condition_variable cv;
std::string data;
bool ready = false;
bool processed = false;
void worker_thread()
{
// Wait until main() sends data
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, []{return ready;});
// after the wait, we own the lock.
std::cout << "Worker thread is processing data\n";
data += " after processing";
// Send data back to main()
processed = true;
std::cout << "Worker thread signals data processing completed\n";
// Manual unlocking is done before notifying, to avoid waking up
// the waiting thread only to block again (see notify_one for details)
lk.unlock();
cv.notify_one();
}
int main()
{
std::thread worker(worker_thread);
data = "Example data";
// send data to the worker thread
{
std::lock_guard<std::mutex> lk(m);
ready = true;
std::cout << "main() signals data ready for processing\n";
}
cv.notify_one();
// wait for the worker
{
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, []{return processed;});
}
std::cout << "Back in main(), data = " << data << '\n';
worker.join();
}
Should the notify_one() call be moved inside the lock, like this, to guarantee that waiting threads receive the notify signal?
// send data to the worker thread
{
std::lock_guard<std::mutex> lk(m);
ready = true;
cv.notify_one();
std::cout << "main() signals data ready for processing\n";
}
You do not need to notify under the lock. However, since the notification logically belongs with the change of the actual value (otherwise, why would you notify?), and that change must happen under the lock, the notification is often done within the lock as well.
There would be no practical observable difference.
If I understand your question correctly, it is equivalent to "should the notifier thread lock the mutex while trying to notify a CV that some other thread is waiting on?"
No, it is not mandatory, and it can even be mildly counterproductive.
When the condition_variable is notified from another thread, the waiting thread tries to re-lock the mutex it was put to sleep with. Locking that mutex in the notifying thread will block the waiter that is trying to lock it until that lock wrapper goes out of scope.
PS
If you do remove the locking from the function that sends data to the worker thread, ready and processed should at least be atomics. Currently they are synchronized by the lock, but once you remove the lock they cease to be thread-safe.
There is a scenario where it is crucial that the lock is held while notify_all is called: when the condition_variable is destroyed as soon as the wait has finished. A spurious wake-up can end the wait and lead to the condition_variable being destroyed before the notifier calls notify_all, so notify_all would then be executed on an already destroyed object.
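A sketch of that hazard, under assumed names (Job, waiter and notifier are illustrative; the waiter owns the object containing the condition_variable and destroys it as soon as the wait is over):
#include <condition_variable>
#include <mutex>

struct Job {
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};

void waiter(Job* job)
{
    {
        std::unique_lock<std::mutex> l(job->m);
        job->cv.wait(l, [&] { return job->done; });
    }
    delete job;                      // destroys job->cv as soon as the wait ends
}

void notifier(Job* job)
{
    {
        std::lock_guard<std::mutex> l(job->m);
        job->done = true;
    }                                // lock released here
    // A spurious wake-up can now let the waiter re-acquire m, see done == true,
    // return from wait and delete job before the next line runs:
    job->cv.notify_all();            // may execute on an already destroyed object
}
Holding the lock while calling notify_all closes this window: the waiter cannot re-acquire the mutex, and therefore cannot return from wait and destroy the condition_variable, until the notification has been issued and the lock released.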
If no thread is waiting on a condition variable when it is notified, the notification is simply lost; it does not matter whether you are holding any locks at that moment. A condition variable is itself a synchronization primitive and does not need a lock for its own protection.
You can miss signals with or without the lock. The mutex protects only the ordinary shared data, such as ready or processed.
boost::condition_variable cond;
boost::mutex mut;
//thread1
{
"read_socket()"
cond.notify_one();
}
//thread2
{
for(;;)
{
...
boost::unique_lock<boost::mutex> lock(mut);
cond.wait(lock);
}
}
versus
boost::condition_variable cond;
boost::mutex mut;
//thread1
{
"read_socket()"
boost::unique_lock<boost::mutex> lock(mut);
cond.notify_one();
}
//thread2
{
for(;;)
{
...
boost::unique_lock<boost::mutex> lock(mut);
cond.wait(lock);
}
}
Is there an impact if I omit the lock before calling cond.notify_one()?
The C++11 standard does not require the lock to be held for notify_one and notify_all, so not holding the lock when you signal a condition_variable is fine. However, it is often necessary for the signaling thread to hold the lock while it sets the condition that the waiting thread checks after being woken up. If it does not, the program may contain races. For an example, see this SO question: Boost synchronization.
When thread2 is woken, it will attempt to re-acquire the lock. If thread1 is holding the lock, thread2 will block until thread1 releases it.
In the code shown here, this does not significantly impact behavior. Only in the second code block would any work you add in thread1 after cond.notify_one(); (while the lock is still held) be guaranteed to execute before thread2 proceeds.
Alternatively, you could construct the unique lock in thread2 before entering the for loop, rather than just before waiting on the condition variable. This would ensure that thread1 blocks when trying to construct its own unique lock until thread2 is waiting for a signal, provided that thread1 is not executing before thread2 has initialized itself and entered the loop. This would allow you to guarantee that thread1 doesn't send any notifications that thread2 isn't waiting for.
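A sketch of that alternative, assuming the same Boost objects as in the question and placeholder work in each thread:
boost::condition_variable cond;
boost::mutex mut;

void thread2()
{
    boost::unique_lock<boost::mutex> lock(mut);   // taken once, before the loop
    for (;;)
    {
        cond.wait(lock);   // releases mut while blocked, re-acquires it on wake-up
        // ... handle the notification ...
    }
}

void thread1()
{
    // read_socket();                             // placeholder for the real work
    boost::unique_lock<boost::mutex> lock(mut);   // blocks until thread2 is inside cond.wait
    cond.notify_one();
}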