Do two std::unique_locks on the same mutex cause a deadlock? - c++

I found this code on Code Review Stack Exchange; it implements the producer-consumer problem. I am posting a section of the code here.
In the given code, consider the scenario where the producer produces a value by calling void add(int num): it acquires the lock on mutex mu, and because buffer_.size() == size_, the condition variable cond puts the producer on the wait queue.
At that moment a context switch takes place and the consumer calls int remove() to consume a value. It tries to acquire the lock on mutex mu, but the lock has already been acquired by the producer, so it fails and never consumes the value, hence causing a deadlock.
Where am I going wrong here? The code seems to work properly when I run it, and debugging it didn't help me.
Thanks
void add(int num) {
    while (true) {
        std::unique_lock<std::mutex> locker(mu);
        cond.wait(locker, [this]() { return buffer_.size() < size_; });
        buffer_.push_back(num);
        locker.unlock();
        cond.notify_all();
        return;
    }
}

int remove() {
    while (true) {
        std::unique_lock<std::mutex> locker(mu);
        cond.wait(locker, [this]() { return buffer_.size() > 0; });
        int back = buffer_.back();
        buffer_.pop_back();
        locker.unlock();
        cond.notify_all();
        return back;
    }
}

The idea of std::condition_variable::wait(lock, predicate) is that you wait until the predicate is met and hold the lock on the mutex afterwards. To do this atomically (which is important most of the time) you have to lock the mutex first; wait then releases the mutex while the thread sleeps and re-locks it to check the predicate. If the predicate is met, the mutex stays locked and execution continues. If not, the mutex is released again. This is where your reasoning goes wrong: while the producer is blocked inside wait, the mutex is not held, so the consumer can acquire it and consume a value.
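A minimal sketch of what the predicate overload does, using the names from the question (this mirrors the equivalence documented for the two-argument wait, not any particular library's internals):

// Equivalent to cond.wait(locker, [this]{ return buffer_.size() < size_; }):
while (!(buffer_.size() < size_)) {   // checked with mu held
    cond.wait(locker);                // atomically unlocks mu and sleeps;
                                      // re-locks mu before returning
}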

OutOfBound's answer is good, but a bit more detail on exactly what is "atomic" is useful.
The wait operation on a condition variable has a precondition and a postcondition that the passed-in mutex is locked by the caller. The wait operation unlocks the mutex internally, and does so in a way that is guaranteed not to miss any notify or notify_all operations from other threads that happen as a result of unlocking the mutex. Inside wait, the unlock of the mutex and entering the waiting state are atomic with respect to each other. This avoids sleep/wakeup races.
The conditional-critical-section form tests the predicate internally. It still depends on notifies being done correctly, however.
In some sense, one can think of wait as doing this:
while (!predicate()) {
    mutex.unlock();
    /* sleep for a short time or spin */
    mutex.lock();
}
The condition variable with notifies allows the commented line in the middle to be efficient, which gives:
while (!predicate()) {
    atomic { /* This is the key part. */
        mutex.unlock();
        sleep_until_notified();
    }
    mutex.lock();
}
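For completeness, the notify side must change the state that the predicate reads under the same mutex. A minimal sketch of that pattern (cv, mutex, and the state change are placeholders, not from a specific answer):

{
    std::lock_guard<std::mutex> lock(mutex);
    // ...change the state that predicate() reads...
}
cv.notify_all(); // notifying after unlocking is also correct; see the answers below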

Related

Is it mandatory to lock mutex before signaling on condition variable?

We have implemented a TaskRunner whose functions will be called by different threads to start, stop, and post tasks. TaskRunner internally creates a thread, and if the queue is not empty, it pops a task from the queue and executes it. Start() checks whether the thread is running; if not, it creates a new thread. Stop() joins the thread. The code is as below.
bool TaskRunnerImpl::PostTask(Task* task) {
    tasks_queue_.push_back(task);
    return true;
}

void TaskRunnerImpl::Start() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    if(is_running_) {
        return;
    }
    is_running_ = true;
    runner_thread_ = std::thread(&TaskRunnerImpl::Run, this);
}

void TaskRunnerImpl::Run() {
    while(is_running_) {
        if(tasks_queue_.empty()) {
            continue;
        }
        Task* task_to_run = tasks_queue_.front();
        task_to_run->Run();
        tasks_queue_.pop_front();
        delete task_to_run;
    }
}

void TaskRunnerImpl::Stop() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    is_running_ = false;
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}
We now want to use condition variables, since otherwise the thread continuously checks whether the task queue is empty or not. We implemented it as below.
The thread function (Run()) will wait on the condition variable.
PostTask() will signal when someone posts a task.
Stop() will signal when someone calls stop.
The code is as below.
bool TaskRunnerImpl::PostTask(Task* task) {
    std::lock_guard<std::mutex> taskGuard(m_task_mutex);
    tasks_queue_.push_back(task);
    m_task_cond_var.notify_one();
    return true;
}

void TaskRunnerImpl::Start() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    if(is_running_) {
        return;
    }
    is_running_ = true;
    runner_thread_ = std::thread(&TaskRunnerImpl::Run, this);
}

void TaskRunnerImpl::Run() {
    while(is_running_) {
        Task* task_to_run = nullptr;
        {
            std::unique_lock<std::mutex> mlock(m_task_mutex);
            m_task_cond_var.wait(mlock, [this]() {
                return !(is_running_ && tasks_queue_.empty());
            });
            if(!is_running_) {
                return;
            }
            if(!tasks_queue_.empty()) {
                task_to_run = tasks_queue_.front();
                task_to_run->Run();
                tasks_queue_.pop_front();
            }
        }
        if(task_to_run)
            delete task_to_run;
    }
}

void TaskRunnerImpl::Stop() {
    std::lock_guard<std::mutex> lock(is_running_mutex_);
    is_running_ = false;
    m_task_cond_var.notify_one();
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}
I have a couple of questions below. Can someone please help me understand these?
The condition variable m_task_cond_var is linked with the mutex m_task_mutex, but Stop() already locks the mutex is_running_mutex_ to guard is_running_. Do I need to lock m_task_mutex before signaling? I am not convinced why we should lock m_task_mutex here, as we are not protecting anything related to the task queue.
In the thread function (Run()), we are reading is_running_ without locking is_running_mutex_. Is this correct?
Do I need to lock m_task_mutex before signaling [In Stop]?
When the predicate being tested in the condition_variable::wait method depends on something happening in the signaling thread (which is almost always the case), then you should hold the mutex before signaling. Consider the following possibility if you are not holding m_task_mutex:
The watcher thread (TaskRunnerImpl::Run) wakes up (via a spurious wakeup or a notification from elsewhere) and obtains the mutex.
The watcher thread checks its predicate and sees that it is false.
The signaler thread (TaskRunnerImpl::Stop) changes the predicate to return true (by setting is_running_ = false;).
The signaler thread signals the condition variable.
The watcher thread now waits to be signaled, but the signal has already come and gone; since the predicate was false the last time it was checked, the watcher begins waiting, possibly indefinitely.
The worst that can happen if you are holding the mutex when you signal is that the blocked thread (TaskRunnerImpl::Run) wakes up and is immediately blocked again when trying to obtain the mutex. This can have some performance implications.
In [TaskRunnerImpl::Run], we are reading is_running_ without locking is_running_mutex_. Is this correct?
In general, no, even if it's of type bool. A boolean is typically implemented as a single byte, and one thread may be writing that byte while another reads it. In practice this will often appear to work, but it is still a data race, and therefore undefined behavior; you should obtain the mutex before you read (and release it immediately afterwards).
In fact, it may be preferable to use std::atomic<bool> instead of the bool + mutex combination (or std::atomic_flag, if you want to get fancy), which has the same effect but is easier to work with.
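A sketch of what that could look like for the flag in this question (the member names mirror TaskRunnerImpl, but this is illustrative only; note that the store still happens under m_task_mutex, so the notification cannot slip into the window between the wait's predicate check and its sleep, as described above):

std::atomic<bool> is_running_{false};  // replaces bool + is_running_mutex_

void TaskRunnerImpl::Stop() {
    {
        std::lock_guard<std::mutex> lock(m_task_mutex);
        is_running_.store(false);
    }
    m_task_cond_var.notify_one();
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}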
Do I need to lock m_task_mutex before signaling [In Stop]?
Yes, you do. You must change the condition under the same mutex, and send the signal either while that mutex is locked or after unlocking it following the change. If you do not use the same mutex, or signal before that mutex has been locked, you create exactly the race condition that std::condition_variable exists to solve.
The logic is this:
The watching thread locks the mutex and checks the watched condition. If it has not happened yet, the thread goes to sleep and unlocks the mutex atomically. The signaling thread locks the mutex, changes the condition, and signals. If the signaling thread does this before the watching thread locks the mutex, the watching thread sees that the condition has happened and does not go to sleep. If the watching thread locks the mutex first, it goes to sleep and is woken when the signaling thread raises the signal.
Note: you can signal the condition variable before or after the mutex is unlocked; both are correct, though they may affect performance differently. But it is incorrect to signal without having locked the mutex for the change.
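Both correct orderings, as a minimal sketch (m, ready, and cv are placeholder names, not from the question):

// Option 1: notify while still holding the lock.
{
    std::lock_guard<std::mutex> lock(m);
    ready = true;
    cv.notify_one();
}

// Option 2: notify after unlocking (can avoid waking a thread
// that immediately blocks on the still-held mutex).
{
    std::lock_guard<std::mutex> lock(m);
    ready = true;
}
cv.notify_one();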
The condition variable m_task_cond_var is linked with the mutex m_task_mutex, but Stop() already locks the mutex is_running_mutex_ to guard is_running_. Do I need to lock m_task_mutex before signaling? I am not convinced why we should lock m_task_mutex here, as we are not protecting anything related to the task queue.
You have overcomplicated your code and made things worse. You should use only one mutex in this case, and it would work as intended.
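For instance, Stop() could use the condition variable's mutex for the flag as well. A sketch under the assumption that is_running_mutex_ is dropped entirely (the other members touching is_running_ would need the same treatment):

void TaskRunnerImpl::Stop() {
    {
        std::lock_guard<std::mutex> lock(m_task_mutex); // same mutex the CV waits on
        is_running_ = false;
    }
    m_task_cond_var.notify_one();
    if(runner_thread_.joinable()) {
        runner_thread_.join();
    }
}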
In the thread function (Run()), we are reading is_running_ without locking is_running_mutex_. Is this correct?
On x86 hardware it may appear to "work", but from the language's point of view it is undefined behavior.

Use of mutex after condition variable has been notified

What is the reason for a notified condition variable to re-lock the mutex after being notified?
The following piece of code deadlocks if the unique_lock is not scoped or if the mutex is not explicitly unlocked:
#include <future>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <iostream>
using namespace std;

int main()
{
    std::mutex mtx;
    std::condition_variable cv;
    // simulate another working thread sending the notification
    auto as = std::async([&cv]() {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        cv.notify_all();
    });
    // uncomment the scoping (or the unlock below) to prevent the deadlock
    //{
    std::unique_lock<std::mutex> lk(mtx);
    // Spurious wake-up prevention is not addressed in this short sample,
    // UNLESS it is part of the answer / the reason to lock again
    cv.wait(lk);
    //}
    std::cout << "CV notified\n" << std::flush;
    // uncomment the unlock (or the scoping above) to prevent the deadlock
    //mtx.unlock();
    mtx.lock();
    // do something
    mtx.unlock();
    std::cout << "End may never be reached\n" << std::flush;
    return 0;
}
Even after re-reading some documentation and examples, I still do not find this obvious.
Most examples that can be found on the net are small code samples with inherent scoping of the unique_lock.
Should we use different mutexes for critical sections (mutex 1) and for the condition variable's wait and notify (mutex 2)?
Note: debugging shows that after the waiting phase ends, the internal "mutex count" (I think the field __count of struct __pthread_mutex_s) goes from 1 to 2. It reaches 0 again after the unlock.
You're trying to lock the mutex twice: once with the unique_lock and again with the explicit mtx.lock() call. For a non-recursive mutex, re-locking from the owning thread is formally undefined behavior, and in practice it will typically deadlock to let you know you have a bug.
std::unique_lock<std::mutex> lk(mtx); // locks mtx for the lifetime of the unique_lock object
cv.wait(lk);                          // unlocks while waiting, but re-locks on return
std::cout << "CV notified\n" << std::flush;
mtx.lock();                           // attempts to lock the mutex again; deadlocks, since the
                                      // unique_lock already locked mtx in its constructor
The fix is pretty close to what you have with those curly braces uncommented; just make sure you only have one lock active on the mutex at a time.
Also, as written, your code is prone to spurious wake-ups. Here are some adjustments. You should always stay in the wait loop until the condition or state (usually guarded by the mutex itself) has actually occurred; for a simple notification, a bool will do.
int main()
{
    std::mutex mtx;
    std::condition_variable cv;
    bool condition = false;
    // simulate another working thread sending the notification
    auto as = std::async([&cv, &mtx, &condition]() {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        mtx.lock();
        condition = true;
        mtx.unlock();
        cv.notify_all();
    });
    std::unique_lock<std::mutex> lk(mtx); // acquire the mutex lock
    while (!condition)
    {
        cv.wait(lk);
    }
    std::cout << "CV notified\n" << std::flush;
    // do something - while still under the lock
    return 0;
}
Because the condition wait might return for reasons besides being notified, such as a signal, or just because someone else wrote onto the same 64-byte cache line. Or it might have been notified, but the condition is no longer true because another thread already handled it.
So the mutex is locked so that your code can check its condition while holding the mutex. Maybe that's just a boolean value saying it's ready to go.
Do NOT skip that part. If you do, you will regret it.
Let's temporarily imagine that the mutex is not locked on return from wait:
Thread 1:
Locks mutex, checks predicate (whatever that may be), and upon finding the predicate not in an acceptable form, waits for some other thread to put it in an acceptable form. The wait atomically puts thread 1 to sleep and unlocks the mutex. With the mutex unlocked, some other thread will have permission to put the predicate in the acceptable state (the predicate is not naturally thread safe).
Thread 2:
Simultaneously, this thread is trying to lock the mutex and put the predicate in a state that is acceptable for thread 1 to continue past its wait. It must do this with the mutex locked. The mutex protects the predicate from being accessed (either read or written) by more than one thread at a time.
Once thread 2 puts the predicate in an acceptable state, it notifies the condition_variable and unlocks the mutex (the order of these two actions is not relevant to this argument).
Thread 1:
Now thread 1 has been notified, and we assume the hypothetical that the mutex isn't locked on return from wait. The first thing thread 1 has to do is check the predicate to see whether it is actually acceptable (this could be a spurious wakeup). But it shouldn't check the predicate without the mutex being locked; otherwise some other thread could change the predicate right after this thread checks it, invalidating the result of that check.
So the very first thing this thread has to do upon waking is lock the mutex, and then check the predicate.
So it is really more of a convenience that the mutex is locked upon return from wait. Otherwise the waiting thread would have to manually lock it 100% of the time.
Let's look again at the events as thread 1 is entering the wait: I said that the sleep and the unlock happen atomically. This is very important. Imagine if thread 1 has to manually unlock the mutex and then call wait: In this hypothetical scenario, thread 1 could unlock the mutex, and then be interrupted while another thread obtains the mutex, changes the predicate, unlocks the mutex and signals the condition_variable, all before thread 1 calls wait. Now thread 1 sleeps forever, because no thread is going to see that the predicate needs changing, and the condition_variable needs signaling.
So it is imperative that the unlock/enter-wait happen atomically. And it makes the API easier to use if the lock/exit-wait also happens atomically.
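Spelled out as code, the broken hypothetical from the last paragraph looks like this (wait_without_unlocking is a made-up stand-in for a non-atomic wait API, not a real function):

mtx.unlock();
// <-- window: another thread can lock mtx, change the predicate,
//     signal the condition_variable, and unlock, all before this
//     thread starts waiting; that notification is lost
wait_without_unlocking(cv); // hypothetical: thread 1 now sleeps forever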

Using std::condition_variable to wait on a condition

For simplicity, let's assume that we have only one condition variable to match a single condition that is reflected by a boolean.
1) Why does std::condition_variable::wait(...) lock the mutex again after a "notify" has been sent to wake it?
2) Given the behavior in 1), does that mean that std::condition_variable::notify_all unblocks/wakes all of the waiting threads in order rather than all at once? If so, what can be done to wake them all at once?
3) If I only care about threads sleeping until a condition is met, and do not care one bit about mutex acquisition, what can I do? Is there an alternative, or should the current std::condition_variable::wait(...) approach be hacked around?
If "hackery" is to be used, will this function work for unblocking all threads waiting on a condition, and can it be called from any (per-thread) threads:
// declared somewhere and modified before sending "notify"(ies)
std::atomic<bool> global_shared_condition_atomic_bool;
// the single (for simplicity, in our case) condition variable matched with the above boolean
std::condition_variable global_shared_condition_variable;

static void MyClass::wait()
{
    std::mutex mutex;
    std::unique_lock<std::mutex> lock(mutex);
    while (!global_shared_condition_atomic_bool)
        global_shared_condition_variable.wait(lock);
}
it would have been called from random "waiting" threads like so:
void random_thread_run()
{
    while (someLoopControlValue)
    {
        // random code...
        MyClass::wait(); // wait for whatever condition the class+method is for
        // more random code...
    }
}
Edit:
Gate class
#ifndef Gate_Header
#define Gate_Header

#include <mutex>
#include <condition_variable>

class Gate
{
public:
    Gate()
    {
        gate_open = false;
    }

    void open()
    {
        m.lock();
        gate_open = true;
        m.unlock();
        cv.notify_all();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m);
        while (!gate_open) cv.wait(lock);
    }

    void close()
    {
        m.lock();
        gate_open = false;
        m.unlock();
    }

private:
    std::mutex m;
    std::condition_variable cv;
    bool gate_open;
};

#endif
Condition variables wake up spuriously.
You must have a mutex, and it must guard a message of some kind, for them to work; otherwise you have zero guarantee that the event you were waiting for actually occurred.
This was done, presumably, because efficient implementations of a non-spurious version end up being implemented in terms of such a spurious version anyhow.
If you fail to guard the message editing with a mutex (i.e., there is no synchronization on it), access to the message is a data race and therefore undefined behavior. This can cause compilers to optimize away every read from memory after the first one.
Even excluding that undefined behavior (imagine you use atomics), there are race conditions where a message is set, a notification occurs, and nobody waiting on the notification sees the message being set, if you fail to hold the mutex in the window between the variable being set and the condition variable being notified.
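That lost-wakeup window, sketched as an interleaving (illustrative comments only; flag stands in for the atomic message above, m for the mutex the waiter uses):

// Waiter:                               Notifier (never locks m):
// lock(m);
// flag is false, so call cv.wait(m)
//   ...but before the wait begins:      flag = true;      // atomic, no mutex held
//                                       cv.notify_all();  // nobody is waiting yet
// cv.wait(m) now blocks                 // the notification is lost
// (sleeps although flag is already true)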
Barring extreme cases, you usually want to use the lambda version of wait.
Auditing condition variable code is not possible unless you audit both the notification code and the wait code.
struct gate {
    bool gate_open = false;
    mutable std::condition_variable cv;
    mutable std::mutex m;

    void open_gate() {
        std::unique_lock<std::mutex> lock(m);
        gate_open = true;
        cv.notify_all();
    }

    void wait_at_gate() const {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this]{ return gate_open; });
    }
};
or
void open_gate() {
    {
        std::unique_lock<std::mutex> lock(m);
        gate_open = true;
    }
    cv.notify_all();
}
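A usage sketch (my example, not from the answer; it assumes the struct gate above is in scope): workers block at the gate until a controller opens it.

#include <thread>
#include <vector>

int main() {
    gate g;
    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i)
        workers.emplace_back([&g] { g.wait_at_gate(); /* proceed */ });
    g.open_gate();                    // releases all three workers
    for (auto& w : workers) w.join();
}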
No, your code will not work.
The mutex protects modifications to the shared variable. As such, all of the waiting threads and the signaling thread must lock that specific mutex instance. With what you've written, each thread has its own mutex instance.
The main reason for all of this mutex stuff is due to the concept of spurious wakeup, an unfortunate aspect of OS implementations of condition variables. Threads waiting on them sometimes just start running even though the condition hasn't been satisfied yet.
The mutex-bound check of the actual variable allows the thread to test whether it was spuriously awoken or not.
wait atomically releases the mutex and starts waiting on the condition. When wait exits, the mutex is atomically reacquired as part of the wakeup process. Now, consider a race between a spurious wakeup and the notifying thread. The notifying thread can be in one of two states: about to modify the variable, or having modified it and about to notify everyone to wake up.
If the spurious wakeup happens when the notifying thread is about to modify the variable, then one of them will get to the mutex first. So the spuriously awoken thread will either see the old value or the new value. If it sees the new value, it has been notified and will go about its business. If it sees the old value, it will wait on the condition again. But if it saw the old value, it blocked the notifying thread from modifying that variable, so the notifying thread had to wait until the spurious thread went back to sleep.
Why does std::condition_variable::wait(...) lock the mutex again after a "notify" has been sent to wake it?
Because the mutex guards access to the condition, that is, the actual variable you're checking. And the first thing you have to do after waking up from a wait call is to check that variable, so the check must be done under the protection of the mutex.
The signaling thread must be prevented from modifying the variable while other threads are reading it. That's what the mutex is for.
Given the behavior in 1), does that mean that std::condition_variable::notify_all unblocks/wakes all of the waiting threads in order rather than all at once?
The order they wake up in is not specified. However, by the time notify_all returns, all threads are guaranteed to have been unblocked.
If I only care about threads sleeping until a condition is met, and do not care one bit about mutex acquisition, what can I do?
Nothing. condition_variable requires that access to the actual variable you're checking is controlled via a mutex.

Sync is unreliable using std::atomic and std::condition_variable

In a distributed job system written in C++11 I have implemented a fence (i.e. a thread outside the worker thread pool may ask to block until all currently scheduled jobs are done) using the following structure:
struct fence
{
    std::atomic<size_t> counter;
    std::mutex resume_mutex;
    std::condition_variable resume;

    fence(size_t num_threads)
        : counter(num_threads)
    {}
};
The code implementing the fence looks like this:
void task_pool::fence_impl(void *arg)
{
    auto f = (fence *)arg;
    if (--f->counter == 0) // (1)
        // we have zeroed this fence's counter, wake up everyone that waits
        f->resume.notify_all(); // (2)
    else
    {
        unique_lock<mutex> lock(f->resume_mutex);
        f->resume.wait(lock); // (3)
    }
}
This works very well if threads enter the fence over a period of time. However, if they try to do it almost simultaneously, it sometimes happens that between the atomic decrement (1) and the start of the wait on the condition variable (3), a thread yields CPU time and another thread decrements the counter to zero (1) and signals the condition variable (2). This results in the first thread waiting forever in (3), because it starts waiting after it has already been notified.
A hack to make the thing workable is to put a 10 ms sleep just before (2), but that's unacceptable for obvious reasons.
Any suggestions on how to fix this in a performant way?
Your diagnosis is correct: this code is prone to losing condition notifications in the way you described. That is, after one thread has locked the mutex but before it waits on the condition variable, another thread may call notify_all(), so the first thread misses that notification.
A simple fix is to lock the mutex before decrementing the counter and while notifying:
void task_pool::fence_impl(void *arg)
{
    auto f = static_cast<fence*>(arg);
    std::unique_lock<std::mutex> lock(f->resume_mutex);
    if (--f->counter == 0) {
        f->resume.notify_all();
    } else {
        do {
            f->resume.wait(lock);
        } while (f->counter);
    }
}
In this case the counter need not be atomic.
An added bonus (or penalty, depending on the point of view) of locking the mutex before notifying is (from here):
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
Regarding the while loop (from here):
Spurious wakeups from the pthread_cond_timedwait() or pthread_cond_wait() functions may occur. Since the return from pthread_cond_timedwait() or pthread_cond_wait() does not imply anything about the value of this predicate, the predicate should be re-evaluated upon such return.
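To round out the answer, a sketch of how this fence might be exercised (my illustrative harness, not part of the original post; my_pool is an assumed task_pool instance and run_fence_demo a made-up driver):

#include <thread>
#include <vector>

void run_fence_demo(task_pool& my_pool)
{
    fence f(4); // four workers must arrive before any may proceed
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&] { my_pool.fence_impl(&f); });
    for (auto& t : pool) t.join(); // all four have passed the fence
}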
In order to keep the higher performance of an atomic operation instead of a full mutex, you should change the wait into a lock, check, and loop.
All condition waits should be done that way; the condition variable even has a two-argument overload of wait that takes a predicate function or lambda. Note that the notifying thread must still acquire the mutex before notifying: otherwise the counter can hit zero and notify_all() can fire in the window between a waiter's check of the counter and the start of its wait, and that waiter sleeps forever.
The code might look like:
void task_pool::fence_impl(void *arg)
{
    auto f = static_cast<fence*>(arg);
    if (--f->counter == 0) // (1)
    {
        // We have zeroed this fence's counter; wake up everyone that waits.
        // Taking the mutex first guarantees no waiter is stuck between its
        // counter check and its wait when the notification fires.
        std::lock_guard<std::mutex> lock(f->resume_mutex);
        f->resume.notify_all(); // (2)
    }
    else
    {
        std::unique_lock<std::mutex> lock(f->resume_mutex);
        while (f->counter) {
            f->resume.wait(lock); // (3)
        }
    }
}