I am working with condition variables and I am assuming they unlock their associated mutex on wait; otherwise, the mutex would never be released. Yet I can't find this information in any documentation. Consider the following code:
std::condition_variable consumerWakeMeUp;
std::mutex queueMutex;
// this locks the mutex
std::unique_lock<std::mutex> lk(queueMutex);
// going to sleep now
consumerWakeMeUp.wait(lk);
Does the "consumerWakeMeUp.wait(lk)" unlock the mutex? It must I assume otherwise the thread would hand on that mutext forever. But if anyone knows more the details I'd appreciate the input.
thank you.
Never mind, found it (this is from the cppreference documentation for std::condition_variable::wait):
"Atomically releases lock, blocks the current executing thread, and adds it to the list of threads waiting on *this. The thread will be unblocked when notify_all() or notify_one() is executed. It may also be unblocked spuriously. When unblocked, regardless of the reason, lock is reacquired and wait exits. If this function exits via exception, lock is also reacquired. (until C++14)"
Related question:
Consider this:
// Somewhere
std::mutex mutex;
std::unique_lock lock{mutex};
std::condition_variable condition;
// Thread01
condition.wait(lock);
// Thread02
while (lock.owns_lock());
So I have a situation like this where the loop in Thread02 never ends, even though Thread01 is waiting on the condition.
This means that unlocking the lock in std::condition_variable::wait does not synchronize with checking whether the lock is locked via std::unique_lock::owns_lock. Here it is explicitly stated that wait "Atomically unlocks lock...", but nothing is said about owns_lock being atomic or being synchronized with the lock operations.
So the question is: how can I atomically check if the wait has atomically unlocked the lock?
EDIT:
The desire is to know, in Thread02, whether Thread01 is waiting on the condition. That's why I've accepted the answer.
You can momentarily lock the mutex and get the desired check in Thread02:
// Thread02
std::unique_lock{mutex}; // temporary: blocks until the mutex is free, then releases it
// if this line is reached, the mutex was unlocked, i.e. Thread01 has entered wait()
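A fuller sketch of that technique follows. The started flag and the thread bodies are my additions for illustration (spurious wakeups are ignored for brevity, as in the question). Once Thread02 has seen started, Thread01 holds the mutex, and the only way Thread01 releases it is by entering wait(); so successfully locking the mutex afterwards proves the wait has unlocked it:

#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mutex;
std::condition_variable condition;
std::atomic<bool> started{false};

int main() {
    // Thread01: lock, announce, then wait (wait atomically releases the mutex)
    std::thread thread01([] {
        std::unique_lock<std::mutex> lock{mutex};
        started = true;
        condition.wait(lock); // releases mutex while blocked
    });

    // Thread02 (here: the main thread)
    while (!started) std::this_thread::yield();
    { std::unique_lock<std::mutex> probe{mutex}; } // blocks until wait() unlocks
    std::cout << "Thread01 is waiting on the condition\n";

    condition.notify_one();
    thread01.join();
}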
When a thread waits on a condition variable, the associated mutex is (atomically) released (unlocked). When that condition variable is signaled (by a different thread), one (for signal) or all (for broadcast) waiting thread(s) is/are awakened, automatically re-acquiring (locking) the mutex.
What will happen if one or more other threads are waiting to acquire (lock) that same mutex, but not waiting on the same condition? Are the thread(s) waiting on the condition variable guaranteed to be awakened (and thus acquire the mutex) before the mutex can be acquired (locked) by the other threads, or could the other thread(s) acquire (lock) the mutex before the thread(s) waiting on the condition variable?
[Note: the example below is simplified for clarity. Thread_B does not really start Thread_C, but Thread_C is guaranteed to not be running until after Thread_B has acquired the mutex - it does not compete with Thread_B for the mutex after Thread_A waits on the condition variable]
Thread_A:
pthread_mutex_lock(&myMutex);
while (!someState) {
    pthread_cond_wait(&myCondVar, &myMutex);
}
// do something
pthread_mutex_unlock(&myMutex);
Thread_B:
pthread_mutex_lock(&myMutex);
// do other things
someState = true;
// start Thread_C here
pthread_cond_signal(&myCondVar);
pthread_mutex_unlock(&myMutex);
Thread_C:
pthread_mutex_lock(&myMutex);
// can I reach this point after Thread_B releases the mutex,
// but before Thread_A re-acquires it after being signaled?
// do things that may interfere with Thread_A...
pthread_mutex_unlock(&myMutex);
Edit: The accepted answer below was chosen because it makes clear that whether or not a reader agrees with the interpretation given, there is enough ambiguity that the only safe assumption to make is that of the respondent. Note that others well-versed in C++-standard-speak may find the text totally unambiguous... I am not in that group.
There's nothing special about acquiring a mutex when awakened from pthread_cond_[timed]wait() compared to any other thread already blocked in pthread_mutex_lock() trying to acquire the same mutex.
Per the POSIX 7 pthread_cond_signal() documentation (the final sentence is the key part):
If more than one thread is blocked on a condition variable, the scheduling policy shall determine the order in which threads are unblocked. When each thread unblocked as a result of a pthread_cond_broadcast() or pthread_cond_signal() returns from its call to pthread_cond_wait() or pthread_cond_timedwait(), the thread shall own the mutex with which it called pthread_cond_wait() or pthread_cond_timedwait(). The thread(s) that are unblocked shall contend for the mutex according to the scheduling policy (if applicable), and as if each had called pthread_mutex_lock().
Acquiring the mutex after waking up from pthread_cond_[timed]wait() is required to be exactly as if the thread had called pthread_mutex_lock().
In short, any of the threads can acquire the mutex.
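In practice, then, Thread_C has to protect itself by checking shared state under the mutex rather than relying on who wins the race for it. Here is a sketch in the style of the question's code; the aDone flag is my addition (not in the original), and the signal becomes pthread_cond_broadcast because two different predicates would now share the same condition variable (Thread_B's pthread_cond_signal would need the same change):

// Thread_A, just before unlocking: announce that it has finished
aDone = 1;
pthread_cond_broadcast(&myCondVar);
pthread_mutex_unlock(&myMutex);

// Thread_C: may win the mutex before Thread_A re-acquires it after being
// signaled, so it must not assume Thread_A has already run
pthread_mutex_lock(&myMutex);
while (!aDone) {                  // aDone is only ever written under the mutex
    pthread_cond_wait(&myCondVar, &myMutex);
}
// only now is it safe to do things that could interfere with Thread_A
pthread_mutex_unlock(&myMutex);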
On my never-ending quest to understand std::condition_variable I've run into the following. On this page it says the following:
void print_id (int id) {
std::unique_lock<std::mutex> lck(mtx);
while (!ready) cv.wait(lck);
// ...
std::cout << "thread " << id << '\n';
}
And after that it says this:
void go() {
std::unique_lock<std::mutex> lck(mtx);
ready = true;
cv.notify_all();
}
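For context, on that page the two functions share globals and are driven by a main roughly like this (a reconstruction, not a verbatim copy of the page):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void print_id(int id) {
    std::unique_lock<std::mutex> lck(mtx);
    while (!ready) cv.wait(lck);  // releases mtx while blocked
    std::cout << "thread " << id << '\n';
}

void go() {
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}

int main() {
    std::thread threads[10];
    for (int i = 0; i < 10; ++i)
        threads[i] = std::thread(print_id, i); // each blocks in wait until ready
    go();                                      // flips ready and wakes all ten
    for (auto& th : threads) th.join();
}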
Now as I understand it, both of these functions will halt on the std::unique_lock line until a unique lock is acquired, that is, until no other thread holds the lock.
So say the print_id function is executed first. The unique lock will be acquired and the function will halt on the wait line.
If the go function is then executed (on a separate thread), the code there will halt on the unique_lock line, since the mutex is locked by the print_id function already.
Obviously this wouldn't work if the code was like that. But I really don't see what I'm not getting here. So please enlighten me.
What you're missing is that wait unlocks the mutex and then waits for the signal on cv.
It locks the mutex again before returning.
You could have found this out by clicking on wait on the page where you found the example:
At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called.
There's one point you've missed: calling wait() unlocks the mutex. The thread atomically (releases the mutex + goes to sleep). Then, when woken by the signal, it tries to re-acquire the mutex (possibly blocking); once it acquires it, it can proceed.
Notice that it's not necessary to have the mutex locked when calling notify_*, only for wait*.
To answer the question as posed, which seems necessary given the claims that you should not acquire a lock on notification for performance reasons (isn't correctness more important than performance?): the necessity to lock on "wait" and the recommendation to always lock around "notify" are there to protect the user from himself and to protect his program from data races and logical races.

Without the lock in "go", the program you posted would immediately have a data race on "ready". However, even if "ready" were itself synchronized (e.g. atomic), you would have a logical race with a missed notification: without the lock in "go", the notify can occur just after the check for "ready" and just before the actual wait, and the waiting thread may then remain blocked indefinitely. The synchronization on the atomic variable itself is not enough to prevent this. This is why helgrind warns when a notification is done without holding the lock.

There are some fringe cases where the mutex lock is really not required around the notify. In all of them, there must be bidirectional synchronization beforehand so that the producing thread can know for sure that the other thread is already waiting. IMO these cases are for experts only. (I have actually seen an expert, giving a talk about multi-threading, get this wrong: he thought an atomic counter would suffice.)

That said, the lock around the wait is always necessary for correctness (or, at least, an operation that is atomic with the wait), and this is why the standard library enforces it and atomically unlocks the mutex on entering the wait.
POSIX condition variables, unlike Windows events, are not "idiot-proof": they are stateless (apart from being aware of waiting threads). The recommendation to use a lock around the notify is there to protect you from the worst and most common screwups. You can, of course, build a Windows-like stateful event out of a mutex + condition variable + bool if you like.
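As a concrete illustration of that last point, here is a minimal sketch of such a Windows-like manual-reset event. The class and method names are invented for illustration; the notify is done under the lock, per the recommendation above:

#include <condition_variable>
#include <mutex>

class ManualResetEvent {
public:
    void set() {
        std::lock_guard<std::mutex> lock(m_);
        signaled_ = true;              // state change and notify both under the lock
        cv_.notify_all();
    }

    void reset() {
        std::lock_guard<std::mutex> lock(m_);
        signaled_ = false;
    }

    void wait() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signaled_; }); // handles spurious wakeups
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};

Because the bool carries the state, a waiter that arrives after set() was called returns immediately instead of blocking forever, which is exactly the missed-notification problem the raw condition variable has.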
I have a problem with a deadlock in my code related to the use of condition variables. This is more of a design question than a pure code question. I have no problem actually writing code once I understand the correct design. I have the following scenario:
Thread A waits on a condition variable.
Thread B calls notify_all, and thread A wakes up.
This is of course what I want to happen, and is what does happen when everything works as expected. But sometimes, I get the following scenario instead:
Thread A executes the code right before it begins to wait on the condition variable.
Thread B calls notify_all, thinking that thread A is waiting.
Thread A begins waiting on the condition variable, not realizing that thread B already told it to stop waiting. Deadlock.
What is the best way to resolve this? I can't think of a reliable way to check whether thread A is actually waiting, in order to know when I should call notify_all in thread B. Do I have to resort to timed_lock? I would hate to.
During the period just before Thread A waits on condition variable it must be holding a mutex. The easiest solution is to make sure that Thread B is holding the same mutex at the time it calls notify_all. So something like this:
std::mutex m;
std::condition_variable cv;
int the_condition = 0;
Thread A:
{
    std::unique_lock<std::mutex> lock(m);
    // do something
    while (the_condition == 0) {
        cv.wait(lock);
    }
    // now the_condition != 0 and thread A has the mutex
    // do something else
}   // releases the mutex

Thread B:
{
    std::unique_lock<std::mutex> lock(m);
    // do something that makes the_condition != 0
    cv.notify_all();
}   // releases the mutex
This guarantees that Thread B only does the notify_all() either before Thread A acquires the mutex or while Thread A is waiting on the condition variable.
The other key here, though, is the while loop waiting for the_condition to become true. Once A has the mutex it should not be possible for any other thread to change the_condition until A has tested the_condition, found it false, and started waiting (thus releasing the mutex).
The point is: what you are really waiting for is for the value of the_condition to become non-zero; std::condition_variable::notify_all just tells you that thread B thinks thread A should wake up and retest.
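Put into compilable form (the printout stands in for the real work; everything else follows the sketch above):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
int the_condition = 0;

void thread_a() {
    std::unique_lock<std::mutex> lock(m);
    while (the_condition == 0) {     // retest after every wakeup
        cv.wait(lock);               // releases m while blocked
    }
    std::cout << "A saw the_condition = " << the_condition << '\n';
}   // releases the mutex

void thread_b() {
    std::unique_lock<std::mutex> lock(m);
    the_condition = 1;               // change the state under the mutex
    cv.notify_all();                 // A either hasn't locked m yet, or is in wait()
}   // releases the mutex

int main() {
    std::thread a(thread_a), b(thread_b);
    a.join();
    b.join();
}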
A condition variable must always be associated with a mutex to avoid a race condition in which one thread prepares to wait and another thread signals the condition just before the first thread actually waits on it, resulting in a deadlock: the thread would be perpetually waiting for a signal that was already sent. Any mutex can be used; there is no explicit link between the mutex and the condition variable.
I am implementing a manual-reset event using pthreads on Linux, similar to waiting on a manual-reset event with WaitForSingleObject on Windows. I found this post
pthread-like windows manual-reset event
and followed it; however, there is one thing that confuses me:
void mrevent_wait(struct mrevent *ev) {
pthread_mutex_lock(&ev->mutex);
while (!ev->triggered)
    pthread_cond_wait(&ev->cond, &ev->mutex);
pthread_mutex_unlock(&ev->mutex);
}
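For reference, the remainder of the mrevent type from that post follows the standard pattern. Reconstructed roughly (not a verbatim copy; field names match the wait function above), it looks like this:

#include <pthread.h>

struct mrevent {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             triggered;
};

void mrevent_init(struct mrevent *ev) {
    pthread_mutex_init(&ev->mutex, NULL);
    pthread_cond_init(&ev->cond, NULL);
    ev->triggered = 0;
}

void mrevent_trigger(struct mrevent *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->triggered = 1;
    pthread_cond_broadcast(&ev->cond); // manual-reset: wake every waiter
    pthread_mutex_unlock(&ev->mutex);
}

void mrevent_reset(struct mrevent *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->triggered = 0;
    pthread_mutex_unlock(&ev->mutex);
}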
pthread_cond_wait:
Atomically release mutex and cause the calling thread to block on the condition variable cond;
pthread_mutex_unlock:
Attempts to unlock the specified mutex. If the mutex type is PTHREAD_MUTEX_NORMAL, error detection is not provided. If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
What I am scared of is: when pthread_cond_wait releases the mutex, might the later pthread_mutex_unlock then hit that undefined behavior? (This kind of thing would drive me crazy; how come they don't handle it :-D)
Thank you.
The standard says:
"Upon successful return, the mutex has been locked and is owned by the calling thread."
Which means that before returning, pthread_cond_wait re-acquires (locks) the associated mutex.
The workflow is like this:
You lock the mutex.
pthread_cond_wait atomically blocks and unlocks the mutex (so other threads can get this far too).
When the condition is signaled, pthread_cond_wait re-locks the mutex and returns.
You unlock the mutex.
I don't think pthread_cond_wait blocks and unlocks.
That's because you didn't read the link I provided.
These functions atomically release mutex and cause the calling thread to block on the condition variable cond;