Do pthread_cond_wait and pthread_mutex_unlock conflict? - c++

I am implementing a manual-reset event using pthreads on Linux, similar to waiting on a manual-reset event with WaitForSingleObject in Windows. I found this post
pthread-like windows manual-reset event
and followed it, but there is one thing that confuses me:
void mrevent_wait(struct mrevent *ev) {
    pthread_mutex_lock(&ev->mutex);
    while (!ev->triggered)
        pthread_cond_wait(&ev->cond, &ev->mutex);
    pthread_mutex_unlock(&ev->mutex);
}
pthread_cond_wait:
Atomically release mutex and cause the calling thread to block on the condition variable cond;
pthread_mutex_unlock:
Attempts to unlock the specified mutex. If the mutex type is PTHREAD_MUTEX_NORMAL, error detection is not provided. If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
What scares me is that, since pthread_cond_wait releases the mutex, the pthread_mutex_unlock that follows might then hit this undefined behavior (this kind of thing would drive me crazy; how come they don't handle it :-D).
Thank you.

The standard says:
Upon successful return, the mutex has been locked and is owned by the calling thread.
Which means that, before it returns, pthread_cond_wait re-acquires (locks) the associated mutex.
The workflow is like this (a sketch of the signaling side follows the list):
You lock a mutex
pthread_cond_wait atomically blocks and unlocks the mutex (so other threads might get here)
When a signal or broadcast arrives, pthread_cond_wait re-locks the mutex and returns
You unlock the mutex
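For completeness, here is a minimal sketch of the signaling side, assuming a struct mrevent roughly like the one in the linked post (a mutex, a condition variable and a triggered flag); the flag is changed under the mutex so a waiter cannot miss the update:
#include <pthread.h>

struct mrevent {                         /* assumed layout, per the linked post */
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             triggered;
};

void mrevent_trigger(struct mrevent *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->triggered = 1;                   /* state change made under the lock */
    pthread_cond_broadcast(&ev->cond);   /* wake every waiter (manual-reset semantics) */
    pthread_mutex_unlock(&ev->mutex);
}

void mrevent_reset(struct mrevent *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->triggered = 0;                   /* subsequent waiters block again */
    pthread_mutex_unlock(&ev->mutex);
}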
I don't think pthread_cond_wait blocks and unlocks
That's because you didn't read the link I provided.
These functions atomically release mutex and cause the calling thread to block on the condition variable cond;

Related

C++ Pthread Mutex Locking

I guess I'm a little unsure of how mutexes work. If a mutex gets locked after some conditional, will it only lock out threads that meet that same condition, or will it lock out all threads regardless until the mutex is unlocked?
Ex:
if (someBoolean)
    pthread_mutex_lock(&mut);
someFunction();
pthread_mutex_unlock(&mut);
Will all threads be stopped from running someFunction(), or just those threads that pass through the if-statement?
A thread is only blocked by a mutex if it actually calls pthread_mutex_lock(). If you call pthread_mutex_lock() in thread A, but not in thread B, thread B does not block (and you have likely defeated the purpose of mutual exclusion in your code, as it's only useful if every thread follows the same locking protocol to protect your code).
The code in question has a few issues outlined below:
if (someBoolean)               // if someBoolean is shared among threads, you need to lock
                               // access to this variable as well.
    pthread_mutex_lock(&mut);
someFunction();                // now you have some threads calling someFunction() with the lock
                               // held, and some calling it without the lock held, which
                               // defeats the purpose of mutual exclusion.
pthread_mutex_unlock(&mut);    // if you did not lock *mut* above, you must not
                               // attempt to unlock it.
Will all threads be stopped from running someFunction(), or just those threads that pass through the if-statement?
Only the threads for which someBoolean is true will obtain the lock. Therefore, only those threads will be prevented from calling someFunction() while someone else holds the same lock.
However, in the provided code, all threads will call pthread_mutex_unlock on the mutex, regardless of whether they actually locked it. For mutexes created with default parameters this constitutes undefined behavior and must be fixed.
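To make this concrete, one possible repair is to take the lock unconditionally, so that every thread follows the same protocol. A minimal sketch only, reusing the question's names (someBoolean, mut, someFunction) and assuming someFunction() always touches the shared data:
#include <pthread.h>

pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
int someBoolean;                   /* shared flag from the question */
void someFunction(void);           /* declared elsewhere in the question's code */

void thread_body(void) {
    pthread_mutex_lock(&mut);      /* every thread takes the same lock, unconditionally */
    if (someBoolean) {             /* reading the shared flag is now safe */
        /* update shared state guarded by mut here */
    }
    someFunction();                /* always called with the lock held */
    pthread_mutex_unlock(&mut);    /* unlock only what this thread locked */
}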

do condition variables unlock their mutex?

I am working with condition variables and I am assuming they unlock their associated mutex on wait; otherwise, the mutex would never be released. Yet, I can't find this information in any documentation. Consider the following code:
std::condition_variable consumerWakeMeUp;
std::mutex queueMutex;
// this locks the mutex
std::unique_lock<std::mutex> lk(queueMutex);
// going to sleep now
consumerWakeMeUp.wait(lk);
Does the "consumerWakeMeUp.wait(lk)" unlock the mutex? It must I assume otherwise the thread would hand on that mutext forever. But if anyone knows more the details I'd appreciate the input.
thank you.
never mind
"Atomically releases lock, blocks the current executing thread, and adds it to the list of threads waiting on *this. The thread will be unblocked when notify_all() or notify_one() is executed. It may also be unblocked spuriously. When unblocked, regardless of the reason, lock is reacquired and wait exits. If this function exits via exception, lock is also reacquired. (until C++14)"

Why do both the notify and wait function of a std::condition_variable need a locked mutex

On my never-ending quest to understand std::condition_variable I've run into the following. On this page it says the following:
void print_id (int id) {
    std::unique_lock<std::mutex> lck(mtx);
    while (!ready) cv.wait(lck);
    // ...
    std::cout << "thread " << id << '\n';
}
And after that it says this:
void go() {
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}
Now as I understand it, both of these functions will halt on the std::unique_lock line until a unique lock is acquired, that is, until no other thread holds the lock.
So say the print_id function is executed first. The unique lock will be acquired and the function will halt on the wait line.
If the go function is then executed (on a separate thread), the code there will halt on the unique lock line. Since the mutex is locked by the print_id function already.
Obviously this wouldn't work if the code was like that. But I really don't see what I'm not getting here. So please enlighten me.
What you're missing is that wait unlocks the mutex and then waits for the signal on cv.
It locks the mutex again before returning.
You could have found this out by clicking on wait on the page where you found the example:
At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called.
There's one point you've missed—calling wait() unlocks the mutex. The thread atomically (releases the mutex + goes to sleep). Then, when woken by the signal, it tries to re-acquire the mutex (possibly blocking); once it acquires it, it can proceed.
Notice that it's not necessary to have the mutex locked for calling notify_*, only for wait*
To answer the question as posed, which seems necessary given the claims that you should not acquire a lock on notification for performance reasons (isn't correctness more important than performance?): the requirement to lock on "wait" and the recommendation to always lock around "notify" are there to protect the user from himself and to protect his program from data races and logical races.

Without the lock in "go", the program you posted would immediately have a data race on "ready". However, even if "ready" were itself synchronized (e.g. atomic), you would have a logical race with a missed notification: without the lock in "go", it is possible for the notify to occur just after the check of "ready" and just before the actual wait, and the waiting thread may then remain blocked indefinitely. The synchronization on the atomic variable by itself is not enough to prevent this. This is why helgrind will warn when a notification is done without holding the lock.

There are some fringe cases where the mutex lock is really not required around the notify. In all of these cases there needs to be bidirectional synchronization beforehand, so that the producing thread can know for sure that the other thread is already waiting. IMO these cases are for experts only. Actually, I have seen an expert, giving a talk about multi-threading, get this wrong: he thought an atomic counter would suffice.

That said, the lock around the wait is always necessary for correctness (or, at least, an operation that is atomic with the wait), and this is why the standard library enforces it and atomically unlocks the mutex on entering the wait.
POSIX condition variables are, unlike Windows events, not "idiot-proof" because they are stateless (apart from being aware of waiting threads). The recommendation to use a lock on the notify is there to protect you from the worst and most common screwups. You can build a Windows-like stateful event using a mutex + condition var + bool variable if you like, of course.
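As an illustration, here is a minimal sketch of such a stateful, Windows-style manual-reset event built from a mutex, a condition variable and a bool (the class and its names are invented for this example, not taken from any library):
#include <condition_variable>
#include <mutex>

class manual_reset_event {                 // hypothetical helper, not a standard type
public:
    void set() {
        std::lock_guard<std::mutex> lk(m_);
        signaled_ = true;                  // the state lives in the bool, not in the cv
        cv_.notify_all();
    }
    void reset() {
        std::lock_guard<std::mutex> lk(m_);
        signaled_ = false;
    }
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return signaled_; });   // handles spurious wakeups
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};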

Calling pthread_cond_signal without locking mutex

I read somewhere that we should lock the mutex before calling pthread_cond_signal and unlock the mutex after calling it:
The pthread_cond_signal() routine is used to signal (or wake up) another thread which is waiting on the condition variable. It should be called after mutex is locked, and must unlock mutex in order for pthread_cond_wait() routine to complete.
My question is: isn't it OK to call pthread_cond_signal or pthread_cond_broadcast methods without locking the mutex?
If you do not lock the mutex in the codepath that changes the condition and signals, you can lose wakeups. Consider this pair of processes:
Process A:
pthread_mutex_lock(&mutex);
while (condition == FALSE)
    pthread_cond_wait(&cond, &mutex);
pthread_mutex_unlock(&mutex);
Process B (incorrect):
condition = TRUE;
pthread_cond_signal(&cond);
Then consider this possible interleaving of instructions, where condition starts out as FALSE:
Process A                              Process B
pthread_mutex_lock(&mutex);
while (condition == FALSE)
                                       condition = TRUE;
                                       pthread_cond_signal(&cond);
pthread_cond_wait(&cond, &mutex);
The condition is now TRUE, but Process A is stuck waiting on the condition variable - it missed the wakeup signal. If we alter Process B to lock the mutex:
Process B (correct):
pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
...then the above cannot occur; the wakeup will never be missed.
(Note that you can actually move the pthread_cond_signal() itself after the pthread_mutex_unlock(), but this can result in less optimal scheduling of threads, and you've necessarily locked the mutex already in this code path due to changing the condition itself).
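In other words, the following variant of Process B is also correct, since the shared state is still modified under the mutex and only the wakeup is issued after the unlock (subject to the scheduling caveat above):
pthread_mutex_lock(&mutex);
condition = TRUE;              /* the shared state is still changed under the lock */
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond);    /* signalling after the unlock is permitted by POSIX */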
According to this manual:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
The meaning of the predictable scheduling behavior statement was explained by Dave Butenhof (author of Programming with POSIX Threads) on comp.programming.threads and is available here.
caf, in your sample code, Process B modifies condition without locking the mutex first. If Process B simply locked the mutex during that modification, and then still unlocked the mutex before calling pthread_cond_signal, there would be no problem --- am I right about that?
I believe intuitively that caf's position is correct: calling pthread_cond_signal without owning the mutex lock is a Bad Idea. But caf's example is not actually evidence in support of this position; it's simply evidence in support of the much weaker (practically self-evident) position that it is a Bad Idea to modify shared state protected by a mutex unless you have locked that mutex first.
Can anyone provide some sample code in which calling pthread_cond_signal followed by pthread_mutex_unlock yields correct behavior, but calling pthread_mutex_unlock followed by pthread_cond_signal yields incorrect behavior?

Non-recursive mutex ownership

I read this answer on SO:
Because the recursive mutex has a sense of ownership, the thread that grabs the mutex must be the same thread that releases the mutex. In the case of non-recursive mutexes, there is no sense of ownership and any thread can usually release the mutex no matter which thread originally took the mutex.
I'm confused by the last statement. Can one thread lock a mutex and another different thread unlock that mutex? I thought that same thread should be the only one able to unlock the mutex? Or is there any specific mutex that allows this? I hope someone can clarify.
Non-recursive mutex
Most mutexes are (or at least should be) non-recursive. A mutex is an object which can be acquired or released atomically, which allows data which is shared between multiple threads to be protected against race conditions, data corruption, and other nasty things.
A single mutex should only ever be acquired once by a single thread within the same call chain. Attempting to acquire (or hold) the same mutex twice, within the same thread context, should be considered an invalid scenario, and should be handled appropriately (usually via an ASSERT as you are breaking a fundamental contract of your code).
Recursive mutex
This should be considered a code smell, or a hack. The only way in which a recursive mutex differs from a standard mutex is that a recursive mutex can be acquired multiple times by the same thread.
The root cause of the need for a recursive mutex is lack of ownership and no clear purpose or delineation between classes. For example, your code may call into another class, which then calls back into your class. The starting class could then try to acquire the same mutex again, and because you want to avoid a crash, you implement this as a recursive mutex.
This kind of topsy-turvy class hierarchy can lead to all sorts of headaches, and a recursive mutex provides only a band-aid solution for a more fundamental architectural problem.
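As a sketch of the re-entry scenario described above (all names are invented for the example), this is the kind of call chain that pushes people towards a recursive mutex:
#include <mutex>

std::recursive_mutex m;            // a plain std::mutex would deadlock here

void callback_from_other_class() {
    std::lock_guard<std::recursive_mutex> lk(m);   // same thread locks m again
    // ... touch shared state ...
}

void entry_point() {
    std::lock_guard<std::recursive_mutex> lk(m);   // first acquisition
    // ... calls into another class, which calls back into us ...
    callback_from_other_class();                   // re-entrant lock succeeds
}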
Regardless of the mutex type, it should always be the same thread which acquires and releases the same mutex. The general pattern that you use in code is something like this (a concrete pthread sketch follows the outline):
Thread 1
Acquire mutex A
// Modify or read shared data
Release mutex A
Thread 2
Attempt to acquire mutex A
Block as thread 1 has mutex A
When thread 1 has released mutex A, acquire it
// Modify or read shared data
Release mutex A
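In pthread terms, the outline above boils down to something like the following sketch (mutex_a and shared_data are placeholder names; each thread runs worker):
#include <pthread.h>

pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER;
int shared_data;                       /* placeholder for the protected state */

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mutex_a);      /* acquire mutex A (blocks while another thread holds it) */
    shared_data++;                     /* modify or read shared data */
    pthread_mutex_unlock(&mutex_a);    /* release mutex A from the same thread that locked it */
    return NULL;
}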
It gets more complicated when you have multiple mutexes that can be acquired simultaneously (say, mutex A and B). There is the risk that you'll run into a deadlock situation like this:
Thread 1
Acquire mutex A
// Access some data...
*** Context switch to thread 2 ***
Thread 2
Acquire mutex B
// Access some data
*** Context switch to thread 1 ***
Attempt to acquire mutex B
Wait for thread 2 to release mutex B
*** Context switch to thread 2 ***
Attempt to acquire mutex A
Wait for thread 1 to release mutex A
*** DEADLOCK ***
We now have a situation where each thread is waiting for the other thread to release the other lock -- this is known as the ABBA deadlock pattern.
To prevent this situation, it is important that each thread always acquires the mutexes in the same order (e.g. always A, then B).
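In C++, one common way to sidestep the ordering problem is std::scoped_lock (C++17) or std::lock, which acquire several mutexes using a deadlock-avoidance algorithm. A minimal sketch with illustrative names:
#include <mutex>

std::mutex mutexA;
std::mutex mutexB;

void thread_1() {
    std::scoped_lock lk(mutexA, mutexB);   // both locks taken without risk of ABBA deadlock
    // ... access data protected by A and B ...
}

void thread_2() {
    std::scoped_lock lk(mutexB, mutexA);   // the order given in the call does not matter here
    // ... access data protected by A and B ...
}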
Recursive mutexes are thread-specific by design (the same thread locking again = recursion, another thread locking meanwhile = block). Regular mutexes do not have this design; in practice they can often be locked in one thread and unlocked in another, though POSIX leaves that undefined for the default mutex type (see the man page excerpt below).
I think this covers all your questions. Straight from linux man pages for pthreads:
If the mutex type is PTHREAD_MUTEX_NORMAL, deadlock detection shall not be provided. Attempting to relock the mutex causes deadlock. If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
If the mutex type is PTHREAD_MUTEX_ERRORCHECK, then error checking shall be provided. If a thread attempts to relock a mutex that it has already locked, an error shall be returned. If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, an error shall be returned.
If the mutex type is PTHREAD_MUTEX_RECURSIVE, then the mutex shall maintain the concept of a lock count. When a thread successfully acquires a mutex for the first time, the lock count shall be set to one. Every time a thread relocks this mutex, the lock count shall be incremented by one. Each time the thread unlocks the mutex, the lock count shall be decremented by one. When the lock count reaches zero, the mutex shall become available for other threads to acquire. If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, an error shall be returned.
If the mutex type is PTHREAD_MUTEX_DEFAULT, attempting to recursively lock the mutex results in undefined behavior. Attempting to unlock the mutex if it was not locked by the calling thread results in undefined behavior. Attempting to unlock the mutex if it is not locked results in undefined behavior.
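If you would rather get an error code than undefined behavior, you can select one of the checked types explicitly through a mutex attribute object; a minimal sketch:
#include <pthread.h>

pthread_mutex_t mutex;

void init_checked_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);  /* or PTHREAD_MUTEX_RECURSIVE */
    pthread_mutex_init(&mutex, &attr);
    pthread_mutexattr_destroy(&attr);    /* the mutex keeps its type after init */
}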