Deadlocks related to scheduling - c++

On the Oracle docs for multithreading they have this paragraph about deadlocks when trying to acquire a lock:
Because there is no guaranteed order in which locks are acquired, a problem in threaded programs is that a particular thread never acquires a lock, even though it seems that it should. This usually happens when the thread that holds the lock releases it, lets a small amount of time pass, and then reacquires it. Because the lock was released, it might seem that the other thread should acquire the lock. But, because nothing blocks the thread holding the lock, it continues to run from the time it releases the lock until it reacquires the lock, and so no other thread is run.
Just to make sure I understood this, I tried to write this out in code (let me know if this is a correct interpretation):
#include <mutex>
#include <chrono>
#include <thread>
#include <iostream>

std::mutex m;

void f()
{
    std::unique_lock<std::mutex> lock(m); // acquire the lock
    std::cout << std::this_thread::get_id() << " now has the mutex\n";

    lock.unlock(); // release the lock
    std::this_thread::sleep_for(std::chrono::seconds(2)); // sleep for a while

    lock.lock(); // reacquire the lock
    std::cout << std::this_thread::get_id() << " has the mutex after sleep\n";
}

int main()
{
    std::thread a(f); // thread A
    std::thread b(f); // thread B
    a.join();         // join after both threads start, so A and B actually run concurrently
    b.join();
}
So what the quote above is saying is that the time during which the lock is released and the thread is sleeping (as in the code above) is not sufficient to guarantee that a thread waiting on the lock will acquire it? How does this relate to deadlocks?

The document is addressing a specific kind of fairness problem and is lumping it into its discussions about deadlock. The document correctly defines deadlock to mean a "permanent blocking of a set of threads". It then describes a slightly less obvious way of achieving the permanent blocking condition.
In the scenario described, assume two threads attempt to acquire the same lock simultaneously. Only one will win, so call it W, and the other loses, so call it L. The loser is put to sleep to wait its turn to get the lock.
The quoted text says that L may never get a chance to get the lock, even if it is released by W.
The reason this might happen is the lock does not impose fairness over which thread has acquired it. The lock is more than happy to let W acquire and release it forever. If W is implemented in such a way that it does not need to context switch after it releases the lock, it may just end up acquiring the lock again before L has a chance to wake up to see if the lock is available.
So, in the code below, if W_thread wins the initial race against L_thread, L_thread is effectively deadlocked, even though in theory it could acquire the lock between iterations of W_thread.
void W_thread()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        //...
    }
}

void L_thread()
{
    std::unique_lock<std::mutex> lock(m);
    //...
}
The document recommends using thr_yield() to force a context switch to another thread. This API is specific to Solaris/SunOS. The POSIX version of it is called sched_yield(), although some UNIX versions (including Linux) provide a wrapper called pthread_yield().
In C++11, this is accomplished via std::this_thread::yield().
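A minimal sketch of applying it to the W_thread loop above (note that yield() is only a hint to the scheduler, not a fairness guarantee, as the pthreads question later on this page shows):

#include <mutex>
#include <thread>

std::mutex m;

void W_thread()
{
    for (;;) {
        {
            std::unique_lock<std::mutex> lock(m);
            //...
        }
        // Offer the CPU to other runnable threads, giving L_thread a
        // chance to wake up and acquire m before this loop reacquires it.
        std::this_thread::yield();
    }
}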

Does it make sense to use different mutexes with the same condition variable?

The documentation of the notify_one() function of std::condition_variable at cppreference.com states the following:
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s); in fact doing so is a pessimization, since the notified thread would immediately block again, waiting for the notifying thread to release the lock.
The first part of the sentence is strange: if I hold different mutexes in the notifying and notified threads, then the mutexes have no real meaning, as there is no 'blocking' operation here. In fact, if different mutexes are held, it becomes possible for the notification to be missed entirely! I get the impression that we might as well not lock on the notifying thread in such a case. Can someone clarify this?
Consider the following from cppreference page on condition variables as an example.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex m; // this is supposed to be a pessimization
std::condition_variable cv;
std::string data;
bool ready = false;
bool processed = false;

void worker_thread()
{
    // Wait until main() sends data
    std::unique_lock<std::mutex> lk(m); // a different, local std::mutex is supposedly better
    cv.wait(lk, []{return ready;});
    // after the wait, we own the lock.
    std::cout << "Worker thread is processing data\n";
    data += " after processing";

    // Send data back to main()
    processed = true;
    std::cout << "Worker thread signals data processing completed\n";

    lk.unlock();
    cv.notify_one();
}

int main()
{
    std::thread worker(worker_thread);

    data = "Example data";
    // send data to the worker thread
    {
        std::lock_guard<std::mutex> lk(m); // a different, local std::mutex is supposedly better
        ready = true;
        std::cout << "main() signals data ready for processing\n";
    }
    cv.notify_one();

    // wait for the worker
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{return processed;});
    }
    std::cout << "Back in main(), data = " << data << '\n';

    worker.join();
}
PS. I saw a few questions that are similarly titled but they refer to different aspects of the problem.
I think the wording of cppreference is somewhat awkward here; they were probably just trying to differentiate the mutex used in conjunction with the condition variable from other, unrelated mutexes.
It makes no sense to use a condition variable with different mutexes. The mutex is used to make any change to the actual semantic condition (in the example it is just the variable ready) atomic and it must therefore be held whenever the condition is updated or checked. Also it is needed to ensure that a waiting thread that is unblocked can immediately check the condition without again running into race conditions.
I understand it as follows:
It is OK not to hold the lock on the mutex associated with the condition variable (or any mutex at all) when notify_one is called; however, it is also OK to hold other mutexes for different reasons.
The pessimisation is not that only one mutex is used, but holding this mutex for longer than necessary when you know that another thread is supposed to try to acquire it immediately after being notified.
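As a rough sketch of the difference, reusing the m, ready and cv names from the example above (illustrative, not taken from the original answer):

// Pessimization: notifying while still holding the mutex.
{
    std::lock_guard<std::mutex> lk(m);
    ready = true;
    cv.notify_one(); // the woken waiter immediately blocks on m again...
}                    // ...until the lock is released here

// Better: release first, then notify.
{
    std::lock_guard<std::mutex> lk(m);
    ready = true;
}
cv.notify_one();     // the woken waiter can acquire m right away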
I think that my interpretation agrees with the explanation given in cppreference on condition variable:
The thread that intends to modify the shared variable has to
acquire a std::mutex (typically via std::lock_guard)
perform the modification while the lock is held
execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.
Any thread that intends to wait on std::condition_variable has to
acquire a std::unique_lock<std::mutex>, on the same mutex as used to protect the shared variable
Furthermore the standard expressly forbids using different mutexes for wait, wait_for, or wait_until:
lock.mutex() returns the same value for each of the lock arguments supplied by all concurrently waiting (via wait, wait_for, or wait_until) threads.
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s);
That's misleading. The problem is the word, "same." They should have said, "...does not need to hold the lock on any mutex..." That's the real point. There's an important reason why the waiting thread should have a mutex locked when it enters the wait() call: It's awaiting some change in some shared data structure, and it needs the mutex to be locked when it accesses the structure to check whether or not the awaited change actually has happened.
The notify()ing thread probably needs to lock the same mutex in order to effect that change, but the correctness of the program won't depend on whether it calls notify() before or after it releases the mutex.

Is it always necessary for a notifying thread to lock the shared data during modification?

When using a condition variable, http://en.cppreference.com/w/cpp/thread/condition_variable describes the typical steps for the thread that notifies as:
acquire a std::mutex (typically via std::lock_guard)
perform the modification while the lock is held
execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)
For the simple case shown below, is it necessary for the main thread to lock "stop" when it modifies it? While I understand that locking when modifying shared data is almost always a good idea, I'm not sure why it would be necessary in this case.
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::condition_variable cv;
std::mutex mutex;
bool stop = false;

void worker() {
    std::unique_lock<std::mutex> lock(mutex);
    cv.wait(lock, [] { return stop; });
}

// No lock (Why would this not work?)
int main() {
    std::thread mythread(worker);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    stop = true;
    cv.notify_one();
    mythread.join();
}

// With lock: why is this necessary?
int main() {
    std::thread mythread(worker);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::lock_guard<std::mutex> lock(mutex);
        stop = true;
    }
    cv.notify_one();
    mythread.join();
}
For the simple case shown below, is it necessary for the main thread to lock "stop" when it modifies it?
Yes, to prevent a race condition. Aside from the issue of accessing shared data from different threads, which could be fixed by std::atomic, imagine this order of events:
worker_thread: checks the value of `stop`; it is false
main_thread: sets `stop` to true and sends the signal
worker_thread: sleeps on the condition variable
In this situation the worker thread could sleep forever: it misses the event of stop being set to true simply because the wakeup signal had already been sent before it started waiting.
Acquiring the mutex when modifying the shared data is required because only then can the worker thread treat checking the condition and going to sleep (or acting on the condition) as one atomic operation.
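To make the window concrete, here is a sketch of that interleaving, assuming stop were made std::atomic<bool> while the notifier still skipped the mutex (intentionally broken code, for illustration only):

#include <atomic>
#include <condition_variable>
#include <mutex>

std::condition_variable cv;
std::mutex mutex;
std::atomic<bool> stop{false};

void worker() {
    std::unique_lock<std::mutex> lock(mutex);
    // cv.wait(lock, pred) behaves like: while (!pred()) wait(lock);
    // (a) pred is checked: stop is still false
    //     <-- if main sets stop and calls notify_one() exactly here,
    //         no thread is blocked on cv yet, so the signal is discarded
    // (b) wait(lock) then releases the mutex and blocks, now forever
    cv.wait(lock, [] { return stop.load(); });
}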
Yes, it is necessary.
First of all, from the standard's point of view, your code exhibits undefined behavior: stop is not atomic and isn't changed under a lock while other threads may read or write to it. Both reading and writing must occur under a lock if multiple threads read and write to shared memory.
From a hardware perspective, if stop isn't changed under a lock, other threads, even though they lock some lock, aren't guaranteed to "see" the change that was made. Locks don't just prevent other threads from intervening; they also force the latest change to be read from or written to main memory, so all threads work with the latest values.

Understanding std::condition_variables

I'm trying to understand the flow of the condition_variable when I have more than one thread waiting to execute. To my understanding, all threads would try to grab the unique lock; one would get it and then progress to the wait(). If you call notify_all, wouldn't there be at most one waiting thread allowed through, until it releases its lock and allows the other threads through?
Does the cv communicate with the unique lock and let all threads through at once? If so, is it truly all at once, or do the threads pass through sequentially, one after another?
std::condition_variable cv;
std::mutex cv_m; // This mutex is used for three purposes:
                 // 1) to synchronize accesses to i
                 // 2) to synchronize accesses to std::cerr
                 // 3) for the condition variable cv
int i = 0;

void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    std::cerr << "Waiting... \n";
    cv.wait(lk, []{return i == 1;});
    std::cerr << "...finished waiting. i == 1\n";
}
http://en.cppreference.com/w/cpp/thread/condition_variable/notify_all
When you call wait (the one-argument version) the lock is unlocked and the thread enters a wait-state until the CV is "notified". When a thread wakes up, the lock is locked again.
When you call notify_one, basically a random thread waiting on the CV will be notified. When you call notify_all all threads waiting on the CV will be woken up from the wait-state, and the first that locks the lock will be the one that continues. Which that will be is also random.
Note that when I say "random", the actual implementation of threads (from the C++ library down to the operating system kernel and maybe even the hardware) on your system may be implemented in a way to make it possible to infer which thread will be the one that wakes up and gets the lock, but from the point of view of us application writers who uses condition variables, there's no predetermined order, it's random.
While the threads do have to call wait one at a time, they don't hold the lock while they're waiting, so additional threads can get through to the wait function.
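To see this concretely, here is a minimal complete program in the spirit of the linked cppreference example (the signals() function and main() are reconstructed from that page, not from the question):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::condition_variable cv;
std::mutex cv_m;
int i = 0;

void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    std::cerr << "Waiting... \n";
    cv.wait(lk, []{ return i == 1; });
    std::cerr << "...finished waiting. i == 1\n";
}

void signals()
{
    {
        std::lock_guard<std::mutex> lk(cv_m);
        i = 1;
        std::cerr << "Notifying...\n";
    }
    // All three waiters leave the wait-state here, but they re-acquire
    // cv_m one at a time, so the final prints come out sequentially.
    cv.notify_all();
}

int main()
{
    std::thread t1(waits), t2(waits), t3(waits), t4(signals);
    t1.join(); t2.join(); t3.join(); t4.join();
}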

Why do both the notify and wait function of a std::condition_variable need a locked mutex

On my neverending quest to understand std::condition_variable I've run into the following. On this page it says the following:
void print_id (int id) {
    std::unique_lock<std::mutex> lck(mtx);
    while (!ready) cv.wait(lck);
    // ...
    std::cout << "thread " << id << '\n';
}
And after that it says this:
void go() {
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}
Now as I understand it, both of these functions will halt on the std::unique_lock line until a unique lock is acquired, that is, until no other thread has a lock.
So say the print_id function is executed first. The unique lock will be acquired and the function will halt on the wait line.
If the go function is then executed (on a separate thread), the code there will halt on the unique lock line, since the mutex is locked by the print_id function already.
Obviously this wouldn't work if the code was like that. But I really don't see what I'm not getting here. So please enlighten me.
What you're missing is that wait unlocks the mutex and then waits for the signal on cv.
It locks the mutex again before returning.
You could have found this out by clicking on wait on the page where you found the example:
At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called.
There's one point you've missed: calling wait() unlocks the mutex. The thread atomically (releases the mutex + goes to sleep). Then, when woken by the signal, it tries to re-acquire the mutex (possibly blocking); once it acquires it, it can proceed.
Notice that it's not necessary to have the mutex locked for calling notify_*, only for wait*
To answer the question as posed, which seems necessary regarding claims that you should not acquire a lock on notification for performance reasons (isn't correctness more important than performance?): the necessity to lock on "wait" and the recommendation to always lock around "notify" is to protect the user from himself and his program from data and logical races.
Without the lock in "go", the program you posted would immediately have a data race on "ready". However, even if ready were itself synchronized (e.g. atomic) you would have a logical race with a missed notification, because without the lock in "go" it is possible for the notify to occur just after the check for "ready" and just before the actual wait, and the waiting thread may then remain blocked indefinitely. The synchronization on the atomic variable itself is not enough to prevent this. This is why helgrind will warn when a notification is done without holding the lock.
There are some fringe cases where the mutex lock is really not required around the notify. In all of these cases, there needs to be a bidirectional synchronization beforehand so that the producing thread can know for sure that the other thread is already waiting. IMO these cases are for experts only. Actually, I have seen an expert, giving a talk about multi-threading, get this wrong: he thought an atomic counter would suffice.
That said, the lock around the wait is always necessary for correctness (or, at least, an operation that is atomic with the wait), and this is why the standard library enforces it and atomically unlocks the mutex on entering the wait.
POSIX condition variables are, unlike Windows events, not "idiot-proof" because they are stateless (apart from being aware of waiting threads). The recommendation to use a lock on the notify is there to protect you from the worst and most common screwups. You can build a Windows-like stateful event using a mutex + condition var + bool variable if you like, of course.
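For illustration, a sketch of such a stateful event might look like this (ManualResetEvent is an illustrative name, not a standard type):

#include <condition_variable>
#include <mutex>

class ManualResetEvent {
public:
    void set()
    {
        {
            std::lock_guard<std::mutex> lk(m_);
            signaled_ = true; // the state persists even if nobody is waiting yet
        }
        cv_.notify_all();
    }

    void reset()
    {
        std::lock_guard<std::mutex> lk(m_);
        signaled_ = false;
    }

    void wait()
    {
        std::unique_lock<std::mutex> lk(m_);
        // Because the flag is checked under the mutex, a set() that happened
        // before (or concurrently with) this call can never be missed.
        cv_.wait(lk, [this] { return signaled_; });
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};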

pthreads: thread starvation caused by quick re-locking

I have two threads: one works in a tight loop, and the other occasionally needs to perform a synchronization with the first:
// thread 1
while (1)
{
    lock(work);
    // perform work
    unlock(work);
}

// thread 2
while (1)
{
    // unrelated work that takes a while
    lock(work);
    // synchronizing step
    unlock(work);
}
My intention is that thread 2 can, by taking the lock, effectively pause thread 1 and perform the necessary synchronization. Thread 1 can also offer to pause, by unlocking, and if thread 2 is not waiting on lock, re-lock and return to work.
The problem I have encountered is that mutexes are not fair, so thread 1 quickly re-locks the mutex and starves thread 2. I have attempted to use pthread_yield, and so far it seems to run okay, but I am not sure it will work for all systems / number of cores. Is there a way to guarantee that thread 1 will always yield to thread 2, even on multi-core systems?
What is the most effective way of handling this synchronization process?
You can build a FIFO "ticket lock" on top of pthreads mutexes, along these lines:
#include <pthread.h>

typedef struct ticket_lock {
    pthread_cond_t cond;
    pthread_mutex_t mutex;
    unsigned long queue_head, queue_tail;
} ticket_lock_t;

#define TICKET_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER }

void ticket_lock(ticket_lock_t *ticket)
{
    unsigned long queue_me;

    pthread_mutex_lock(&ticket->mutex);
    queue_me = ticket->queue_tail++;
    while (queue_me != ticket->queue_head)
    {
        pthread_cond_wait(&ticket->cond, &ticket->mutex);
    }
    pthread_mutex_unlock(&ticket->mutex);
}

void ticket_unlock(ticket_lock_t *ticket)
{
    pthread_mutex_lock(&ticket->mutex);
    ticket->queue_head++;
    pthread_cond_broadcast(&ticket->cond);
    pthread_mutex_unlock(&ticket->mutex);
}
Under this kind of scheme, no low-level pthreads mutex is held while a thread is within the ticketlock protected critical section, allowing other threads to join the queue.
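A hypothetical usage in the asker's two-thread scenario might look like this (the function names are illustrative):

ticket_lock_t work = TICKET_LOCK_INITIALIZER;

// thread 1
void *worker(void *arg)
{
    for (;;) {
        ticket_lock(&work);
        // perform work
        ticket_unlock(&work);
        // If thread 2 called ticket_lock() meanwhile, it holds an earlier
        // ticket than the one taken on the next iteration, so it goes first.
    }
    return NULL;
}

// thread 2
void *synchronizer(void *arg)
{
    for (;;) {
        // unrelated work that takes a while
        ticket_lock(&work);
        // synchronizing step
        ticket_unlock(&work);
    }
    return NULL;
}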
In your case it is better to use a condition variable to notify the second thread when it needs to wake up and perform its operations.
pthread offers a notion of thread priority in its API. When two threads are competing over a mutex, the scheduling policy determines which one will get it. The function pthread_attr_setschedpolicy lets you set that, and pthread_attr_getschedpolicy permits retrieving the information.
Now the bad news:
When only two threads are locking / unlocking a mutex, I can't see any sort of competition; the first that runs the atomic instruction takes it, and the other blocks. I am not sure whether this attribute applies here.
The function can take different parameters (SCHED_FIFO, SCHED_RR, SCHED_OTHER and SCHED_SPORADIC), but it has been answered in this question that only SCHED_OTHER is supported on Linux.
So I would give it a shot if I were you, but not expect too much. pthread_yield seems more promising to me. More information available here.
The ticket lock above looks like the best solution. However, to ensure your pthread_yield works, you could have a bool waiting, which is set and reset by thread 2; thread 1 yields as long as waiting is set.
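A rough sketch of that idea, expressed with C++11 atomics for brevity (the names waiting, synchronize and tight_loop are illustrative, and this gives only best-effort fairness):

#include <atomic>
#include <mutex>
#include <thread>

std::mutex work;
std::atomic<bool> waiting{false};

// thread 2: announce the intent to synchronize before blocking on the lock
void synchronize()
{
    waiting.store(true);
    std::lock_guard<std::mutex> lk(work);
    waiting.store(false); // we have the lock; thread 1 may stop yielding
    // synchronizing step
}

// thread 1: between iterations, back off while thread 2 is waiting
void tight_loop()
{
    for (;;) {
        {
            std::lock_guard<std::mutex> lk(work);
            // perform work
        }
        while (waiting.load())
            std::this_thread::yield();
    }
}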
Here's a simple solution which will work for your case (two threads). If you're using std::mutex then this class is a drop-in replacement. Change your mutex to this type and you are guaranteed that if one thread holds the lock and the other is waiting on it, once the first thread unlocks, the second thread will grab the lock before the first thread can lock it again.
If more than two threads happen to use the mutex simultaneously it will still function but there are no guarantees on fairness.
If you're using plain pthread_mutex_t you can easily change your locking code according to this example (unlock remains unchanged).
#include <mutex>

// Behaves the same as std::mutex but guarantees fairness as long as
// up to two threads are using (holding/waiting on) it.
// When one thread unlocks the mutex while another is waiting on it,
// the other is guaranteed to run before the first thread can lock it again.
class FairDualMutex : public std::mutex {
public:
    void lock() {
        _fairness_mutex.lock();
        std::mutex::lock();
        _fairness_mutex.unlock();
    }
private:
    std::mutex _fairness_mutex;
};
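A brief usage sketch in the asker's scenario follows; note that std::lock_guard<FairDualMutex> dispatches statically to the overriding lock(), while unlock() is inherited from std::mutex:

FairDualMutex work;

void thread1()
{
    for (;;) {
        std::lock_guard<FairDualMutex> lk(work);
        // perform work; if thread 2 is already waiting, it is guaranteed
        // to get the lock before this loop can reacquire it
    }
}

void thread2()
{
    // unrelated work that takes a while
    std::lock_guard<FairDualMutex> lk(work);
    // synchronizing step
}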