C++ std::scoped_lock: What happens if the lock blocks? - c++

I am interested to know more about how the std::scoped_lock operates.
I am making some modifications to some thread-unsafe code by adding a mutex around a critical section.
I am using a std::scoped_lock to do this.
There are two possible things which can happen when the line of code std::scoped_lock lock(mutex) is reached:
The mutex is locked successfully. Nothing to be concerned with here.
The mutex is being held by another thread (A). The mutex cannot be locked, and presumably the current thread (B) blocks.
Regarding point 2, what happens when the thread which was able to lock the mutex unlocks it again? I.e., when thread A unlocks the mutex (its scoped_lock goes out of scope), what does thread B do? Does it automatically wake up and try to lock the mutex again? (Does thread B sleep while it is unable to lock the mutex? Presumably it does not sit in an infinite while(1)-style loop hogging the CPU.)
As you can see I don't have a complete understanding of how a scoped_lock works. Is anyone able to enlighten me?

Your basic assumptions are pretty much correct: the thread blocks, doesn't hog the CPU too much (a brief spin is OK), and the implementation deals with thread wake-ups.
There's one special case that should be mentioned: when a mutex is used not just to protect a shared resource, but specifically to coordinate between threads A and B, a std::condition_variable can help with the thread synchronization. This does not replace the mutex; in fact, you need to lock a mutex in order to wait on a condition variable. Many operating systems can take advantage of condition variables to wake up the right threads faster.

Related

condition_variable: notifying after releasing lock optimization effect

It has already been asked a few times whether one should notify a condition_variable while holding the lock:
lk.lock();
// change state
cv.notify_one();
lk.unlock();
or after releasing it:
lk.lock();
// change state
lk.unlock();
cv.notify_one();
Here's one of such questions: Do I have to acquire lock before calling condition_variable.notify_one()?
Answers point out that both are safe, as long as the lock is held whenever the condition is changed, and that the latter form is better for performance: the awakened thread finds the mutex already unlocked.
I'm wondering how generic and significant this performance effect is:
Does this apply to the Windows CONDITION_VARIABLE (available since Vista, but I'm mostly asking about the Windows 10 implementation)? If so, how significant is the effect?
Does this apply to Qt QWaitCondition on Windows?
Does this apply to boost::condition_variable on Windows?
Does it apply to most Linux systems, where the POSIX condition variable is the underlying implementation? Is there a measurement of it?
The practical issue is that I have to use the first form, where notification is done under the lock, because the structure that manages the condition variables needs protection as well, so I want to know how much this approach is worth avoiding.

When a mutex unlocks does it notify_all or notify_one?

As I understand it, when I have a collection of threads protected by a std::lock_guard or std::unique_lock over a std::mutex, and the mutex is unlocked (either explicitly or by the lock going out of scope), then the waiting threads are notified.
Is this notification a notify_one or a notify_all?
I suspect the former, to avoid "hurry up and wait", but would like to be sure.
What you seem to be asking is this: when thread T0 has locked mutex M, and threads T1..Tn are blocked attempting to lock M, what happens when T0 unlocks M? Obviously only one thread can successfully lock M, so there would be no reason for the system to "notify" (i.e. schedule) more than one waiter. However, your question is not specific to any one platform, so the answer might have to be "it's implementation dependent."
It depends on the implementation. The waiting threads may first spin briefly in user space inside the mutex::lock() call, waiting for the mutex to be unlocked; when it is unlocked, several actively spinning threads could detect this at the same time, but only one will be able to lock it. Once that active period has passed, mutex::lock() issues a system call and the OS puts the thread on a wait list for that mutex. When the mutex is unlocked, only one thread is woken/notified to acquire the lock.

pthread_mutex_lock and pthread_mutex_unlock in another thread

I called pthread_mutex_lock(&th) in one thread, and then I want to unlock the mutex in another thread with pthread_mutex_unlock(&th).
Is it possible to do that?
Or should the mutex be unlocked in the same thread?
It should be unlocked in the same thread. From the man page: "If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results." (http://pubs.opengroup.org/onlinepubs/009604499/functions/pthread_mutex_lock.html)
I just wanted to add to Guijt's answer:
When a thread locks a mutex, it is assumed it is inside a critical section. If we allow another thread to unlock that mutex, the first thread might still be inside the critical section, resulting in problems.
I can see several solutions to your problem:
Option 1: Rethink your algorithm
Try to understand why you need to unlock from a different thread, and see if you can arrange for the unlocking to be done by the locking thread. This is the best solution, as it typically produces the code that is simplest to understand and simplest to prove is actually doing what you believe it is doing. With multithreaded programming being as complicated as it is, such simplicity is worth paying a high price for.
Option 2: Synchronize the threads with an event
One might argue this is just a method of implementing option 1 above. The idea is that when the locking thread finishes with the critical section, it does not go off to do other work, but waits on an event. When the second thread wishes to release the lock, it instead signals the event. The first thread then releases the lock itself.
This procedure has the advantage that thread 2 cannot inadvertently release the lock too soon.
Option 3: Don't use a mutex
If neither of the above options works for you, you most likely are not using the mutex for mutual exclusion, but for synchronization. If that is the case, you are likely using the wrong construct.
The construct most resembling a mutex is a semaphore. In fact, for years the Linux kernel did not have a mutex, claiming that it's just a semaphore with a maximal value of 1. A semaphore, unlike a mutex, does not require that the same thread lock and release.
RTFM on sem_init and friends for how to use it.
Please be reminded that you must first model your problem, and only then choose the correct synchronization construct to use. If you do it the other way around, you are almost certain to introduce lots of bugs that are really really really difficult to find and fix.
The whole purpose of using a mutex is to achieve mutual exclusion in a critical section, with ownership being tracked by the kernel. So a mutex has to be unlocked in the same thread that acquired it.

Pthread c++ and mutex

A question about pthreads and mutexes.
I have a producer-consumer architecture with a shared queue.
I have two queue operations: push and pop.
For both of these operations I use a mutex (lock - operation - unlock).
There is one thing I did not understand...
Is it enough to just use mutexes?
Do I need to use signal or wait to wake up a thread?
When a thread finds the mutex locked, will that thread block (is locking a mutex a blocking operation)?
Whenever you are sharing a common resource, using a mutex is a good way to go. Sometimes semaphores might be required.
You do not need to use a signal to wake up a thread, unless you put it to sleep yourself. Usually, if a thread hits a mutex that is locked, it will wait. The CPU scheduling algorithm will take care of the thread, and you can be sure that it will be woken up once the mutex unlocks, without any performance issues.
The thread will not be stuck once it finds a mutex that is locked. It will simply enter a wait queue until the CPU scheduling algorithm decides it should be taken out. But it really depends on your definition of what you perceive as locked.
Also, please rephrase the question a little; it was hard to understand.

Does cancelling a thread while it's in a pthread_cond_wait cause it to reacquire the related mutex?

I'm looking at code in the textbook: Programming With POSIX Threads by David R. Butenhof, and I came across a place that has me a little confused.
In the code, a cleanup handler is registered for a thread. The cleanup handler unlocks a mutex that is used by the condition within that thread.
With threads in general, when pthread_cond_wait is called (with the related mutex locked, as it should be), the mutex is unlocked while the thread waits; it is then reacquired when the condition wait is over (i.e. a signal or broadcast happened), before the call returns.
Since a condition wait does not hold the mutex locked while waiting, I would have thought that if a thread was cancelled while waiting, it would still not have that mutex locked - so why would the cleanup handler need to unlock it?
In fact, I thought unlocking a mutex that was already unlocked was actually an error, making this worse. Can someone tell me where you think I'm confused?
You are correct about unlocking a mutex that is already unlocked being a Bad Thing™.
However, while pthread_cond_wait() is a cancellation point, the interface guarantees that the mutex is reacquired before the cancellation handler runs. If it did not make this guarantee it would be very difficult to know whether or not the mutex was held.
See the specification for details.