As I understand it, when several threads synchronize on a std::mutex (through a std::lock_guard or std::unique_lock) and the mutex is unlocked, either explicitly or by the lock going out of scope, the waiting threads are notified.
Is this notification a notify_one or a notify_all?
I suspect the former, to avoid a thundering herd of threads that wake up only to block again, but I would like to be sure.
What you seem to be asking is: when thread T0 has locked mutex M, and threads T1..Tn are blocked attempting to lock M, what happens when T0 unlocks M? Obviously only one thread can successfully lock M, so there would be no reason for the system to "notify" (i.e. schedule) more than one waiter. However, your question is not specific to any one platform, so the answer may have to be "it's implementation-dependent."
It depends on the implementation. The waiting threads may spin in user space inside the mutex::lock() call, actively waiting for a short time for the mutex to be unlocked; once it is unlocked, several spinning threads may detect that at the same time, but only one will succeed in locking it. If the mutex is still held when the active period is over, mutex::lock() issues a system call and the OS puts the thread on a wait list for that mutex. When the mutex is unlocked, only one thread is woken/notified to take the lock.
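To illustrate that spin-then-block strategy, here is a toy sketch (this is not how any particular std::mutex is implemented; a real one parks the thread in the kernel, e.g. via a futex, rather than yielding in a loop):

    #include <atomic>
    #include <thread>

    // Toy adaptive lock: spin briefly in user space, then give the CPU back.
    class adaptive_lock {
        std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
    public:
        void lock() {
            // Active phase: several spinning threads may all see the unlock,
            // but test_and_set lets exactly one of them win.
            for (int i = 0; i < 4000; ++i)
                if (!flag_.test_and_set(std::memory_order_acquire))
                    return;
            // Passive phase: stop burning CPU. A real mutex would issue a
            // system call here and sleep on a kernel wait list.
            while (flag_.test_and_set(std::memory_order_acquire))
                std::this_thread::yield();
        }
        void unlock() { flag_.clear(std::memory_order_release); }
    };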
I am interested to know more about how the std::scoped_lock operates.
I am making some modifications to some thread-unsafe code by adding a mutex around a critical section.
I am using a std::scoped_lock to do this.
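Roughly like this (all names are placeholders for my real code):

    #include <mutex>

    std::mutex m;         // protects shared_data
    int shared_data = 0;  // stand-in for the real shared state

    void now_thread_safe() {
        std::scoped_lock lock(m);  // blocks here if another thread holds m
        ++shared_data;             // critical section
    }                              // m is released when lock goes out of scope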
There are two possible things which can happen when the line of code std::scoped_lock lock(mutex) is reached:
The mutex is locked successfully. Nothing to be concerned with here.
The mutex is being held by another thread (A). The mutex cannot be locked, and presumably the current thread (B) blocks.
Regarding point 2, what happens when the thread that was able to lock the mutex unlocks it again? That is, when thread A unlocks the mutex (the scoped_lock goes out of scope), what does thread B do? Does it automatically wake up and try to lock the mutex again? (Does thread B sleep while it is unable to lock the mutex? Presumably it does not sit in an infinite while(1)-style loop hogging the CPU.)
As you can see I don't have a complete understanding of how a scoped_lock works. Is anyone able to enlighten me?
Your basic assumptions are pretty much correct - block, don't hog the CPU too much (a bit is OK), and let the implementation deal with thread wake-ups.
There's one special case worth mentioning: when a mutex is used not just to protect a shared resource but specifically to coordinate between threads A and B, a std::condition_variable can help with the synchronization. It does not replace the mutex; in fact, you need to lock a mutex in order to wait on a condition variable. Many operating systems can use condition variables to wake up the right threads faster.
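For example, here is a minimal sketch of thread A handing a result to thread B (the names m, cv, and ready are only for illustration):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;  // the shared state the mutex protects

    void thread_b() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return ready; });  // atomically unlocks m and sleeps;
                                            // re-locks m before returning
        std::cout << "B woke up holding the mutex\n";
    }

    void thread_a() {
        { std::lock_guard<std::mutex> lk(m); ready = true; }
        cv.notify_one();  // wake B, which then re-acquires m
    }

    int main() {
        std::thread b(thread_b), a(thread_a);
        a.join();
        b.join();
    }

Note the predicate in cv.wait: it makes the wait immune to spurious wake-ups, since the condition is re-checked every time the thread is woken.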
If a stackful coroutine locks a mutex (consider a non-recursive mutex first) and then yields, the thread t2 that resumes it may be different from the thread t1 that was running it before the yield. What happens then?
If the mutex is a recursive mutex, which of t1 and t2 owns it?
It will just stay locked.
If the re-entry happens on a different thread, the mutex will simply be owned by the wrong thread, leading to UB at best.
Stackless coroutines, on the other hand, are just switch statements in disguise, so if you use lock_guard and similar RAII wrappers there can be excessive locking and unlocking, as well as races when a yield happens under a lock.
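To make the hazard concrete, here is a compilable sketch (the task type below is a deliberately minimal stand-in for whatever coroutine type you actually use):

    #include <coroutine>
    #include <mutex>

    // Minimal fire-and-forget coroutine type, only so the sketch compiles.
    struct task {
        struct promise_type {
            task get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    std::mutex m;

    task broken() {
        std::lock_guard<std::mutex> g(m);  // locked on the current thread, say t1
        co_await std::suspend_always{};    // if resumption later happens on a
                                           // different thread t2, g's destructor
    }                                      // will unlock m on t2: undefined behavior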
If your application consists of many coroutines (in practice you would use fibers), you should not use mutexes.
Instead, you could use something like a spinlock built on atomics (i.e. a mutex that internally spins instead of calling into the kernel to suspend the thread when the lock is contended).
If a coroutine tries to lock this special kind of mutex and the mutex is already locked, you can suspend the coroutine and resume another one (i.e. execute other tasks). When the mutex is unlocked, you can resume the suspended coroutine and let it try to lock the mutex again.
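Here is a sketch of that idea (yield_to_scheduler() is hypothetical; in Boost.Fiber, for instance, the equivalent would be boost::this_fiber::yield()):

    #include <atomic>
    #include <thread>

    // Stand-in for handing control back to the fiber/coroutine scheduler.
    inline void yield_to_scheduler() { std::this_thread::yield(); }

    // Fiber-friendly lock: on contention we never block the OS thread;
    // we suspend the current task so other tasks can run, then retry.
    class fiber_spinlock {
        std::atomic_flag locked_ = ATOMIC_FLAG_INIT;
    public:
        void lock() {
            while (locked_.test_and_set(std::memory_order_acquire))
                yield_to_scheduler();  // run other coroutines, retry when resumed
        }
        void unlock() { locked_.clear(std::memory_order_release); }
    };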
A question about pthreads and mutexes.
I have a producer-consumer architecture with a shared queue.
I have two queue operations: push and pop.
For both of these operations I use a mutex (lock, do the operation, unlock).
There is one thing I don't understand...
Is it enough to just use mutexes?
Do I need to use signal or wait to wake up a thread?
When a thread finds the mutex locked, will that thread block (is locking a mutex a blocking operation)?
Whenever you are sharing a common resource, a mutex is the usual way to go. Sometimes a semaphore might be required instead.
You do not need to send a signal to wake up a thread unless you put it to sleep yourself. Usually, if a thread hits a locked mutex it will wait: the OS scheduler takes care of the thread, and you can be sure it will be woken once the mutex unlocks, without any performance issues.
The thread is not locked up when it finds a locked mutex; it simply enters a wait queue until the scheduler decides it should be taken out. But it really depends on your definition of what you perceive as locked.
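For example, a sketch of such a queue guarded by a single mutex (a toy fixed-size queue; names are only for illustration):

    #include <pthread.h>

    enum { QCAP = 64 };
    static int q[QCAP];
    static int q_head = 0, q_count = 0;
    static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;

    void q_push(int value) {
        pthread_mutex_lock(&q_mutex);   // blocks while another thread holds it
        q[(q_head + q_count++) % QCAP] = value;
        pthread_mutex_unlock(&q_mutex); // a blocked thread is woken to take the lock
    }

    int q_pop(int *out) {               // returns 0 if the queue was empty
        pthread_mutex_lock(&q_mutex);
        int ok = q_count > 0;
        if (ok) {
            *out = q[q_head];
            q_head = (q_head + 1) % QCAP;
            --q_count;
        }
        pthread_mutex_unlock(&q_mutex);
        return ok;
    }

If the consumer should sleep until the queue becomes non-empty, rather than poll, that is exactly the case where pthread_cond_wait and pthread_cond_signal come in (the "put it to sleep yourself" situation mentioned above).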
Also, please rephrase the question a little; it was hard to understand.
I'm looking at code in the textbook: Programming With POSIX Threads by David R. Butenhof, and I came across a place that has me a little confused.
In the code, a cleanup handler is registered for a thread. The cleanup handler unlocks a mutex that is used by the condition within that thread.
With threads in general, when pthread_cond_wait is called (with the related mutex locked, as it should be), the mutex is unlocked while the thread waits; it is then reacquired before the wait returns (i.e. after a signal or broadcast has happened).
Since a condition wait does not hold the mutex locked while waiting, I would have thought that if a thread was cancelled while waiting, it would still not have that mutex locked - so why would the cleanup handler need to release it?
In fact, I thought unlocking a mutex that was already unlocked was actually an error, making this worse. Can someone tell me where you think I'm confused?
You are correct about unlocking a mutex that is already unlocked being a Bad Thing™.
However, while pthread_cond_wait() is a cancellation point, the interface guarantees that the mutex is reacquired before the cancellation handler runs. If it did not make this guarantee it would be very difficult to know whether or not the mutex was held.
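The pattern looks roughly like this (a sketch of the idiom, not Butenhof's exact code):

    #include <pthread.h>

    static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void cleanup_unlock(void *arg) {
        // Runs on cancellation. The mutex is guaranteed to have been
        // re-acquired before this handler is called, so unlocking is correct.
        pthread_mutex_unlock((pthread_mutex_t *)arg);
    }

    static void *waiter(void *unused) {
        pthread_mutex_lock(&mtx);
        pthread_cleanup_push(cleanup_unlock, &mtx);
        while (!ready)
            pthread_cond_wait(&cond, &mtx);  // cancellation point; the mutex is
                                             // held again by the time the handler runs
        pthread_cleanup_pop(1);              // pop and run the handler: unlocks mtx
        return 0;
    }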
See the specification for details.
When using boost::condition_variable, ACE_Conditional, or pthread_cond_wait directly, is there any overhead for the waiting itself? These are the more specific issues that trouble me:
After the waiting thread is unscheduled, will it be scheduled back before the wait is over and then unscheduled again, or will it stay unscheduled until it is signaled?
Does wait periodically acquire the mutex? If so, I guess it wastes some CPU time on system calls to lock and release the mutex on each iteration. Is that the same as constantly acquiring and releasing a mutex?
Also, then, how much time passes between the signal and the return from wait?
AFAIK, when using semaphores, the responsiveness of the acquire calls depends on the scheduler's time-slice size. How does this work in pthread_cond_wait? I assume it is platform-dependent. I am most interested in Linux, but if someone knows how it works on other platforms, that will help too.
And one more question: are there any additional system resources allocated for each condition variable? I won't create 30000 mutexes in my code, but should I worry about 30000 condition variables that all use the same mutex?
Here's what is written in the pthread_cond man page:
pthread_cond_wait atomically unlocks the mutex and waits for the condition variable cond to be signaled. The thread execution is suspended and does not consume any CPU time until the condition variable is signaled.
So from here I'd answer the questions as follows:
The waiting thread won't be scheduled back before the wait is signaled or canceled.
There are no periodic mutex acquisitions. The mutex is reacquired only once before wait returns.
The time that passes between the signal and the return from wait is comparable to an ordinary thread wake-up caused by a mutex release.
Regarding the resources, on the same man page:
In the LinuxThreads implementation, no resources are associated with condition variables, thus pthread_cond_destroy actually does nothing except checking that the condition has no waiting threads.
Update: I dug into the sources of pthread_cond_* functions and the behavior is as follows:
All pthread condition variables on Linux are implemented using futexes.
When a thread calls wait it is suspended and unscheduled. The thread id is inserted at the tail of a list of waiting threads.
When a thread calls signal the thread at the head of the list is scheduled back.
So, the waking is as efficient as the scheduler, no OS resources are consumed and the only memory overhead is the size of the waiting list (see futex_wake function).
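For the curious, the primitive underneath looks roughly like this (Linux-specific sketch, error handling omitted):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <atomic>

    // Sleep in the kernel, but only if *addr still equals expected
    // (this check-and-sleep is atomic, which avoids lost wake-ups).
    static long futex_wait(std::atomic<int> *addr, int expected) {
        return syscall(SYS_futex, addr, FUTEX_WAIT, expected, nullptr, nullptr, 0);
    }

    // Wake up to n threads queued on addr; n == 1 wakes the head of the list.
    static long futex_wake(std::atomic<int> *addr, int n) {
        return syscall(SYS_futex, addr, FUTEX_WAKE, n, nullptr, nullptr, 0);
    }

Conceptually, signaling a condition variable boils down to a FUTEX_WAKE with n = 1, which is why only the thread at the head of the wait list is scheduled back.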
You should only call pthread_cond_wait if the condition you are waiting for is already in the "wrong" state. Since the call always waits, there is always the overhead of putting the current thread to sleep and context-switching.
When the thread is unscheduled, it is unscheduled. It should not use any resources, but of course an OS can in theory be implemented badly. It is allowed to re-acquire the mutex, and even to return, before the signal (which is why you must double-check the condition), but the OS will be implemented so this doesn't impact performance much, if it happens at all. It doesn't happen spontaneously, but rather in response to another, possibly-unrelated signal.
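In code, both points amount to the canonical pattern (a sketch; the names are hypothetical): test the predicate before waiting, and re-test it in a loop after every wake-up.

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int data_ready = 0;  // hypothetical predicate

    void consume_when_ready(void) {
        pthread_mutex_lock(&m);
        while (!data_ready)              // test first: skip the wait (and the
            pthread_cond_wait(&cv, &m);  // context switch) if the state is already
                                         // right; the loop absorbs early returns
        pthread_mutex_unlock(&m);
    }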
30000 mutexes shouldn't be a problem, but some OSes might have a problem with 30000 sleeping threads.