A question about pthreads and mutexes.
I have a producer-consumer architecture with a shared queue.
I have two queue operations: push and pop.
For both of these operations I use a mutex (lock - implementation - unlock).
There is one thing I did not understand...
Is it enough to just use mutexes?
Do I need to use signal or wait to wake up a thread?
When a thread finds the mutex locked, does that thread block (is locking a mutex a blocking operation)?
Whenever you are sharing a common resource, using a mutex is the best way to go. Sometimes semaphores might be required.
You do not need to use a signal to wake up a thread, unless you put it to sleep yourself. Usually, if a thread hits a mutex that is locked, it will wait. The CPU scheduling algorithm will take care of the thread, and you can be sure that it will be woken up once the mutex is unlocked, without any performance issues.
The thread will not be stuck forever when it finds a mutex that is locked. It simply enters the waiting queue until the CPU scheduling algorithm decides it should be taken out. But it really depends on your definition of what you perceive as locked.
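To make that concrete, here is a minimal sketch of the push/pop from the question, written against the pthreads API; the queue type and function names are just placeholders for the example:

    #include <pthread.h>
    #include <queue>

    // Hypothetical shared queue guarded by a single mutex.
    static std::queue<int> shared_queue;
    static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;

    void push(int value) {
        pthread_mutex_lock(&queue_mutex);    // blocks if another thread holds the lock
        shared_queue.push(value);
        pthread_mutex_unlock(&queue_mutex);  // a blocked thread may now acquire it
    }

    bool try_pop(int *out) {
        pthread_mutex_lock(&queue_mutex);
        bool has_item = !shared_queue.empty();
        if (has_item) {
            *out = shared_queue.front();
            shared_queue.pop();
        }
        pthread_mutex_unlock(&queue_mutex);
        return has_item;  // false means the queue was empty and the caller must retry
    }

Note that with a mutex alone the consumer has to retry (poll) when the queue is empty; if you want the consumer to sleep until data is available, that is exactly what pthread_cond_wait/pthread_cond_signal (or std::condition_variable) are for.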
Also, please rephrase the question a little; it was hard to understand.
I am interested to know more about how std::scoped_lock operates.
I am making some modifications to some thread-unsafe code by adding a mutex around a critical section.
I am using a std::scoped_lock to do this.
There are two possible things which can happen when the line of code std::scoped_lock lock(mutex) is reached:
The mutex is locked successfully. Nothing to be concerned with here.
The mutex is being held by another thread (A). The mutex cannot be locked, and presumably the current thread (B) blocks.
Regarding point 2, what happens when the thread that was able to lock the mutex unlocks it again? i.e., when thread A unlocks the mutex (the scoped_lock goes out of scope), what does thread B do? Does it automatically wake up and try to lock the mutex again? (Does thread B sleep while it is unable to lock the mutex? Presumably it does not sit in an infinite while(1)-style loop hogging the CPU.)
As you can see I don't have a complete understanding of how a scoped_lock works. Is anyone able to enlighten me?
Your basic assumptions are pretty much correct - block, don't hog the CPU too much (a bit is OK), and let the implementation deal with thread wake-ups.
There's one special case that should be mentioned: when a mutex is used not just to protect a shared resource, but specifically to coordinate between threads A and B, a std::condition_variable can help with the thread synchronization. This does not replace the mutex; in fact you need to lock a mutex in order to wait on a condition variable. Many operating systems can use condition variables to wake up the right threads faster.
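As a rough illustration (names invented for the example), here is what that combination can look like: a std::scoped_lock guarding the shared state, and a std::condition_variable for the A/B coordination. Note that waiting on a condition variable requires a std::unique_lock rather than a scoped_lock:

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;  // state that thread A sets and thread B waits for

    void thread_a() {
        {
            std::scoped_lock lock(m);  // blocks here if another thread holds m
            ready = true;
        }                              // m is unlocked when lock goes out of scope
        cv.notify_one();               // wake exactly one waiter
    }

    void thread_b() {
        std::unique_lock<std::mutex> lock(m);  // condition_variable needs unique_lock
        cv.wait(lock, [] { return ready; });   // atomically unlocks m and sleeps,
                                               // re-locks m before returning
        // ... use the shared state here, with m held ...
    }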
As I understand it, when I have a collection of threads protected by a std::lock_guard or std::unique_lock over a std::mutex, and the mutex is unlocked (either by explicitly unlocking it or by the lock going out of scope), the waiting threads are notified.
Is this notification a notify_one or a notify_all?
I suspect the former, to avoid "hurry up and wait", but would like to be sure.
What you seem to be asking is: when thread T0 has locked mutex M, and threads T1..Tn are blocked attempting to lock M, what happens when T0 unlocks M? Obviously only one thread can successfully lock M, so there would be no reason for the system to "notify" (i.e. schedule) more than one waiter. However, your question is not specific to any one platform, so the answer might have to be "it's implementation dependent."
It depends on the implementation. The waiting threads may actively wait (spin) in user space, inside the mutex::lock() call, for a short time in the hope that the mutex is unlocked quickly; once it is unlocked, several spinning threads could detect that at the same time, but only one will be able to lock it. Otherwise, after the active-waiting period has passed, mutex.lock() issues a system call and the OS puts the thread on the waiting list for that mutex. When the mutex is unlocked, only one thread is woken/notified to take the lock.
I called pthread_mutex_lock(&th) in one thread, and then I want to unlock the mutex in another thread with pthread_mutex_unlock(&th).
Is it possible to do that?
Or should the mutex be unlocked in the same thread?
It should be unlocked in the same thread. From the man page: "If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results." (http://pubs.opengroup.org/onlinepubs/009604499/functions/pthread_mutex_lock.html)
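In other words, the lock/unlock pair has to stay within one thread, along these lines (just a sketch):

    #include <pthread.h>

    pthread_mutex_t th = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        pthread_mutex_lock(&th);    // this thread becomes the owner
        /* ... critical section ... */
        pthread_mutex_unlock(&th);  // must be done by the same (owning) thread
        return nullptr;
    }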
I just wanted to add to Guijt's answer:
When a thread locks a mutex, it is assumed it is inside a critical section. If we allow another thread to unlock that mutex, the first thread might still be inside the critical section, resulting in problems.
I can see several solutions to your problem:
Option 1: Rethink your algorithm
Try to understand why you need to unlock from a different thread, and see if you can get the unlocking to be done within the locking thread. This is the best solution, as it typically produces the code that is simplest to understand and simplest to prove is actually doing what you believe it is doing. With multithreaded programming being so complicated, the price worth paying for such simplicity is quite high.
Option 2: Synchronize the threads with an event
One might argue it is just a method to implement option 1 above. The idea is that when the locking thread finishes with the critical section, it does not go out to do whatever, but waits on an event. When the second thread wishes to release the lock, it instead signals the event. The first thread then releases the lock.
This procedure has the advantage that thread 2 cannot inadvertently release the lock too soon.
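A possible sketch of option 2 using a POSIX condition variable as the "event" (all names here are invented for the example):

    #include <pthread.h>

    pthread_mutex_t resource_mutex = PTHREAD_MUTEX_INITIALIZER;

    // The "event": a flag protected by its own mutex, plus a condition variable.
    pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  release_requested = PTHREAD_COND_INITIALIZER;
    bool may_release = false;

    void *thread1(void *arg) {
        pthread_mutex_lock(&resource_mutex);   // enter the critical section
        /* ... work on the shared resource ... */

        // Wait until thread 2 signals the event before giving up the lock.
        pthread_mutex_lock(&event_mutex);
        while (!may_release)
            pthread_cond_wait(&release_requested, &event_mutex);
        pthread_mutex_unlock(&event_mutex);

        pthread_mutex_unlock(&resource_mutex); // released by the thread that locked it
        return nullptr;
    }

    void *thread2(void *arg) {
        /* ... decide that the lock may now be released ... */
        pthread_mutex_lock(&event_mutex);
        may_release = true;
        pthread_cond_signal(&release_requested);
        pthread_mutex_unlock(&event_mutex);
        return nullptr;
    }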
Option 3: Don't use a mutex
If neither of the above options works for you, you most likely are not using the mutex for mutual exclusion, but for synchronization. If that is the case, you are likely using the wrong construct.
The construct most resembling a mutex is a semaphore. In fact, for years the Linux kernel did not have a mutex, the claim being that it is just a semaphore with a maximal value of 1. A semaphore, unlike a mutex, does not require that the same thread lock and release it.
RTFM on sem_init and friends for how to use it.
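For instance, an unnamed POSIX semaphore initialised to 1 can be "locked" in one thread and "released" in another; a rough sketch, with error checking omitted:

    #include <semaphore.h>

    sem_t sem;

    void setup() {
        sem_init(&sem, 0, 1);  // 0 = shared between threads of this process, initial value 1
    }

    void *locking_thread(void *arg) {
        sem_wait(&sem);        // "lock": decrements the count, blocks while it is 0
        /* ... protected work ... */
        return nullptr;
    }

    void *releasing_thread(void *arg) {
        /* ... at the appropriate point ... */
        sem_post(&sem);        // "unlock": may legally be done by a different thread
        return nullptr;
    }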
Please be reminded that you must first model your problem, and only then choose the correct synchronization construct to use. If you do it the other way around, you are almost certain to introduce lots of bugs that are really really really difficult to find and fix.
The whole purpose of using a mutex is to achieve mutual exclusion in a critical section, with ownership being tracked by the kernel. So a mutex has to be unlocked in the same thread that acquired it.
I am just wondering if there is any locking policy in C++11 that would prevent threads from starvation.
I have a bunch of threads that are competing for one mutex. Now, my problem is that the thread leaving the critical section immediately starts competing for the same mutex again and most of the time wins. Therefore the other threads waiting on the mutex are starving.
I do not want to let the thread leaving the critical section sleep for some minimal amount of time to give other threads a chance to lock the mutex.
I thought that there must be some parameter that would enable fair locking for threads waiting on the mutex, but I wasn't able to find any appropriate solution.
Well, I found the std::this_thread::yield() function, which is supposed to reschedule the order of thread execution, but it is only a hint to the scheduler, and it depends on the scheduler implementation whether it actually reschedules the threads or not.
Is there any way how to provide fair locking policy for the threads waiting on the same mutex in C++11?
What are the usual strategies?
Thanks
This is a common optimization in mutexes designed to avoid wasting time switching tasks when the same thread can take the mutex again. If the current thread still has time left in its time slice then you get more throughput in terms of user-instructions-executed-per-second by letting it take the mutex rather than suspending it, and switching to another thread (which likely causes a big reload of cache lines and various other delays).
If you have so much contention on a mutex that this is a problem then your application design is wrong. You have all these threads blocked on a mutex, and therefore not doing anything: you are probably better off without so many threads.
You should design your application so that if multiple threads compete for a mutex then it doesn't matter which thread gets the lock. Direct contention should also be a rare thing, especially direct contention with lots of threads.
The only situation where I can think this is an OK scenario is where every thread is waiting on a condition variable, which is then broadcast to wake them all. Every thread will then contend for the mutex, but if you are doing this right then they should all do a quick check that this isn't a spurious wake and then release the mutex. Even then, this is called a "thundering herd" situation, and is not ideal, precisely because it serializes all these threads.
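For what it's worth, the "quick check and release" that makes a broadcast wake-up tolerable is just the usual predicate loop around the wait; a minimal sketch:

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool data_ready = false;  // the condition every waiter is interested in

    void waiter() {
        std::unique_lock<std::mutex> lock(m);
        // After notify_all, each awakened thread re-acquires m in turn, checks the
        // predicate, and (if there is nothing more to do under the lock) releases m quickly.
        cv.wait(lock, [] { return data_ready; });
        // ... do the minimum necessary while holding m, then let it go ...
    }

    void broadcaster() {
        {
            std::lock_guard<std::mutex> lock(m);
            data_ready = true;
        }
        cv.notify_all();  // the "thundering herd": all waiters now contend for m
    }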
I'm looking at code in the textbook: Programming With POSIX Threads by David R. Butenhof, and I came across a place that has me a little confused.
In the code, a cleanup handler is registered for a thread. The cleanup handler unlocks a mutex that is used by the condition within that thread.
With threads in general, when pthread_cond_wait is called (with the related mutex locked, as it should be), the mutex is unlocked while the thread waits - it is then reacquired when the condition wait is over, before it returns (i.e. a signal or broadcast happened).
Since, while waiting, a condition_wait doesn't have the mutex locked, I would have thought that if a thread was cancelled while waiting, it would still not have that mutex locked - so why would the cleanup handler need to free it?
In fact, I thought unlocking a mutex that was already unlocked was actually an error, making this worse. Can someone tell me where you think I'm confused?
You are correct about unlocking a mutex that is already unlocked being a Bad Thing™.
However, while pthread_cond_wait() is a cancellation point, the interface guarantees that the mutex is reacquired before the cancellation cleanup handler runs. If it did not make this guarantee, it would be very difficult to know whether or not the mutex was held.
See the specification for details.
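That is why the Butenhof-style pattern pairs the wait with a cleanup handler that unlocks the mutex. A condensed sketch of the idea (not the book's exact code):

    #include <pthread.h>

    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
    int predicate = 0;

    // Cleanup handler: runs if the thread is cancelled while it owns the mutex.
    void cleanup_unlock(void *arg) {
        pthread_mutex_unlock((pthread_mutex_t *)arg);
    }

    void *waiter(void *arg) {
        pthread_mutex_lock(&mutex);
        pthread_cleanup_push(cleanup_unlock, &mutex);
        while (!predicate)
            // If cancellation happens inside the wait, the mutex is reacquired
            // first, so the cleanup handler really does own it when it unlocks it.
            pthread_cond_wait(&cond, &mutex);
        pthread_cleanup_pop(1);  // non-zero: also run the handler, i.e. unlock the mutex
        return nullptr;
    }

If the thread is cancelled while blocked in pthread_cond_wait, the mutex is reacquired before cleanup_unlock runs, so the unlock there is legitimate rather than an error.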