Say the broadcasting thread broadcasts while only 3 threads are waiting, and the 4th thread calls pthread_cond_wait after the broadcasting thread is done with the broadcast. Will the 4th thread ever get out of the waiting condition? And how is the condition variable reset, so that the broadcasting thread can rebroadcast sometime later to the waiting threads?
Will the 4th thread ever get out of the waiting condition?
No, not until there's another broadcast or signal.
And how is the condition variable reset, so that the broadcasting thread can rebroadcast sometime later to the waiting threads?
It's simplest to imagine that everything a condition variable does is synchronized under the mutex associated with it. So at the time you broadcast, everything that is waiting is (somehow) put into a state where it is trying to wake up and take the mutex. Then the broadcasting thread releases the mutex. So it's not really the condition variable that is "reset" when you broadcast; it's the first three threads that are moved from waiting on the condition variable to waiting on the mutex.
In order to wait on the condvar, your 4th thread must first acquire the mutex. This might happen before or after the first three threads manage to wake up and take the mutex, but it clearly happens after the broadcast, so your 4th thread is not in the "trying to wake up" state; it's in the "waiting for a signal or broadcast" state.
Actually it's more complicated than that -- you don't actually have to hold the mutex in order to broadcast a condition variable. So condition variables must include some additional internal synchronization (I don't know the specific details on Linux) to make sure that all threads waiting before the broadcast get their state changed as a single operation.
Normally you might as well hold the mutex to broadcast, though, because you broadcast right after changing something that the waiters want to see, and whatever that shared thing is, it is protected by the mutex. Always doing it that way also avoids a couple of awkward situations.
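Here's a minimal sketch of that pattern, using a plain flag as the shared state (the names ready, waiter and broadcaster are placeholders, not from the question): it is this flag, not the condition variable, that you set before broadcasting and reset before rebroadcasting, and a waiter that tests the flag before waiting will not get stuck even if it arrives after the broadcast.

    #include <pthread.h>

    static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;                 // the shared state, protected by mtx

    void* waiter(void*) {
        pthread_mutex_lock(&mtx);
        // Test the shared state, not "whether a broadcast happened". A thread
        // that arrives after the broadcast sees ready == 1 and never blocks;
        // the loop also covers spurious wakeups.
        while (!ready)
            pthread_cond_wait(&cond, &mtx);   // atomically unlocks mtx while asleep
        pthread_mutex_unlock(&mtx);
        return nullptr;
    }

    void broadcaster(void) {
        pthread_mutex_lock(&mtx);
        ready = 1;                        // change the state the waiters test
        pthread_cond_broadcast(&cond);    // wake everyone currently waiting
        pthread_mutex_unlock(&mtx);
    }

    // To "rebroadcast later", reset the shared state (ready = 0) under the
    // mutex -- not the condition variable -- and then repeat the steps above.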
Will the 4th thread ever get out of the waiting condition?
No, not unless the condition variable is signalled while thread 4 is waiting.
The man page explains:
The pthread_cond_broadcast() call unblocks all threads currently blocked on the specified condition variable cond.
The pthread_cond_signal() and pthread_cond_broadcast() functions have no effect if there are no threads currently blocked on cond.
Related
yield(): https://en.cppreference.com/w/cpp/thread/yield
notify_one(): http://www.cplusplus.com/reference/condition_variable/condition_variable/notify_one/
Case:
Thread A is supposed to finish whatever it is doing and then wake thread B to do its job.
I wrote a notify_one() call in thread A's run() function.
Is it possible that thread A calls notify_one(), yet thread A is scheduled again even though thread B is ready?
Are notify_one() and yield() equivalent of each other?
yield and notify_one are unrelated.
yield is a request (to the OS) by the calling thread to give up the rest of its current time slice. The thread will still be scheduled next time around. Imagine that a thread is allocated 10 ms. If it calls yield after 5 ms, the OS can run another thread; the yielding thread still gets its full 10 ms next time it is its turn to run. The OS does not have to fulfill the request.
condition_variable::notify_one is used in conjunction with condition_variable::wait. If there are any threads waiting, notify_one is guaranteed to wake one of them. If there are no threads waiting, notify_one will do nothing.
Note that when calling wait, a condition variable must be used together with a mutex that protects some shared state (the condition); the waiting thread is waiting for another thread to signal that the condition has become true (see the sketch at the end of this answer).
Is it possible that thread A calls notify_one(), yet thread A is scheduled again even though thread B is ready?
Yes. With Mesa semantics, signaling a waiting thread merely unblocks the other thread; the current thread may continue running until it runs out of time. With Hoare semantics, the signaling thread immediately switches to the waiting thread. However, pretty much all implementations of condition variables use Mesa semantics.
Are notify_one() and yield() equivalent of each other?
"Equivalent" would mean that they do the same thing. That is not the case. I think you mean to ask if they are complimentary, or if they are part of the same synchronization scheme, and the answer is no, as I explained above.
If we use notify_one() to wake a thread, do we still need yield()?
If Thread A just woke Thread C with notify_one and you wish to run Thread C as soon as possible, you can call yield to give up the rest of Thread A's time slice. However, the OS is not required to grant your request, and there may be many threads scheduled before Thread C that you have no control over.
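Here's a minimal, self-contained sketch of that scheme (the names data_ready, thread_a and thread_b are placeholders): Thread B waits on a condition_variable with a predicate, while Thread A changes the shared state under the mutex, calls notify_one, and then may yield as a non-binding scheduling hint.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool data_ready = false;               // the shared condition, protected by m

    void thread_b() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return data_ready; });   // predicate handles spurious wakeups
        std::cout << "B: doing its job\n";
    }

    void thread_a() {
        {
            std::lock_guard<std::mutex> lk(m);
            data_ready = true;                    // change the condition under the mutex
        }
        cv.notify_one();              // unblocks B if (and only if) it is waiting
        std::this_thread::yield();    // optional hint; the OS may keep running A anyway
    }

    int main() {
        std::thread b(thread_b);
        thread_a();
        b.join();
    }

Note that if thread_a happens to run to completion before thread_b even calls wait, nothing is lost: the predicate is already true, so wait returns immediately.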
There is a difference between the two. In your particular case you may be able to use either with the same effect: yield is more generic, while notify_one gives you more control over the flow of the program.
yield: Relinquish the processor so that the OS can schedule some other thread.
notify_one: Signal a condition so that one of the threads waiting on it can resume.
Thread A is supposed to finish whatever it is doing and then wake thread B to do its job.
notify_one is the right choice here where one thread waits on the condition while the other thread can signal it.
Does anyone know of a condition variable class that allows threads waiting on a condition to be notified in the order in which they started waiting?
I'm currently using the boost class condition_variable, but calling condition_variable::notify_one() wakes up an arbitrary thread, not the thread that first called condition_variable::wait(). I also tried adding thread ids to a queue before calling condition_variable::wait(), so that I can call condition_variable::notify_all(), upon which all waiting threads wake up, check the queue, and either wait again or proceed (only one thread, namely the one at the front of the queue). The problem is that calling notify_all() twice does not guarantee that all threads wake up twice, so notifications get lost. Any suggestions?
It is strange that you require threads to be woken in a particular order; that sounds suspicious and may point at a design problem. Anyway, the idea is to keep a queue of condition variables (one per waiting thread) and call notify_one() on the one at the front of the queue. In the waiting thread you need some additional logic to check that it was not spuriously woken from its wait. Again, needing threads to wake up in a particular order is unusual, so you may want to rethink your design.
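Here's a minimal sketch of that idea (OrderedGate and its members are invented names, not an existing class): each waiter queues a condition variable of its own, and the notifier releases the one at the front, which gives FIFO wakeups.

    #include <condition_variable>
    #include <deque>
    #include <memory>
    #include <mutex>

    class OrderedGate {
    public:
        // Block until this waiter reaches the front of the queue and is released.
        void wait() {
            auto self = std::make_shared<Waiter>();
            std::unique_lock<std::mutex> lk(m_);
            queue_.push_back(self);
            // Guard against spurious wakeups: only proceed once explicitly released.
            self->cv.wait(lk, [&] { return self->released; });
        }

        // Release the longest-waiting thread, if any.
        void notify_one_in_order() {
            std::lock_guard<std::mutex> lk(m_);
            if (queue_.empty())
                return;
            auto front = queue_.front();
            queue_.pop_front();
            front->released = true;
            front->cv.notify_one();       // wakes exactly the oldest waiter
        }

    private:
        struct Waiter {
            std::condition_variable cv;
            bool released = false;
        };
        std::mutex m_;
        std::deque<std::shared_ptr<Waiter>> queue_;
    };

The shared_ptr keeps each Waiter alive until both sides are done with it, and all bookkeeping happens under the single internal mutex.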
What happens when a thread is put to sleep by another thread, possibly the main thread, in the middle of its execution?
Assume I have a Producer function. What if the Consumer sleep()s the Producer in the middle of producing one unit?
Suppose the unit is half produced and the thread is then put to sleep(). The integrity of the system may be in trouble.
The thread that sleep is invoked on is put in the idle queue by the thread scheduler and is context switched out of the CPU it is running on, so other threads can take its place.
All context (registers, stack pointer, base pointer, etc.) is saved on the thread's stack, so when it runs next time, it can continue from where it left off.
The OS is constantly doing context switches between threads in order to make your system seem like it's doing multiple things. The OS thread scheduler algorithm takes care of that.
Thread scheduling and threading is a big subject, if you want to really understand it, I suggest you start reading up on it. :)
EDIT: Using sleep for thread synchronization purposes is not advised; you should use proper synchronization mechanisms to tell a thread to wait for other threads, etc.
There is no problem associated with this, unless some state is mutated while the thread sleeps, so it wakes up with a different set of values than before going to sleep.
Threads are switched in and out of execution by the CPU all the time, but that does not affect the overall outcome of their execution, assuming no data races or other bugs are present.
It would be inadvisable for one thread to forcibly and synchronously interfere with the execution of another thread. One thread could send an asynchronous message to another requesting that it reschedule itself in some way, but that would be handled by the other thread when it was in a suitable state to do so.
Assuming they communicate using channels that are thread-safe, nothing bad should happen, as the sleeping thread will wake up eventually and grab data from its task queue or see that some semaphore has been set and read the produced data.
If the threads communicate through unsynchronized shared variables or direct function calls that change state, that's when bad things occur.
I don't know of a way for a thread to forcibly cause another thread to sleep. If two threads are accessing a shared resource (like an input/output queue, which seems likely for your Producer/Consumer example), then both threads may contend for the same lock. The losing thread must wait for the other thread to release the lock, unless the contention is of the "trylock" variety. The waiting thread is placed into a wait queue associated with the lock and is removed from the scheduler's run queue. When the winning thread releases the lock, the code checks whether there are threads still waiting to acquire it; if there are, one is chosen as the new owner, given the lock, and placed back in the scheduler's run queue.
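A small sketch of the two kinds of contention mentioned above, assuming the shared queue is guarded by a single std::mutex (the function names are placeholders):

    #include <mutex>
    #include <thread>

    std::mutex queue_mutex;

    void blocking_access() {
        // The loser of the race is put on the lock's wait queue and sleeps
        // until the winner releases the mutex.
        std::lock_guard<std::mutex> lk(queue_mutex);
        // ... touch the shared queue ...
    }

    void try_access() {
        // The "trylock" variety never blocks; the caller decides what to do on failure.
        if (queue_mutex.try_lock()) {
            // ... touch the shared queue ...
            queue_mutex.unlock();
        } else {
            // back off, do other work, retry later
        }
    }

    int main() {
        std::thread a(blocking_access), b(try_access);
        a.join();
        b.join();
    }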
Situation
I've got a worker thread that's being triggered by a condition_variable to do some work. Basically it is working very well, just as intended.
Problem
However there's one problem: the thread launching the worker thread may get a long enough time slice to feed the data queue and notify the condition_variable before the worker thread is ready, i.e. before it has even reached the condition_variable.wait() line.
Is there any way I can wait for the worker thread until its first call to wait() is made, so that it's guaranteed the work will be processed when I notify the worker?
There's no reason to do that. When the worker thread is ready, it will do the work. It won't need to be triggered by a condition variable because it won't ever wait.
Waiting for something that has already happened is a coding error. If you might ever even think of doing that, you fundamentally don't understand condition variables.
Before a thread waits on a condition variable, it must make sure that there is something it needs to wait for. That thing must be protected by the mutex associated with the condition variable. And when the thread returns from the condition variable wait, it usually must re-test to see if it needs to wait again.
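A minimal sketch of that pattern applied to the situation above (the names work, launcher and worker are placeholders): the launcher may fill the queue and notify before the worker ever reaches wait(), but nothing is lost, because the worker tests the mutex-protected state before deciding whether to wait at all.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    std::queue<int> work;                  // the shared state, protected by m

    void worker() {
        std::unique_lock<std::mutex> lk(m);
        // Even if notify_one fired before we got here, the queue is already
        // non-empty, so the predicate is true and wait() returns immediately.
        cv.wait(lk, [] { return !work.empty(); });
        int item = work.front();
        work.pop();
        (void)item;                        // ... process the item ...
    }

    void launcher() {
        {
            std::lock_guard<std::mutex> lk(m);
            work.push(42);
        }
        cv.notify_one();
    }

    int main() {
        launcher();                        // may well finish before the worker starts
        std::thread t(worker);
        t.join();
    }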
When using boost::condition_variable, ACE_Conditional or pthread_cond_wait directly, is there any overhead for the waiting itself? These are the more specific issues that trouble me:
After the waiting thread is unscheduled, will it be scheduled back before the wait expires and then unscheduled again, or will it stay unscheduled until it is signaled?
Does wait periodically acquire the mutex? In that case, I guess it wastes some CPU time each iteration on system calls to lock and release the mutex. Is it the same as constantly acquiring and releasing a mutex?
Also, how much time passes between the signal and the return from wait?
Afaik, when using semaphores the responsiveness of acquire calls depends on the scheduler's time slice size. How does it work with pthread_cond_wait? I assume this is platform dependent. I am more interested in Linux, but if someone knows how it works on other platforms, that will help too.
And one more question: are there any additional system resources allocated for each condition variable? I won't create 30000 mutexes in my code, but should I worry about 30000 condition variables that all use the same mutex?
Here's what is written in the pthread_cond man page:
pthread_cond_wait atomically unlocks the mutex and waits for the condition variable cond to be signaled. The thread execution is suspended and does not consume any CPU time until the condition variable is signaled.
So from here I'd answer the questions as follows:
The waiting thread won't be scheduled back before the wait was signaled or canceled.
There are no periodic mutex acquisitions. The mutex is reacquired only once before wait returns.
The time that passes between the signal and the return from wait is comparable to the scheduling delay of waking a thread after a mutex release.
Regarding the resources, on the same man page:
In the LinuxThreads implementation, no resources are associated with condition variables, thus pthread_cond_destroy actually does nothing except checking that the condition has no waiting threads.
Update: I dug into the sources of pthread_cond_* functions and the behavior is as follows:
All pthread condition variables on Linux are implemented using futexes.
When a thread calls wait, it is suspended and unscheduled. Its thread id is inserted at the tail of a list of waiting threads.
When a thread calls signal, the thread at the head of the list is scheduled back.
So waking is as efficient as the scheduler; no OS resources are consumed, and the only memory overhead is the size of the waiting list (see the futex_wake function).
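For illustration, here is a heavily simplified, Linux-only sketch of the futex pattern described above; it uses the GCC/Clang __atomic builtins and leaves out the requeueing and error handling a real pthread implementation needs. The idea is a 32-bit word in ordinary user memory: FUTEX_WAIT puts the caller to sleep on it, FUTEX_WAKE wakes one waiter.

    #include <cstdint>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <thread>
    #include <unistd.h>

    static uint32_t flag = 0;             // the futex word lives in plain user memory

    static long futex(uint32_t* addr, int op, uint32_t val) {
        return syscall(SYS_futex, addr, op, val, nullptr, nullptr, 0);
    }

    void waiter() {
        // Sleep in the kernel only while the word still holds the "not ready"
        // value; the kernel re-checks it atomically before actually blocking.
        while (__atomic_load_n(&flag, __ATOMIC_ACQUIRE) == 0)
            futex(&flag, FUTEX_WAIT, 0);
    }

    void waker() {
        __atomic_store_n(&flag, 1, __ATOMIC_RELEASE);
        futex(&flag, FUTEX_WAKE, 1);      // wake at most one thread waiting on the word
    }

    int main() {
        std::thread t(waiter);
        waker();
        t.join();
    }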
You should only call pthread_cond_wait if the variable is already in the "wrong" state. Since it always waits, there is always the overhead associated with putting the current thread to sleep and switching.
When the thread is unscheduled, it is unscheduled. It should not use any resources, though of course an OS can in theory be implemented badly. The thread is allowed to re-acquire the mutex, and even to return, before the signal (which is why you must re-check the condition), but the OS will be implemented so that this doesn't impact performance much, if it happens at all. It doesn't happen spontaneously, but rather in response to another, possibly unrelated, signal.
30000 mutexes shouldn't be a problem, but some OSes might have a problem with 30000 sleeping threads.