Round-robin scheduling and deadlock

Does round-robin scheduling ever cause deadlock? What happens if the CPU scheduling is based on round-robin and at one point in the schedule two different processes request the same file that no process owns? Would that cause deadlock or would the file be given to the process that is supposed to execute in the next step of the schedule?

The case you describe will not cause deadlock. Locks are atomic, so only one process can hold a given lock at a time. Whichever process has control at that moment will acquire the lock, and the other process will simply block (or have its request fail) until the file is released; that alone is not a deadlock.
However, in the more general case, deadlock can occur under RR scheduling. Consider two processes and two locks. Process A acquires lock 1 and then yields the processor to Process B. Process B then acquires lock 2 and attempts to acquire lock 1. Because lock 1 belongs to Process A, Process B goes to sleep. Process A runs again and attempts to acquire lock 2. Lock 2 still belongs to Process B, so neither process can move forward and you have a deadlock.
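To make that interleaving concrete, here is a minimal C++ sketch of the scenario (the names thread_a, thread_b, m1 and m2 are illustrative; whether a particular run actually deadlocks depends on how the scheduler interleaves the two threads):

    // Two threads acquiring two mutexes in opposite order: with unlucky
    // scheduling, A holds m1 and waits for m2 while B holds m2 and waits
    // for m1, and neither can make progress.
    #include <mutex>
    #include <thread>

    std::mutex m1, m2;

    void thread_a() {
        std::lock_guard<std::mutex> l1(m1);   // A acquires lock 1
        // ... preempted here, B runs and takes lock 2 ...
        std::lock_guard<std::mutex> l2(m2);   // A blocks: lock 2 is held by B
    }

    void thread_b() {
        std::lock_guard<std::mutex> l2(m2);   // B acquires lock 2
        std::lock_guard<std::mutex> l1(m1);   // B blocks: lock 1 is held by A
    }

    int main() {
        std::thread a(thread_a), b(thread_b);
        a.join();   // may hang forever if the two threads deadlock
        b.join();
    }

Acquiring both mutexes in the same order in every thread, or locking them together with std::scoped_lock(m1, m2), removes the circular wait and hence the deadlock.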

RR scheduling can also cause deadlock. For example, suppose process A requests and gets a printer but then exceeds its time quantum and is moved to the tail of the ready queue, while process B, whose CPU burst happens to equal its time quantum, finishes its slice and then requests the printer that process A is still holding. If process A, waiting at the queue tail, in turn needs memory that process B still holds, then a deadlock has occurred.

Related

boost: how to monitor status of mutex and force release on deadlock

I am trying to use the shared_lock and unique_lock classes from boost to implement a basic reader-writer lock on a resource. However, some of the threads accessing the resource have the potential to simply crash. I want to create another process that, given a mutex, monitors the mutex and keeps track of which processes locked the resource and how long each process has held the lock. The process will also force a process to release its lock if it has held the lock for more than a given period of time.
Even though the boost locks are all scoped locks and will automatically unlock once they go out of scope, that still doesn't solve my problem if the server crashes, i.e. the process receives SIGSEGV and is killed. The killed process will not call any of its destructors and thus will not release any of its held resources.
One potential solution is to somehow put a timer on the lock so that the process is forced to release the lock after a given period of time. Even though this goes against the concept of locking, it works in our case because we can guarantee that if any process holds the lock for more than, let's say, 5 minutes, then it's pretty safe to say that the process has either been killed or there is a deadlock situation.
Any suggestions on how to approach this problem are greatly appreciated!
My previous thread was closed due to "possible duplicate", but the stated duplicate question does not answer my question.
Putting aside whether this is a good idea or not, you could roll your own mutex implementation that utilizes shared memory to store a timestamp, a process identifier, and a thread identifier.
When a thread wants to take the lock it will need to find an empty slot in the shared memory and use an atomic compare-and-exchange operation, such as InterlockedCompareExchange on Windows, to set the process id if the current value is the empty value. If the exchange doesn't take effect it will need to start over. After getting the process id set, the thread will need to repeat the process for the thread identifier, and then do the same thing with the timestamp (it can't just set it, though; it still needs to be done atomically).
The thread will then need to check all of the other filled slots to determine whether it has the lowest timestamp. If not, it needs to note the slot that has the lowest timestamp and poll it until it is either emptied, has a higher timestamp, or has timed out. Then rinse and repeat until the thread's own slot has the oldest timestamp, at which point the thread has acquired the lock.
If another slot has been timed out the thread should trigger the timeout handler (which may kill the other process or simply raise an exception in the thread with the lock) and then use atomic test and set operations to clear the slot.
When the thread with the lock unlocks, it then uses atomic operations to clear its slot.
Update: ties among the lowest timestamps would also need to be dealt with to avoid a possible deadlock, and the handling of that would need to avoid creating a race condition.
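A hypothetical sketch of just the slot-claiming step described above (the names Slot, try_claim_slot and the 16-slot table are made up, the shared-memory mapping itself is omitted, and std::atomic stands in for InterlockedCompareExchange, assuming the atomics are lock-free so they work across processes):

    #include <atomic>
    #include <chrono>
    #include <cstdint>

    struct Slot {
        std::atomic<std::uint32_t> pid{0};        // 0 == empty slot
        std::atomic<std::uint32_t> tid{0};
        std::atomic<std::int64_t>  timestamp{0};  // e.g. ms since epoch
    };

    constexpr int kSlots = 16;   // the array is assumed to live in shared memory

    // Try to claim one of the slots for (pid, tid); returns the slot index
    // or -1 if every slot is occupied.  The timestamp is written last so
    // other processes only compare against fully initialised slots.
    int try_claim_slot(Slot* slots, std::uint32_t pid, std::uint32_t tid) {
        for (int i = 0; i < kSlots; ++i) {
            std::uint32_t expected = 0;
            if (slots[i].pid.compare_exchange_strong(expected, pid)) {
                slots[i].tid.store(tid);
                auto now = std::chrono::duration_cast<std::chrono::milliseconds>(
                    std::chrono::system_clock::now().time_since_epoch()).count();
                slots[i].timestamp.store(now);
                return i;
            }
        }
        return -1;  // caller should back off and retry
    }

The ordering, timeout and tie-breaking logic from the answer would then sit on top of this claiming step.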
@Arno: I disagree that the software simply needs to be so robust that it never crashes in the first place. Fault-tolerant systems (think along the lines of five nines of availability) need to have checks in place to recover in the face of sudden termination of critical processes. Something along the lines of pthread_mutexattr_*robust.
Saving the owner PID and the last-used timestamp for the mutex should help in recovery.
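For illustration, a minimal sketch of that robust-mutex route (this assumes a POSIX.1-2008 system; for cross-process use the mutex would additionally need PTHREAD_PROCESS_SHARED and a shared-memory mapping, which is omitted here, and the function names are made up):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock;

    void init_robust_mutex(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    int acquire(void) {
        int rc = pthread_mutex_lock(&lock);
        if (rc == EOWNERDEAD) {
            // The previous owner died while holding the mutex: recover the
            // protected data, then mark the mutex consistent again.
            fprintf(stderr, "owner died, recovering state\n");
            pthread_mutex_consistent(&lock);
            rc = 0;
        }
        return rc;
    }

With this, a crashed lock holder no longer blocks everyone forever: the next locker is told the owner died and gets a chance to repair the shared state.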

sleeping a thread in the middle of execution

What happens when a thread is put to sleep by another thread, possibly the main thread, in the middle of its execution?
Assume I have a Producer function. What if the Consumer sleep()s the Producer in the middle of producing one unit?
Suppose the unit is half produced and the thread is then put to sleep(). The integrity of the system may then be at risk.
The thread that sleep is invoked on is put in the idle queue by the thread scheduler and is context-switched off the CPU it is running on, so other threads can take its place.
All context (registers, stack pointer, base pointer, etc.) is saved on the thread stack, so when it is run next time, it can continue from where it left off.
The OS is constantly doing context switches between threads in order to make your system seem like it's doing multiple things. The OS thread scheduler algorithm takes care of that.
Thread scheduling and threading is a big subject, if you want to really understand it, I suggest you start reading up on it. :)
EDIT: Using sleep for thread synchronization purposes is not advised; you should use proper synchronization mechanisms to tell a thread to wait for other threads, etc.
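As an illustration of what such a mechanism might look like (a hedged sketch, not the asker's actual code; producer, consumer and units are made-up names), the consumer below waits on a condition variable until the producer has published a complete unit, so nothing is ever observed half-produced:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    std::queue<int> units;          // holds completed units only

    void producer() {
        for (int i = 0; i < 10; ++i) {
            int unit = i * i;       // "produce" a unit outside the lock
            {
                std::lock_guard<std::mutex> lk(m);
                units.push(unit);   // publish only fully produced units
            }
            cv.notify_one();
        }
    }

    void consumer() {
        for (int consumed = 0; consumed < 10; ++consumed) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !units.empty(); });  // sleeps until notified
            int unit = units.front();
            units.pop();
            lk.unlock();
            // ... use unit ...
            (void)unit;
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
    }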
There is no problem associated with this, unless some state is mutated while the thread sleeps, so it wakes up with a different set of values than before going to sleep.
Threads are switched in and out of execution by the CPU all the time, but that does not affect the overall outcome of their execution, assuming no data races or other bugs are present.
It would be unadvisable for one thread to forcibly and synchronously interfere with the execution of another thread. One thread could send an asynchronous message to another requesting that it reschedule itself in some way, but that would be handled by the other thread when it was in a suitable state to do so.
Assuming they communicate using channels that are thread-safe, nothing bad should happen, as the sleeping thread will wake up eventually and grab data from its task queue, or see that some semaphore has been set and read the produced data.
If the threads communicate using nonvolatile variables or direct function calls that change state, that's when Bad Things occur.
I don't know of a way for a thread to forcibly cause another thread to sleep. If two threads are accessing a shared resource (like an input/output queue, which seems likely for your Producer/Consumer example), then both threads may contend for the same lock. The losing thread must wait for the other thread to release the lock if the contention is not of the "trylock" variety. The thread that waits is placed into a waiting queue associated with the lock and is removed from the scheduler's run queue. When the winning thread releases the lock, the code checks the queue to see if there are threads still waiting to acquire it. If there are, one is chosen as the winner, given the lock, and placed back in the scheduler's run queue.
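For reference, a small C++ sketch of the two contention styles mentioned above (queue_lock and the function names are made up, and the shared queue itself is omitted): a blocking lock parks the losing thread on the lock's wait queue, while a trylock returns immediately so the caller can do something else.

    #include <mutex>

    std::mutex queue_lock;

    void blocking_path() {
        queue_lock.lock();          // loser is parked until the owner unlocks
        // ... touch the shared queue ...
        queue_lock.unlock();
    }

    void non_blocking_path() {
        if (queue_lock.try_lock()) {        // "trylock" variety: never waits
            // ... touch the shared queue ...
            queue_lock.unlock();
        } else {
            // lock was busy; do other work and retry later
        }
    }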

What happens when pthreads wait in mutex_lock/cond_wait?

I have a program that should get the maximum out of my CPU.
It is multithreaded via pthreads. The threads do their job well, apart from the fact that they "only" get my cores to about 60% load, which is not enough in my opinion.
I am searching for the reason and am asking myself (and hereby you) whether the blocking functions mutex_lock/cond_wait are candidates.
What happens when a thread cannot continue in such a function?
Does pthread switch to another thread it handles, or
does the thread yield its time to the system? And if the latter is the case, can I change this behavior?
Regards,
Nobody
More Information
The setting is one main thread that fills the task pool and countless workers that fetch jobs from there and wait on a condition variable that is signaled via broadcast when a serialized calculation is done. They go on with the values from this calculation until they are done, deliver their mail and fetch the next job...
On a typical modern pthreads implementation, each thread is managed by the kernel not unlike a separate process. Any blocking call like pthread_mutex_lock or pthread_cond_wait (but also, say, read) will yield its time to the system. The system will then find another eligible thread to schedule, whether in your process or another process, and run it.
If your program is only taking 60% of the CPU, it is more likely blocked on I/O than on pthread operations, unless you have done something way too granular with your pthread operations.
If a thread is waiting on a mutex/condition, it doesn't use resources (well, uses just a tiny amount). Whenever the thread enters waiting state, control switches to other threads. When the mutex is released (or condition variable signalled), the thread wakes up and may acquire the mutex (if no other thread grabs it first), and continue to run. If however some other thread acquires the mutex (this can happen if several threads are waiting for it), the thread returns to sleeping state.
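As a concrete picture of that (a hedged sketch: pool_lock, pool_cv, pending_jobs and publish_jobs are made-up names standing in for the real task pool), each worker below sleeps inside pthread_cond_wait, consuming essentially no CPU, until the main thread publishes work and broadcasts:

    #include <pthread.h>

    pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  pool_cv   = PTHREAD_COND_INITIALIZER;
    int pending_jobs = 0;                 // stands in for the real task pool

    void *worker(void *arg) {
        (void)arg;
        for (;;) {                        // sketch: no shutdown handling
            pthread_mutex_lock(&pool_lock);
            while (pending_jobs == 0)     // re-check after every wakeup
                pthread_cond_wait(&pool_cv, &pool_lock);  // blocks without spinning
            --pending_jobs;               // "fetch" a job
            pthread_mutex_unlock(&pool_lock);
            // ... do the actual work for this job here ...
        }
        return nullptr;
    }

    // Main thread, after adding jobs to the pool:
    void publish_jobs(int n) {
        pthread_mutex_lock(&pool_lock);
        pending_jobs += n;
        pthread_mutex_unlock(&pool_lock);
        pthread_cond_broadcast(&pool_cv); // wake all waiting workers
    }

If the cores only reach about 60%, it is worth measuring how much time the workers spend blocked in the wait (or in I/O) versus doing useful work, for example with a profiler, before blaming the locking primitives themselves.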

Conditional wait overhead

When using boost::condition_variable, ACE_Conditional or pthread_cond_wait directly, is there any overhead for the waiting itself? These are the more specific issues that trouble me:
After the waiting thread is unscheduled, will it be scheduled back before the wait expires and then unscheduled again or it will stay unscheduled until signaled?
Does wait periodically re-acquire the mutex? In that case, I guess it wastes some CPU time each iteration on system calls to lock and release the mutex. Is it the same as constantly acquiring and releasing a mutex?
Also, then, how much time passes between the signal and the return from wait?
AFAIK, when using semaphores the responsiveness of the acquire call depends on the scheduler's time-slice size. How does it work with pthread_cond_wait? I assume this is platform dependent. I am more interested in Linux, but if someone knows how it works on other platforms, that will help too.
And one more question: are there any additional system resources allocated for each conditional? I won't create 30000 mutexes in my code, but should I worry about 30000 conditionals that use the same one mutex?
Here's what is written in the pthread_cond man page:
pthread_cond_wait atomically unlocks the mutex and waits for the condition variable cond to be signaled. The thread execution is suspended and does not consume any CPU time until the condition variable is signaled.
So from here I'd answer to the questions as following:
The waiting thread won't be scheduled back before the wait was signaled or canceled.
There are no periodic mutex acquisitions. The mutex is reacquired only once before wait returns.
The time that passes between the signal and the wait return is similar to that of thread scheduling due to mutex release.
Regarding the resources, on the same man page:
In the LinuxThreads implementation, no resources are associated with condition variables, thus pthread_cond_destroy actually does nothing except checking that the condition has no waiting threads.
Update: I dug into the sources of pthread_cond_* functions and the behavior is as follows:
All the pthread conditionals in Linux are implemented using futex.
When a thread calls wait it is suspended and unscheduled. The thread id is inserted at the tail of a list of waiting threads.
When a thread calls signal the thread at the head of the list is scheduled back.
So, the waking is as efficient as the scheduler, no OS resources are consumed and the only memory overhead is the size of the waiting list (see futex_wake function).
You should only call pthread_cond_wait if the variable is already in the "wrong" state. Since it always waits, there is always the overhead associated with putting the current thread to sleep and switching.
When the thread is unscheduled, it is unscheduled. It should not use any resources, but of course an OS can in theory be implemented badly. It is allowed to re-acquire the mutex, and even to return, before the signal (which is why you must double-check the condition), but the OS will be implemented so this doesn't impact performance much, if it happens at all. It doesn't happen spontaneously, but rather in response to another, possibly-unrelated signal.
30000 mutexes shouldn't be a problem, but some OSes might have a problem with 30000 sleeping threads.
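To tie those two answers together, here is a tiny sketch of the usage pattern they imply (ready, wait_until_ready and set_ready are made-up names): test the predicate before waiting, so you avoid the sleep/switch overhead when the state is already "right", and re-test it after every return from wait, since wait may return before the state actually changed.

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    int ready = 0;      // the "variable" whose state we wait for

    void wait_until_ready(void) {
        pthread_mutex_lock(&m);
        while (!ready)                    // checks first, re-checks after wakeup
            pthread_cond_wait(&c, &m);    // unlocks m, sleeps, relocks m on return
        pthread_mutex_unlock(&m);
    }

    void set_ready(void) {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_mutex_unlock(&m);
        pthread_cond_signal(&c);
    }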

Critical Section OwnerShip

If a critical section lock is currently owned by a thread and other threads are trying to own this very lock, then all the threads other than the thread which owns the lock enter into a wait queue for the lock to be released.
When the initial owning thread releases the critical section lock then one of the threads in the waiting queue will be selected to run and given the critical section lock allowing the thread to run.
How is the next thread to run selected, given that it is not guaranteed that the thread that arrived first will become the owner of the lock?
If threads are not served in FIFO fashion, how is the next owner thread selected from the wait queue?
The next thread to get the critical section is chosen non-deterministically. The only thing that you should be concerned about is whether the critical section is implemented fairly, i.e., that no thread waits infinitely long to get its turn. If you need to run threads in specific order, you have to implement this yourself.
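One way to "implement this yourself" when strict FIFO hand-off really matters is a simple ticket lock. The sketch below (an illustration built from standard C++ primitives, not how CRITICAL_SECTION works internally; TicketLock is a made-up name) hands the lock to waiters in exactly the order they asked for it:

    #include <condition_variable>
    #include <mutex>

    class TicketLock {
    public:
        void lock() {
            std::unique_lock<std::mutex> lk(m_);
            unsigned long my_ticket = next_ticket_++;          // take a number
            cv_.wait(lk, [&] { return my_ticket == now_serving_; });
        }
        void unlock() {
            std::lock_guard<std::mutex> lk(m_);
            ++now_serving_;                                     // next in line
            cv_.notify_all();   // wake waiters; only the matching ticket proceeds
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        unsigned long next_ticket_ = 0;
        unsigned long now_serving_ = 0;
    };

The price of this fairness is that every unlock wakes all waiters so they can check their ticket, which is exactly the kind of overhead the OS avoids by not guaranteeing strict FIFO.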
The next thread is chosen in quasi-FIFO order. However, many system-level factors may cause this to appear non-deterministic:
From Concurrent Programming On Windows by Joe Duffy: (Chapter 5)
... When a fixed number of threads needs to be awakened, the OS uses a semi-fair algorithm to choose between them: as threads wait they are placed in a FIFO queue that the awakening logic consults when determining which thread to wake up. Threads that have been waiting for the longest time are thus preferred over threads that have been waiting less time. Although the OS does use a strict FIFO data structure to manage wait lists; ... this ordering is regularly perturbed by other system code and is not reliable.
POSIX threads do use a FIFO queue.
What about the thread scheduling algorithm? Don't the threads in the waiting state get priority according to the thread scheduling algorithm?
Please correct me if I am wrong.