If a counting semaphore is initialized to n, does it mean n processes can run their critical sections concurrently?
Essentially, yes.
Remember that a counting semaphore only blocks when the count would go negative after decrementing, so it can be decremented n times before blocking. Since every decrement must eventually be matched by an increment, and assuming each process decrements the semaphore only once (by far the most common case), then yes, n processes will be able to run their critical sections at the same time.
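A minimal sketch of this with C++20's std::counting_semaphore (the limit of 3 and the thread count are arbitrary): three of the six threads enter concurrently, the rest block until a slot is released.

#include <chrono>
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

std::counting_semaphore<3> sem(3); // up to 3 threads in the critical section

void worker(int id) {
    sem.acquire(); // decrement; blocks once the count reaches 0
    std::printf("thread %d in critical section\n", id);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    sem.release(); // increment; may wake one blocked thread
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 6; ++i)
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();
}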
No.
If n > 0, it means the counting semaphore can be taken exactly n times before a requesting context blocks and waits for the counting semaphore to become available (assuming nobody gives it back during that period).
If n <= 0, it means the counting semaphore must be given (1 - n) times before anyone can successfully take it; for example, a semaphore sitting at -2 needs three gives before the first blocked taker proceeds.
Controlling access to a critical section is typically better handled by a mutex.
Yes. If you have initialized the semaphore to N, then sem_wait will not block any thread until it has already been called N times without matching posts; only then is the count exhausted (negative, in the classic textbook formulation), and any thread calling sem_wait after that blocks.
For a critical section you have to use a binary semaphore or a mutex.
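To make that concrete, here is a small sketch with a POSIX semaphore (N = 2 is arbitrary; error checking omitted):

#include <cstdio>
#include <semaphore.h>

int main() {
    sem_t sem;
    sem_init(&sem, 0, 2); // N = 2: two waits succeed without blocking

    sem_wait(&sem);       // count 2 -> 1, returns immediately
    sem_wait(&sem);       // count 1 -> 0, returns immediately

    // A third sem_wait would block, so probe with sem_trywait instead.
    if (sem_trywait(&sem) == -1)
        std::printf("a third wait would block here\n");

    sem_post(&sem);       // count back to 1; would wake a blocked waiter
    sem_destroy(&sem);
}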
I'm trying to implement a gather function that waits for N processes to continue.
struct sembuf operations[2];
operations[0].sem_num = 0;
operations[0].sem_op = -1; // wait() or p()
operations[0].sem_flg = 0;
operations[1].sem_num = 0;
operations[1].sem_op = 0;  // wait until it becomes 0
operations[1].sem_flg = 0;
semop(this->id, operations, 2);
Initially, the value of the semaphore is N.
The problem is that it freezes even when all processes have executed the semop call. I think it is related to the fact that the two operations are applied atomically (but I don't know exactly what that means), and I don't understand why it doesn't work.
Is the code supposed to subtract 1 from the semaphore and then block the process if it's not the last one, or is it supposed to act in a different way?
It's hard to see what the code does without the whole function and algorithm.
By the looks of it, you apply two actions in a single atomic operation: subtract 1 from the semaphore and wait for it to become 0.
There could be several issues if all processes freeze: the semaphore is not shared between all processes, you got the number of processes wrong when initializing the semaphore, or one process leaves the barrier, increases the semaphore at a later point, and returns to the barrier.
I suggest debugging to confirm that all processes actually reach the barrier, and maybe even printing each time you perform any action on the semaphore (preferably to the same console).
As for what an atomic action is: it is a single operation, or a sequence of operations, that is guaranteed not to be interrupted while being executed. This means no other process/thread will interfere with the action.
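Note that semop applies the whole array all-or-nothing: if the wait-for-zero part cannot be satisfied, the decrement is rolled back as well, so with this combined call no process may ever actually lower the value. One common fix, sketched below, is to issue the two operations as separate semop calls (assuming semaphore 0 of the set semid was initialized to N; error checking omitted):

#include <sys/sem.h>

// One-shot gather/barrier: each of the N processes calls this once.
void gather(int semid) {
    struct sembuf dec = {0, -1, 0};  // "I have arrived": applied immediately
    semop(semid, &dec, 1);

    struct sembuf zero = {0, 0, 0};  // block until the value reaches 0,
    semop(semid, &zero, 1);          // i.e. until all N have decremented
}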
I'm trying to implement a multi-in multi-out interthread channel class. I have three mutexes: full, locked when the buffer is full; empty, locked when the buffer is empty; and th, locked while anyone else is modifying the buffer. My single-IO program looks like this:
operator<<(...){
    full.lock();     // blocks while the buffer is full
    full.unlock();   // whether it was locked or not, unlock it
    th.lock();
    ...
    empty.unlock();  // the buffer won't be empty now
    if(...) full.lock(); // it might be full now
    th.unlock();
}
operator>>(...){
    // symmetric
}
This works totally fine for single IO. But with multiple IO, when a consumer thread unlocks full, all provider threads race past it, only one obtains th, and the buffer might be full again because of that single thread, while there is no full check anymore. I could add another full.lock(), of course, but this is endless. Is there any way to lock full and th at the same time? I did see a similar question about this, but I don't see how lock ordering is the problem here.
Yes, use std::lock(full, th); this can avoid some deadlocks.
for example:
thread1:
full.lock();
th.lock();
thread2:
th.lock();
full.lock();
this could cause a deadlock, but the following doesn't:
thread1:
std::lock(full, th);
thread2:
std::lock(th, full);
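A self-contained sketch of the same idea (the mutex names follow the question; C++17's std::scoped_lock is a convenience wrapper around std::lock):

#include <mutex>
#include <thread>

std::mutex full, th;

void thread1() {
    std::scoped_lock lk(full, th);  // locks both without risking deadlock
    // ... critical section; both unlock on scope exit ...
}

void thread2() {
    std::lock(th, full);            // same guarantee, pre-C++17 style
    std::lock_guard<std::mutex> g1(th, std::adopt_lock);
    std::lock_guard<std::mutex> g2(full, std::adopt_lock);
    // ... critical section ...
}

int main() {
    std::thread a(thread1), b(thread2);
    a.join();
    b.join();
}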
No, you can't atomically lock two mutexes.
Additionally, it looks like you are locking a mutex in one thread and then unlocking it in another. That's not allowed.
I suggest switching to condition variables for this problem. Note that it's perfectly fine to have one mutex associated with multiple condition variables.
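For instance, here is a minimal sketch of the channel with one mutex and two condition variables (the names and the fixed capacity are placeholders, not from the question):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class Channel {
    std::mutex m;                       // one mutex ...
    std::condition_variable not_full;   // ... shared by two
    std::condition_variable not_empty;  //     condition variables
    std::queue<T> buf;
    const std::size_t cap = 16;         // arbitrary capacity
public:
    void push(T v) {
        std::unique_lock<std::mutex> lk(m);
        not_full.wait(lk, [&] { return buf.size() < cap; });
        buf.push(std::move(v));
        not_empty.notify_one();  // wake one waiting consumer
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, [&] { return !buf.empty(); });
        T v = std::move(buf.front());
        buf.pop();
        not_full.notify_one();   // wake one waiting producer
        return v;
    }
};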
No, you cannot lock two mutexes at once, but you can use a std::condition_variable for the waiting threads and invoke notify_one when you are done.
The functionality you are trying to achieve would require something similar to System V semaphores, where a group of operations on semaphores can be applied atomically. In your case you would have 3 semaphores:
semaphore 1 - locking, initialized to 0
semaphore 2 - counter of available data, initialized to 0
semaphore 3 - counter of available buffers, initialized to however many buffers you have
then the push operation would apply this group to lock:
check semaphore 1 is 0
increase semaphore 1 by +1
increase semaphore 2 by +1
decrease semaphore 3 by -1
then, to unlock:
decrease semaphore 1 by -1
To pull data, the first group would be changed to:
check semaphore 1 is 0
increase semaphore 1 by +1
decrease semaphore 2 by -1
increase semaphore 3 by +1
The unlock is the same as before. Using mutexes, which are a special case of semaphores, most probably would not solve your problem this way: first of all they are binary, i.e. they only have 2 states, but more importantly the API does not provide group operations on them. So either find a semaphore implementation for your platform, or use a single mutex with condition variable(s) to signal waiting threads that data or a buffer is available.
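A sketch of the push-side groups with System V calls (semaphore numbers 0, 1, 2 here correspond to semaphores 1, 2, 3 above; semid is assumed to be an already-initialized semaphore set, and error checking is omitted):

#include <sys/sem.h>

// Atomically: take the lock, record one more available data item,
// and consume one free buffer (blocking if the lock is held or no
// buffer is free -- in that case none of the operations is applied).
void push_lock(int semid) {
    struct sembuf group[4] = {
        {0, 0, 0},   // check semaphore 0 (the lock) is 0
        {0, +1, 0},  // increase it by +1: we now hold the lock
        {1, +1, 0},  // increase available data by +1
        {2, -1, 0},  // decrease available buffers by -1
    };
    semop(semid, group, 4);
}

void unlock(int semid) {
    struct sembuf dec = {0, -1, 0};  // decrease semaphore 0 by -1
    semop(semid, &dec, 1);
}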
Prove or Disprove the correctness of the following semaphore.
Here are my thoughts on this.
Well, if someone implements it so that wait runs before signal, there will be a deadlock. The program will call wait, decrement count, enter the count < 0 branch, and wait at gate. Because it is waiting at gate, it cannot proceed to the signal that comes right after the wait. So in that case, this might imply that the semaphore is incorrect.
However, if we assume two processes are running, one running wait first and the other running signal first, then if the first process runs wait and blocks at wait(gate), the other process can run signal and release the process that was blocked. Continuing with this scheme, the algorithm would be valid and not result in a deadlock.
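The implementation under discussion is not reproduced here, and the answer below refers to line labels p4, p5, and p10. A common textbook construction of a general semaphore from binary semaphores that is consistent with those labels looks roughly like this (a reconstruction for orientation, not the original code; it assumes the classic semantics where signaling an already-free binary semaphore is a lost no-op):

#include <condition_variable>
#include <mutex>

// Binary semaphore with the semantics assumed in this discussion:
// signaling while already free is a lost no-op.
class BinSem {
    std::mutex m;
    std::condition_variable cv;
    bool free;
public:
    explicit BinSem(bool f) : free(f) {}
    void P() {  // wait
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return free; });
        free = false;
    }
    void V() {  // signal
        std::lock_guard<std::mutex> lk(m);
        free = true;  // no further effect if it was already free
        cv.notify_one();
    }
};

// The general semaphore under analysis (reconstructed).
class GenSem {
    int count;
    BinSem S{true};     // protects count
    BinSem gate{false}; // blocks waiting threads
public:
    explicit GenSem(int n) : count(n) {}
    void wait() {
        S.P();
        count--;
        if (count < 0) {
            S.V();      // p4: the "ready-to-wait" window opens here
            gate.P();   // p5
        } else {
            S.V();
        }
    }
    void signal() {
        S.P();
        count++;
        if (count <= 0)
            gate.V();   // p10: lost if gate is already free
        S.V();
    }
};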
The given implementation follows these principles:
The binary semaphore S protects the count variable from concurrent access.
If non-negative, count reflects the number of free resources of the general semaphore. Otherwise, the absolute value of count reflects the number of threads which wait (p5) or are ready-to-wait (between p4 and p5) on the binary semaphore gate.
Every signal() call increments count and, if its previous value was negative, signals the binary semaphore gate.
But because of the possibility of the ready-to-wait state, the given implementation is incorrect:
Assume thread#1 calls wait() and is currently in the ready-to-wait state. Assume another thread#2 also calls wait() and is currently in the ready-to-wait state too.
Assume thread#3 calls signal() at this moment. Because count is negative (-2), the thread performs all operations including p10 (signal(gate)). Because nobody is waiting on gate at the moment, gate becomes free.
Assume another thread#4 calls signal() at this moment. Because count is still negative (-1), this thread also performs all operations including p10. But now gate is already free, so signal(gate) is a no-op here, and we have missed a signal event: only one of thread#1 and thread#2 will continue after executing p5 (wait(gate)). The other thread will wait forever.
Without the possibility of the ready-to-wait state (that is, if signal(S) and wait(gate) were executed atomically), the implementation would be OK.
Using the C++11 standard library (with the eventual help of boost::thread), is there a clean way to implement an N readers - 1 producer solution where all the readers, once notified at the same time by the producer (with std::condition_variable::notify_all(), for example), are guaranteed to enter their critical sections before the producer eventually enters its critical section a second time? In other words, all the notified readers must observe the same state of the shared resource: once the producer notifies the N readers, it cannot modify the shared resource until all N readers have finished their reading. Note that boost::barrier is not really what I need, as I do not know N in advance; N may vary from one notification to another.
You could use atomic counters, with some polling from the producer thread.
When the counter reaches either N or 0 (it's up to you), the producer gets to work and produces whatever it needs to produce. Before notifying the condition variable, the producer sets the counter to 0 (or N).
When a reader is done, it simply increases (or decreases) the counter.
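A minimal sketch of that scheme (decrementing variant; all names are placeholders, and the notification itself is elided):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<int> pending{0};  // readers still inside their critical section

void reader_done() {          // each reader calls this when it finishes
    pending.fetch_sub(1);
}

void producer_round(int n_readers) {
    pending.store(n_readers); // reset the counter before notifying
    // ... notify_all() on the condition variable here ...
    while (pending.load() > 0)  // poll until every reader is done
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    // ... now it is safe to modify the shared resource again ...
}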
What you describe is called a barrier.
Typically a thread barrier (e.g. boost::barrier) is initialized with an integer representing the number of threads that must call boost::barrier::wait; all threads wait at that point until the condition is met, and then all threads continue.
Is it possible to implement a thread barrier that can have its 'waitCount' set after it has been initialized?
Or is there an equivalent approach that will give the same behaviour?
i.e. after we have done:
int numWaiting = 2;
boost::barrier b( numWaiting );
There are no methods to set a new numWaiting value.
The reason for wanting this is basically that the number of threads available for a process may increase AFTER the barrier was initialized but BEFORE the wait condition has been met.
You can add such behavior to a simple barrier implementation based on boost::mutex.
See code there: http://code.google.com/p/fengine/source/browse/trunk/src/engine/misc/barrier.hpp
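A minimal sketch of such a barrier using the standard library instead (the set_count method and all names are illustrative, not taken from the linked code):

#include <condition_variable>
#include <mutex>

// A barrier whose thread count can be changed before it is met.
class ResizableBarrier {
    std::mutex m;
    std::condition_variable cv;
    unsigned expected, arrived = 0, generation = 0;

    void release_all() {  // caller must hold the mutex
        arrived = 0;
        ++generation;
        cv.notify_all();
    }
public:
    explicit ResizableBarrier(unsigned n) : expected(n) {}

    // Adjust the count; threads may already be waiting.
    void set_count(unsigned n) {
        std::lock_guard<std::mutex> lk(m);
        expected = n;
        if (arrived >= expected)  // the new count may already be satisfied
            release_all();
    }

    void wait() {
        std::unique_lock<std::mutex> lk(m);
        unsigned gen = generation;
        if (++arrived >= expected)
            release_all();
        else
            cv.wait(lk, [&] { return gen != generation; });
    }
};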