How can you use a semaphore to create a special critical section that allows two threads to be executing inside instead of the usual one thread?
In pseudocode it looks like so:
s = Semaphore(2) # max 2 possible threads accessing the critical section
Each thread then uses the semaphore to serialize access:
s.decrement() # may block
# enter critical section
s.increment()
A useful resource is The Little Book of Semaphores.
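The pseudocode above can be made runnable with Python's threading.Semaphore. This sketch adds an illustrative high-water-mark counter (inside, max_inside, state_lock are instrumentation, not part of the pattern) just to observe that no more than two threads are ever inside at once:

```python
import threading

s = threading.Semaphore(2)      # at most 2 threads inside at once
inside = 0                      # threads currently in the section
max_inside = 0                  # high-water mark, to observe the limit
state_lock = threading.Lock()   # protects the two counters above

def worker():
    global inside, max_inside
    s.acquire()                 # decrement; blocks while 2 threads are inside
    with state_lock:
        inside += 1
        max_inside = max(max_inside, inside)
    # ... work inside the two-thread critical section ...
    with state_lock:
        inside -= 1
    s.release()                 # increment; lets a waiting thread in

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all eight workers finish, max_inside can never exceed 2, which is exactly the property the Semaphore(2) enforces.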
CASE I:
Scenario: I have two different methods, each accessing a common global resource. Method1() is accessed by ThreadA, and Method2() by many other threads, but not by ThreadA.
Requirement: If ThreadA is accessing Method1(), no other thread should access Method2().
Solution: Using a common critical section object in both methods will prevent any conflict over the global resource.
For example:
Method1()
{
    EnterCriticalSection(&cs);
    // Access the common resource (read/write), let's say a global queue
    LeaveCriticalSection(&cs);
}
Method2()
{
    EnterCriticalSection(&cs);
    // Access the common resource (read/write), let's say a global queue
    LeaveCriticalSection(&cs);
}
CASE II:
Scenario: I have two different methods, and they do not share any resource.
Requirement: Different threads may not run Method1() simultaneously, and similarly for Method2(). But there is no problem if different threads run the two methods at the same time. For example, ThreadA and ThreadB may not both access Method1() at the same time, but it should be possible for ThreadA to run Method1() while ThreadB runs Method2() simultaneously.
In this case, if I use the same critical section object in both methods, then when ThreadA runs Method1(), ThreadB will have to wait for ThreadA to leave the critical section before it can start executing Method2().
Method1()
{
    EnterCriticalSection(&cs);
    // Do something
    LeaveCriticalSection(&cs);
}
Method2()
{
    EnterCriticalSection(&cs);
    // Do something
    LeaveCriticalSection(&cs);
}
Solution: So in this case, do we use a different critical section object for each method? Am I correct in both cases?
I am not clear on the concept of a critical section object when dealing with multiple functions. Please help me understand the concept. I searched online but could not find a relevant source that would clear my doubts.
Critical sections protect resources by blocking threads from running specific parts of your code.
For that reason, each resource has an associated critical section, but multiple pieces of code that all access the same resource must also use the same critical section object.
It is technically OK for a single critical section to protect two resources. This can actually make sense if the two objects are always used together. In fact, you could run your program with just one critical section for everything. Not efficient, but not unheard of either: for instance, Python uses that mechanism (the Global Interpreter Lock).
But if they're two independent objects, using individual critical sections allows concurrent use of the objects, which is far more efficient. The downside is that if the two objects are sometimes used together, you must always enter their critical sections in the same order to avoid deadlock.
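A minimal sketch of CASE II in Python (lock1, lock2, and the method names are illustrative stand-ins for the two critical section objects): each method is serialized against itself, the two methods can run concurrently, and any code that needs both resources takes the locks in one fixed order:

```python
import threading

lock1 = threading.Lock()   # stands in for Method1's critical section
lock2 = threading.Lock()   # stands in for Method2's critical section
results = []

def method1(tag):
    with lock1:                      # serializes Method1 only
        results.append(("m1", tag))

def method2(tag):
    with lock2:                      # serializes Method2 only
        results.append(("m2", tag))

def both(tag):
    # When both resources are needed together, always take the
    # locks in the same fixed order (lock1, then lock2) to avoid deadlock.
    with lock1:
        with lock2:
            results.append(("both", tag))

a = threading.Thread(target=method1, args=("A",))
b = threading.Thread(target=method2, args=("B",))
a.start()
b.start()
a.join()
b.join()
both("C")
```

Because the two methods use different locks, ThreadA in method1 never blocks ThreadB in method2.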
Both of your assumptions are correct, as per your requirements.
Basically, a critical section is a region of code that you want only a single thread to enter at a time. Until that thread finishes, no other thread is allowed to enter that critical section, and a critical section can span multiple functions as well.
You acquire a mutex at the beginning of a critical section and release it after leaving that section.
What is the difference between the above two?
This question came to my mind because I found that
Monitors and locks provide mutual exclusion
Semaphores and conditional variables provide synchronization
Is this true?
Also while searching I found this article
Any clarifications please.
Mutual exclusion means that only a single thread should be able to access the shared resource at any given point in time. This avoids race conditions between threads acquiring the resource. Monitors and locks provide the functionality to do so.
Synchronization means that you synchronize/order the access of multiple threads to the shared resource.
Consider the example:
If you have two threads, Thread 1 & Thread 2.
Thread 1 and Thread 2 execute in parallel, but before Thread 1 can execute, say, statement A in its sequence, Thread 2 must have executed statement B in its sequence. What you need here is synchronization, and a semaphore provides that. You put a semaphore wait before statement A in Thread 1, and you post to the semaphore after statement B in Thread 2.
This ensures the synchronization you need.
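The wait/post ordering described above can be sketched in Python, where Semaphore.acquire is the wait and Semaphore.release is the post (the order list is just instrumentation to observe the result):

```python
import threading

order = []
gate = threading.Semaphore(0)   # starts at 0, so Thread 1 must wait

def thread2():
    order.append("B")           # statement B runs first
    gate.release()              # post: signal that B is done

def thread1():
    gate.acquire()              # wait: blocks until B has executed
    order.append("A")           # statement A runs only after B

t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2)
t1.start()
t2.start()
t1.join()
t2.join()
```

No matter how the scheduler interleaves the two threads, B always precedes A, because the semaphore's initial count of 0 forces Thread 1 to block until Thread 2 posts.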
The best way to understand the difference is with the help of an example. Below is a program that solves the classical producer-consumer problem with semaphores. To provide mutual exclusion we generally use a binary semaphore or mutex, and to provide synchronization we use a counting semaphore.
BufferSize = 3;
semaphore mutex = 1; // used for mutual exclusion
semaphore empty = BufferSize; // used for synchronization
semaphore full = 0; // used for synchronization
Producer()
{
int widget;
while (TRUE) { // loop forever
make_new(widget); // create a new widget to put in the buffer
down(&empty); // decrement the empty semaphore
down(&mutex); // enter critical section
put_item(widget); // put widget in buffer
up(&mutex); // leave critical section
up(&full); // increment the full semaphore
}
}
Consumer()
{
int widget;
while (TRUE) { // loop forever
down(&full); // decrement the full semaphore
down(&mutex); // enter critical section
remove_item(widget); // take a widget from the buffer
up(&mutex); // leave critical section
consume_item(widget); // consume the item
}
}
In the above code, the mutex variable provides mutual exclusion (it allows only one thread at a time into the critical section), whereas the full and empty variables are used for synchronization (to arbitrate access to the shared resource among the various threads).
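The pseudocode above translates almost line for line into runnable Python (the buffer list, consumed list, and thread counts are illustrative choices; the comments map each call back to the down/up operations):

```python
import threading

BUFFER_SIZE = 3
buffer = []                                 # the shared bounded buffer
mutex = threading.Semaphore(1)              # binary semaphore: mutual exclusion
empty = threading.Semaphore(BUFFER_SIZE)    # counts free slots
full = threading.Semaphore(0)               # counts filled slots
consumed = []

def producer(n):
    for widget in range(n):
        empty.acquire()        # down(&empty): wait for a free slot
        mutex.acquire()        # down(&mutex): enter critical section
        buffer.append(widget)  # put_item(widget)
        mutex.release()        # up(&mutex): leave critical section
        full.release()         # up(&full): one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # down(&full): wait for an item
        mutex.acquire()        # down(&mutex): enter critical section
        item = buffer.pop(0)   # remove_item(widget)
        mutex.release()        # up(&mutex): leave critical section
        empty.release()        # up(&empty): one more free slot
        consumed.append(item)  # consume_item(widget)

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start()
c.start()
p.join()
c.join()
```

With a single producer and a single consumer, every item comes out in the order it went in, and the buffer never holds more than BUFFER_SIZE items.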
There is a global counter, long count.
Thread A does
EnterCriticalSection(&crit);
// .... do something
count++; // (*1)
// .. do something else
LeaveCriticalSection(&crit);
Thread B does
InterlockedDecrement(&count); // (*2) not under critical secion.
At (*1), I am under a critical section. At (*2), I am not.
Is (*1) safe without InterlockedIncrement()? (It is protected by the critical section.)
Do I need InterlockedIncrement() at (*1) ?
I feel that I can argue both for and against.
You should use one or the other, not mix them.
While InterlockedDecrement is guaranteed to be atomic, operator++ is not, though in this case it likely will be, depending upon your architecture. In this case, you're not actually protecting the count variable at all.
Given that you appear to want to do simple increment/decrement operations, I would suggest that you simply remove the critical section in this case and use the associated Interlocked* functions.
Both threads should use either InterlockedDecrement/InterlockedIncrement, or the same critical section. There is no guarantee that mixing and matching will work correctly.
Consider the following sequence of events:
Thread A: enter the critical section
Thread A: read count into a register
Thread A: increment the value in the register
Thread B: InterlockedDecrement(&count) <<< There's nothing to stop this from happening!
Thread A: write the new count
Thread A: leave the critical section
Net result: you've lost a decrement!
A useful (if deliberately simplified) mental model for critical sections is this: all entering a critical section does is prevent other threads from entering the same critical section. It doesn't automatically prevent other threads from doing other things that may require synchronization.
And all InterlockedDecrement does is ensure the atomicity of the decrement. It doesn't prevent any other thread performing computations on an outdated copy of the variable, and then writing the result back.
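The lost-update interleaving above can be replayed deterministically. This Python sketch simulates the steps sequentially rather than racing real threads (the register variable plays the CPU register, a Lock stands in for the critical section, and the bare count -= 1 stands in for the other thread's InterlockedDecrement):

```python
import threading

count = 10
crit = threading.Lock()   # stands in for the critical section

# Thread A: enters the critical section and reads count into a "register".
crit.acquire()
register = count          # A reads 10

# Thread B: decrements count directly, without taking the lock.
# Nothing stops this; it mimics InterlockedDecrement running
# while A is still inside its critical section.
count -= 1                # count is now 9

# Thread A: increments its stale copy and writes it back.
register += 1
count = register          # writes 11: B's decrement is lost
crit.release()
```

The net effect of one increment and one decrement should have left count at 10, but it ends at 11: the decrement was silently overwritten, exactly as the event sequence above describes.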
Yes, you do.
Otherwise the following could happen:
The value is read inside the critical section
The value is decremented atomically by the other thread
The stale value is incremented and written back, wiping out the atomic decrement
Furthermore, you need the same critical section for both, since it doesn't help to lock on separate things. :)
// locks a critical section, and unlocks it automatically
// when the lock goes out of scope
CAutoLock(CCritSec * plock)
The above is from wxutil.h. Does it lock access across different processes, or only across different threads within the same process?
Just across threads. From the doc of CAutoLock:
The CAutoLock constructor locks the critical section, ...
and CCritSec:
The CCritSec class provides a thread lock.
More explicitly, from the description of Critical Section Objects:
A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process.
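The scope-based locking that CAutoLock provides in C++ corresponds to the with statement in Python: the lock is taken on entry and released automatically on exit, even when an exception unwinds the block. A small sketch (lock, log, and guarded are illustrative names):

```python
import threading

lock = threading.Lock()
log = []

def guarded():
    # `with` plays the role of CAutoLock: the lock is acquired on
    # entry and released automatically when control leaves the
    # block, even if an exception is raised.
    with lock:
        log.append("inside")
        raise ValueError("oops")

try:
    guarded()
except ValueError:
    pass
```

After the exception propagates out of guarded(), the lock is already unlocked, which is the whole point of tying the lock's lifetime to a scope.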
I'm trying to do the same thing as suggested in this solution:
How can I create a thread-safe singleton pattern in Windows?
But, where should the critical section be initialized and uninitialized?
Wrap the critical section in a class (use a ready-made one or craft your own) and declare a global variable of that class; then the critical section will be initialized during program startup and deinitialized on program exit. Since startup and exit are done on a single thread, this works reliably.
Use pthread_once() and you can initialize the critical section before you use it for the first time. Windows has the InitOnceExecuteOnce function.