How does a Critical Section object work for multiple methods - C++

CASE I:
Scenario: I have two different methods that share a common global resource. Method1() is accessed by ThreadA, and Method2() by many other threads, but not by ThreadA.
Requirement: If ThreadA is accessing Method1(), no other thread may access Method2().
Solution: Using a common critical section object in both methods will prevent any conflict over the global resource.
For example:
Method1()
{
    EnterCriticalSection(&cs);
    // Access the common resource (read/write), say a global queue
    LeaveCriticalSection(&cs);
}
Method2()
{
    EnterCriticalSection(&cs);
    // Access the common resource (read/write), say a global queue
    LeaveCriticalSection(&cs);
}
CASE II:
Scenario: I have two different methods, and they do not share any resource.
Requirement: Different threads may not run Method1() simultaneously, and likewise for Method2(). But there is no problem if different threads run the two methods at the same time. For example, ThreadA and ThreadB may not both be inside Method1() at the same time, but it should be possible for ThreadA to run Method1() while ThreadB runs Method2() simultaneously.
In this case, if I use the same critical section object in both methods, then if ThreadA runs Method1(), ThreadB will have to wait for ThreadA to leave the critical section before it can start executing Method2().
Method1()
{
    EnterCriticalSection(&cs);
    // Do something
    LeaveCriticalSection(&cs);
}
Method2()
{
    EnterCriticalSection(&cs);
    // Do something
    LeaveCriticalSection(&cs);
}
Solution: So in this case, do we use a different critical section object for each method? Am I correct in both cases?
I am not clear on the concept of critical section objects when dealing with multiple functions. Please help me understand the concept. I searched online but could not find a relevant source that cleared up my doubts.

Critical sections protect resources by blocking threads from running specific parts of your code.
For that reason, each resource has an associated critical section, but multiple pieces of code that all access the same resource must also use the same critical section object.
It is technically OK for a single critical section to protect two resources. This can actually make sense if the two objects are always used together. In fact, you could run your program with just one critical section for everything. Not efficient, but not unheard of either: Python, for instance, uses that mechanism (the Global Interpreter Lock).
But if they're two independent objects, using individual critical sections allows concurrent use of the objects. This is far more efficient. The downside is that if the two objects are sometimes used together, you should always enter their critical sections in the same order.

Both of your assumptions are correct, as per your requirements.
Basically, a critical section is a region of code that you want only a single thread to enter at a time. Until that thread finishes, no other thread is allowed to enter that critical section. A critical section can span multiple functions as well.
You acquire the mutex at the beginning of the critical section and release it after leaving that section.

Related

OpenMP: how to explicitly divide code into different threads

Let's say I have a Writer class that generates some data and a Reader class that consumes it. I want them to run all the time under different threads. How can I do that with OpenMP?
This is what I would like to have:
class Reader
{
public:
void run();
};
class Writer
{
public:
void run();
};
int main()
{
Reader reader;
Writer writer;
reader.run(); // starts asynchronously
writer.run(); // starts asynchronously
wait_until_finished();
}
I guess the first answers will point to separating each operation into a section, but sections do not guarantee that code blocks will be given to different threads.
Can tasks do it? As far as I understood after reading about tasks, each code block is executed just once, but the assigned thread can change.
Any other solution?
I ask because I have inherited code that uses pthreads and explicitly creates several threads, and I would like to know whether it could be written with OpenMP. The issue is that some of the threads were not smartly written and contain busy-waiting loops. In that situation, if two objects with busy-waiting loops are assigned to the same OpenMP thread (and hence are executed sequentially), they can reach a deadlock. At least, I think that could happen with sections, but I am not sure about tasks.
Serialisation could also happen with tasks. One horrible solution would be to reimplement sections on your own, with a guarantee that each section runs in a separate thread:
#pragma omp parallel num_threads(3)
{
switch (omp_get_thread_num())
{
case 0: wait_until_finished(); break;
case 1: reader.run(); break;
case 2: writer.run(); break;
}
}
This code assumes that you would like wait_until_finished() to execute in parallel with reader.run() and writer.run(). This is necessary since, in OpenMP, the program executes in parallel only within the scope of the parallel construct; there is no way to put things in the background, so to speak.
If you're rewriting the code anyway, you might be better off moving to Threading Building Blocks (TBB; http://www.threadingbuildingblocks.org).
TBB has explicit support for pipeline-style operation (or more complicated task graphs) while maintaining cache locality and independence from the underlying number of threads.

Is it safe to have common name of a synchronization object within two modules used in same process?

I am looking at legacy code. I see that two DLLs share a common code base, and therefore the DLLs share some common methods and the name of a synchronization object. That is, they both create and use a critical section synchronization object with the same variable name. I know that global/static variables are not shared between two modules, even within the same process on Windows. In theory we are creating two independent synchronization objects that operate independently in their respective DLLs, so it shouldn't be a problem.
Now consider this scenario -
There is a process Proc.exe which loads two DLLs, A.dll and B.dll, as described above. Both of these DLLs have a critical section object with the common name g_cs and some common method names; for now, consider one common method named foo(), which is intended to be thread safe:
foo()
{
    ....
    EnterCriticalSection(&g_cs);
    ....
    ....
    LeaveCriticalSection(&g_cs);
    ....
    ....
}
Suppose two threads T1 and T2 are running within Proc.exe and are currently in the foo() method.
Sometimes I observe a deadlock. From the logs I see that T1 and T2 acquire the critical section g_cs one after the other and never release g_cs afterwards. My understanding is that T1 and T2 can simultaneously hold g_cs only if they are running in the context of A.dll and B.dll respectively. If that is the case, shouldn't this execution be safe?
My understanding is that a critical section object belongs to the process, so the problem might be caused by the common name g_cs of the synchronization object in the two DLLs. But theoretically that shouldn't happen.
As already mentioned, giving the two variables the same name is not the issue. Even if the critical sections were somehow being merged, that alone wouldn't lead to deadlock.
You should attach a debugger at the moment of the deadlock. The stacks of the threads will probably already point to the problem. If it is indeed a deadlock, add the critical sections to the debugger's watch window. There is a field that shows the owner thread ID. Go to that thread (select it in the Threads window) and check its stack.

tricky InterlockedDecrement vs CriticalSection

There is a global long counter, count.
Thread A does
EnterCriticalSection(&crit);
// .... do something
count++; // (*1)
// .. do something else
LeaveCriticalSection(&crit);
Thread B does
InterlockedDecrement(&count); // (*2) not under the critical section
At (*1), I am under a critical section. At (*2), I am not.
Is (*1) safe without InterlockedIncrement()? (It is protected by the critical section.)
Do I need InterlockedIncrement() at (*1)?
I feel that I can argue both for and against.
You should use one or the other, not mix them.
While InterlockedDecrement is guaranteed to be atomic, operator++ is not, though in this case it likely will be, depending on your architecture. Either way, you're not actually protecting the count variable at all.
Given that you appear to want simple increment/decrement operations, I would suggest that you simply remove the critical section and use the corresponding Interlocked* functions in both threads.
Both threads should use either InterlockedDecrement/InterlockedIncrement, or the same critical section. There is no reason to expect mixing and matching to work correctly.
Consider the following sequence of events:
Thread A: enter the critical section
Thread A: read count into a register
Thread A: increment the value in the register
Thread B: InterlockedDecrement(&count) <<< There's nothing to stop this from happening!
Thread A: write the new count
Thread A: leave the critical section
Net result: you've lost a decrement!
A useful (if deliberately simplified) mental model for critical sections is this: all entering a critical section does is prevent other threads from entering the same critical section. It doesn't automatically prevent other threads from doing other things that may require synchronization.
And all InterlockedDecrement does is ensure the atomicity of the decrement. It doesn't prevent any other thread performing computations on an outdated copy of the variable, and then writing the result back.
Yes, you do.
Otherwise the following could happen:
The value is read (by Thread A)
The value is decremented atomically (by Thread B)
The original value is incremented and written back (by Thread A), wiping out the atomic decrement
Furthermore, you need the same critical section for both, since it doesn't help to lock on separate things. :)

Multithreading and Critical Sections Use - C++

I'm a little confused as to the proper use of critical sections in multithreaded applications. In my application there are several objects (some circular buffers and a serial port object) that are shared among threads. Should access to these objects always be placed within critical sections, or only at certain times? I suspect only at certain times, because when I attempted to wrap each use with an EnterCriticalSection / LeaveCriticalSection pair, I ran into what seemed to be a deadlock condition. Any insight you may have would be appreciated. Thanks.
If you share a resource across threads, and some of those threads read while others write, then it must be protected always.
It's hard to give any more advice without knowing more about your code, but here are some general points to keep in mind.
1) Critical sections protect resources, not processes.
2) Enter/leave critical sections in the same order across all threads. If thread A enters Foo and then enters Bar, thread B must enter Foo and Bar in the same order. If you don't, you could create a deadlock.
3) Entering and leaving must be done in opposite order. Example, since you entered Foo then entered Bar, you must leave Bar before leaving Foo. If you don't do this, you could create a deadlock.
4) Keep locks for the shortest time period reasonably possible. If you're done with Foo before you start using Bar, release Foo before grabbing Bar. But you still have to keep the ordering rules in mind from above. In every thread that uses both Foo and Bar, you must acquire and release in the same order:
Enter Foo
Use Foo
Leave Foo
Enter Bar
Use Bar
Leave Bar
5) If you only read 99.9% of the time and write 0.1% of the time, don't try to be clever. You still have to enter the critical section even when you're only reading. This is because you don't want a write to start while you're in the middle of a read.
6) Keep the critical sections granular. Each critical section should protect one resource, not multiple resources. If you make the critical sections too "big", you could serialize your application or create a very mysterious set of deadlocks or races.
Use a C++ wrapper around the critical section which supports RAII:
{
    CriticalSectionLock lock(mutex_);
    // Do stuff...
}
The constructor for the lock acquires the mutex and the destructor releases the mutex even if an exception is thrown.
Try not to acquire more than one lock at a time, and try to avoid calling functions outside your class while holding locks; this helps you avoid acquiring locks in different places, so you tend to get fewer opportunities for deadlock.
If you must hold more than one lock at the same time, sort the locks by their address and acquire them in that order. That way, multiple threads acquire the same locks in the same order without any coordination.
With an I/O port, consider whether you need to lock output at the same time as input: often something tries to write and then expects to read, or vice versa. If you have two locks, you can get a deadlock if one thread writes then reads while another reads then writes. Often, having a single thread that does the I/O plus a queue of requests solves that, but that's a bit more complicated than just wrapping calls in locks, and without much more detail I can't recommend it.

Adding locks to the class by composition

I'm writing a thread-safe class in C++. All of its public methods use locks (non-recursive spin locks); private methods are lock-free. So everything should be OK: the user calls a public method, which locks the object and then does the work through private methods. But I got a deadlock when a public method called another public method. I've read that recursive mutexes are bad because they're difficult to debug. So I use C stdio's approach: the public method Foo() only locks the object and calls Foo_nolock() to do the actual work. But I don't like these _nolock() methods; I think they duplicate my code.
So I've got an idea: I'll make a lock-free class BarNoLock and a thread-safe class Bar that has only one member: an instance of BarNoLock. All of Bar's methods will simply lock this member and call its methods.
Is it a good idea or maybe there are some better patterns/practices? Thanks.
Update: I know about pimpl and bridge. I ask about multi-threading patterns, not OOP.
I'm not sure why recursive mutexes would be considered bad; see this question for a discussion of them:
Recursive Lock (Mutex) vs Non-Recursive Lock (Mutex)
But I don't think that's necessarily your problem, because Win32 critical sections support multiple entries from the same thread without blocking. From the documentation:
When a thread owns a critical section, it can make additional calls to EnterCriticalSection or TryEnterCriticalSection without blocking its execution. This prevents a thread from deadlocking itself while waiting for a critical section that it already owns. To release its ownership, the thread must call LeaveCriticalSection one time for each time that it entered the critical section. There is no guarantee about the order in which waiting threads will acquire ownership of the critical section.
So maybe you were doing something else wrong when you deadlocked yourself? Having to work around deadlocking yourself on the same mutex from the same thread with strange function-call semantics is not something you should have to do.
Looks like you have reinvented the Bridge Pattern. Sounds perfectly in order.