One mutex vs multiple mutexes. Which one is better for the thread pool? (C++)

Example here; I just want to protect iData to ensure that only one thread visits it at a time.
struct myData;
myData iData;
Method 1, mutex inside the call function (multiple mutexes could be created):
void _proceedTest(myData &data)
{
    std::mutex mtx;
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}
int const nMaxThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData)));
}
for (auto& th : threads) th.join();
Method 2, use only one mutex:
void _proceedTest(myData &data, std::mutex &mtx)
{
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}
std::mutex mtx;
int const nMaxThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData), std::ref(mtx)));
}
for (auto& th : threads) th.join();
I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at the same time.
If Method 1 is correct, I am not sure whether Method 1 is better than Method 2.
Thanks!

I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at the same time.
Your first example creates a local mutex variable on the stack; it won't be shared with the other threads, so it's completely useless.
It won't guarantee exclusive access to iData.
If Method 1 is correct, I am not sure whether Method 1 is better than Method 2.
It isn't correct.

The other answers are correct on the technical level, but there is an important language-independent point missing: always prefer to minimize the number of different mutexes/locks/...!
The reason: as soon as a thread has to acquire more than one thing in order to do something (and then release everything it acquired), ordering becomes crucial.
When you have two locks and two different pieces of code, like:
getLockA() {
    getLockB() {
        do something
        release B
    release A
and
getLockB() {
    getLockA() {
you can quickly run into deadlocks, because two threads/processes can acquire one lock each, and then both are stuck waiting for the other one to release its lock. Of course, when looking at the above example, "you would never make that mistake, and always go A first, then B". But what if those locks live in completely different parts of your application, and aren't acquired in the same method or class, but over the course of, say, 3 to 5 nested method invocations?
Thus: when you can solve your problem with one lock, use one lock only! The more locks you need to get something done, the higher the risk of ending up in deadlocks.
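If you genuinely need more than one lock at once, the standard library can at least acquire them atomically. A minimal sketch (mtxA/mtxB are made-up names for the two locks):
#include <mutex>

std::mutex mtxA;
std::mutex mtxB;

void codePathOne()
{
    // std::scoped_lock (C++17) acquires both mutexes with a built-in
    // deadlock-avoidance algorithm, so the order they are named in the
    // source no longer matters.
    std::scoped_lock lk(mtxA, mtxB);
    // ... work on the data guarded by both locks ...
} // both mutexes released here

void codePathTwo()
{
    // Naming them in the opposite order cannot deadlock against codePathOne().
    std::scoped_lock lk(mtxB, mtxA);
    // ...
}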

Method 1 only works if you make the mutex variable static.
void _proceedTest(myData &data)
{
    static std::mutex mtx;
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}
This will make mtx be shared by all threads that enter _proceedTest.
Since a static function-scope variable is only visible to users of the function, it is not really a sufficient lock for the passed-in data: it is conceivable that multiple threads could be calling different functions that each want to manipulate data.
Thus, even though Method 1 is salvageable, Method 2 is still better, even though the cohesion between the lock and the data is weak.

The mutex in version 1 goes out of scope once you leave _proceedTest; locking a mutex like that makes no sense, because it is never accessible to the other threads.
In the second version, multiple threads can share the mutex (as long as it doesn't go out of scope, for example as a class member). This way one thread can lock it and the other threads can see that it is locked (and won't be able to lock it as well, hence the term mutual exclusion).
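For illustration, one common way to get that sharing, and to tighten the cohesion between the lock and the data it guards, is to bundle both in one object. A minimal sketch (SharedData is a made-up name):
#include <mutex>

struct myData { /* ... */ };

class SharedData
{
    std::mutex mtx; // lives as long as the object, so all threads share it
    myData data;    // the thing the mutex actually guards
public:
    void proceedTest()
    {
        std::lock_guard<std::mutex> lk(mtx); // every caller locks the same member
        // modifyData(data);
    }
};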

Related

When would getters and setters with mutex be thread safe?

Consider the following class:
class testThreads
{
private:
    int var;        // variable to be modified
    std::mutex mtx; // mutex
public:
    void set_var(int arg) // setter
    {
        std::lock_guard<std::mutex> lk(mtx);
        var = arg;
    }
    int get_var() // getter
    {
        std::lock_guard<std::mutex> lk(mtx);
        return var;
    }
    void hundred_adder()
    {
        for (int i = 0; i < 100; i++)
        {
            int got = get_var();
            set_var(got + 1);
            std::this_thread::sleep_for(std::chrono::milliseconds(100)); // brief pause
        }
    }
};
When I create two threads in main(), each with a thread function of hundred_adder modifying the same variable var, the end result of var is always different, i.e. not 200 but some other number.
Conceptually speaking, why is this use of a mutex with getter and setter functions not thread-safe? Do the lock guards fail to prevent the race condition on var? And what would be an alternative solution?
Thread a: get 0
Thread b: get 0
Thread a: set 1
Thread b: set 1
Lo and behold, var is 1 even though it should've been 2.
It should be obvious that you need to lock the whole operation:
for (int i = 0; i < 100; i++) {
    std::lock_guard<std::mutex> lk(mtx);
    var += 1;
}
Alternatively, you could make the variable atomic (even a relaxed one could do in your case).
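For example, a minimal sketch of that atomic variant; the relaxed ordering is enough when only the counter's final value matters, not its ordering against other data:
#include <atomic>

std::atomic<int> var{0};

void hundred_adder()
{
    for (int i = 0; i < 100; i++)
        var.fetch_add(1, std::memory_order_relaxed); // one indivisible read-modify-write
}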
int got = get_var();
set_var(got + 1);
Your get_var() and set_var() themselves are thread safe. But this combined sequence of get_var() followed by set_var() is not. There is no mutex that protects this entire sequence.
You have multiple concurrent threads executing this. You have multiple threads calling get_var(). After the first one finishes it and unlocks the mutex, another thread can lock the mutex immediately and obtain the same value for got that the first thread did. There's absolutely nothing that prevents multiple threads from locking and obtaining the same got, concurrently.
Then both threads will call set_var(), updating the mutex-protected int to the same value.
That's just one possibility here. You could easily have multiple threads acquiring the mutex sequentially, incrementing var several times, only to be followed by some other, stalled thread that called get_var() several seconds ago and is only now getting around to calling set_var(), thus resetting var to a much smaller value.
The code shown is thread-safe in the sense that it will never set or get a partial value of the variable.
But your usage of the methods does not guarantee that the value changes correctly: reads and writes from multiple threads can collide with each other. Both threads read the value (11), both increment it (to 12), and then both set the same value (12); now you counted 2 but effectively incremented only once.
Options to fix it:
provide a "safe increment" operation;
provide an equivalent of InterlockedCompareExchange to make sure the value you are updating corresponds to the original one, and retry as necessary;
wrap the calling code in a separate mutex, or use another synchronization mechanism to prevent the operations from interleaving.
Why don't you just use std::atomic for the shared data (var in this case)? That will be safer and more efficient.
This is an absolute classic.
One thread obtains the value of var, releases the mutex, and another obtains the same value before the first thread has a chance to update it.
Consequently the process risks losing increments.
There are three obvious solutions:
void testThreads::inc_var() {
    std::lock_guard<std::mutex> lk(mtx);
    ++var;
}
That's safe because the mutex is held until the variable is updated.
Next up:
bool testThreads::compare_and_inc_var(int val) {
    std::lock_guard<std::mutex> lk(mtx);
    if (var != val) return false;
    ++var;
    return true;
}
Then write code like:
int val;
do {
    val = get_var();
} while (!compare_and_inc_var(val));
This works because the loop repeats until it confirms it is updating the value it read. It could result in livelock, though in this case any livelock has to be transient, because a thread can only fail to make progress when another succeeds.
Finally, replace int var with std::atomic<int> var and use either ++var, var.compare_exchange_strong(val, val + 1), or var.fetch_add(1) to update it.
NB: notice that compare_exchange_strong(var, var + 1), passing the atomic itself as the expected value, would be invalid...
++ is guaranteed to be atomic on std::atomic<> types, but despite 'looking' like a single operation, no such guarantee exists for a plain int in general.
std::atomic<> also provides appropriate memory barriers (and ways to hint what kind of barrier is needed) to ensure proper inter-thread communication.
std::atomic<> should be a wait-free, lock-free implementation where the hardware allows it. Check your documentation, and the is_lock_free() member function.
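To make that concrete, a small sketch of the atomic forms; note that the real member functions are compare_exchange_weak/compare_exchange_strong, and that the expected value is passed by reference and refreshed with the current value on failure:
#include <atomic>

std::atomic<int> var{0};

void increment_simple()
{
    ++var; // a single atomic read-modify-write, so no lost updates
}

void increment_cas()
{
    int expected = var.load();
    // On failure, compare_exchange_weak writes the current value back into
    // 'expected', so the loop retries with fresh data until it wins the race.
    while (!var.compare_exchange_weak(expected, expected + 1))
        ; // retry
}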

Using a mutex to block execution from outside the critical section

I'm not sure I got the terminology right, but here goes: I have this function that is used by multiple threads to write data (pseudocode in the comments illustrates what I want):
// these are initialized in the constructor
int* data;
std::atomic<size_t> size;

void write(int value) {
    // wait here while "read_lock"
    // set "write_lock" to "write_lock" + 1
    auto slot = size.fetch_add(1, std::memory_order_acquire);
    data[slot] = value;
    // set "write_lock" to "write_lock" - 1
}
The order of the writes is not important; all I need here is for each write to go to a unique slot.
Every once in a while, though, I need one thread to read the data using this function:
int* read() {
    // set "read_lock" to true
    // wait here while "write_lock"
    int* ret = data;
    data = new int[capacity];
    size = 0;
    // set "read_lock" to false
    return ret;
}
so it basically swaps out the buffer and returns the old one (I've removed the capacity logic to keep the snippets short)
In theory this should lead to two operating scenarios:
1 - just a bunch of threads writing into the container
2 - when some thread executes the read function, all new writers have to wait, the reader waits until all existing writes are finished, then it does the read logic, and scenario 1 can continue.
The question is that I don't know what kind of barrier to use for the locks:
A spinlock would be wasteful, since there are many containers like this and they all need CPU cycles.
I don't know how to apply std::mutex, since I only want the write function to be in a critical section if the read function is triggered; wrapping the whole write function in a mutex would cause unnecessary slowdown in operating scenario 1.
So what would be the optimal solution here?
If you have C++14 capability then you can use a std::shared_timed_mutex to separate out readers and writers. In this scenario it seems you need to give your writer threads shared access (allowing other writer threads at the same time) and your reader threads unique access (kicking all other threads out).
So something like this may be what you need:
class MyClass
{
public:
    using mutex_type  = std::shared_timed_mutex;
    using shared_lock = std::shared_lock<mutex_type>;
    using unique_lock = std::unique_lock<mutex_type>;

private:
    mutable mutex_type mtx;

public:
    // All updater threads can operate at the same time
    auto lock_for_updates() const
    {
        return shared_lock(mtx);
    }

    // Reader threads need to kick all the updater threads out
    auto lock_for_reading() const
    {
        return unique_lock(mtx);
    }
};

// many threads can call this
void do_writing_work(std::shared_ptr<MyClass> sptr)
{
    auto lock = sptr->lock_for_updates();
    // update the data here
}

// access the data from one thread only
void do_reading_work(std::shared_ptr<MyClass> sptr)
{
    auto lock = sptr->lock_for_reading();
    // read the data here
}
The shared_locks allow other threads to gain a shared_lock at the same time but prevent a unique_lock from gaining simultaneous access. When a reader thread tries to gain a unique_lock, all shared_locks will be vacated before the unique_lock takes exclusive control.
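For completeness, a hedged usage sketch driving the class above (the thread counts are arbitrary, and it assumes the MyClass, do_writing_work and do_reading_work definitions from the snippet):
#include <memory>
#include <thread>
#include <vector>

int main()
{
    auto sptr = std::make_shared<MyClass>();

    std::vector<std::thread> writers;
    for (int i = 0; i < 4; ++i)                      // arbitrary writer count
        writers.emplace_back(do_writing_work, sptr); // all writers share the lock

    std::thread reader(do_reading_work, sptr);       // kicks the writers out

    for (auto& w : writers) w.join();
    reader.join();
}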
You can also do this with regular mutexes and condition variables rather than a shared mutex. Supposedly shared_mutex has higher overhead, so I'm not sure which will be faster. With Gallik's solution you'd presumably be paying to lock the shared mutex on every write call; I got the impression from your post that write gets called much more often than read, so maybe this is undesirable.
int* data; // initialized somewhere
std::atomic<size_t> size = 0;
std::atomic<bool> reading = false;
std::atomic<int> num_writers = 0;
std::mutex entering;
std::mutex leaving;
std::condition_variable cv;

void write(int x) {
    ++num_writers;
    if (reading) {
        --num_writers;
        if (num_writers == 0)
        {
            std::lock_guard l(leaving);
            cv.notify_one();
        }
        { std::lock_guard l(entering); } // blocks until the reader releases it
        ++num_writers;
    }
    auto slot = size.fetch_add(1, std::memory_order_acquire);
    data[slot] = x;
    --num_writers;
    if (reading && num_writers == 0)
    {
        std::lock_guard l(leaving);
        cv.notify_one();
    }
}

int* read() {
    int* other_data = new int[capacity];
    {
        std::unique_lock enter_lock(entering);
        reading = true;
        std::unique_lock leave_lock(leaving);
        cv.wait(leave_lock, [] () { return num_writers == 0; });
        std::swap(data, other_data);
        size = 0;
        reading = false;
    }
    return other_data;
}
It's a bit complicated and took me some time to work out, but I think this should serve the purpose pretty well.
In the common case where only writing is happening, reading is always false, so you take the usual path and pay only for two additional atomic increments and two untaken branches. The common path does not need to lock any mutexes, unlike the solution involving a shared mutex, which is supposedly expensive: http://permalink.gmane.org/gmane.comp.lib.boost.devel/211180.
Now suppose read is called. The expensive, slow heap allocation happens first, while writing continues uninterrupted. Next, the entering lock is acquired, which has no immediate effect. Now reading is set to true. Immediately, any new calls to write enter the first branch and eventually hit the entering lock, which they are unable to acquire (as it's already taken), and those threads are put to sleep.
Meanwhile, the read thread is now waiting on the condition that the number of writers is 0. If we're lucky, this can go through right away. If, however, there are threads in write in either of the two windows between incrementing and decrementing num_writers, it will not. Each time a write thread decrements num_writers, it checks whether it has reduced that number to zero, and when it has, it signals the condition variable. Because num_writers is atomic, which prevents various reordering shenanigans, it is guaranteed that the last such thread will see num_writers == 0; the reader may also be notified more than once, but this is fine and cannot result in bad behavior.
Once that condition variable has been signalled, all writers are either trapped in the first branch or done modifying the array, so the read thread can now safely swap the data, unlock everything, and return what it needs to.
As mentioned before, in typical operation there are no locks, just increments and untaken branches. Even when a read does occur, the read thread will have one lock and one condition variable wait, whereas a typical write thread will have about one lock/unlock of a mutex and that's all (one, or a small number of write threads, will also perform a condition variable notification).

std::lock_guard example, explanation on why it works

I've reached a point in my project that requires communication between threads on resources that may very well be written to, so synchronization is a must. However, I don't really understand synchronization beyond the basic level.
Consider the last example in this link: http://www.bogotobogo.com/cplusplus/C11/7_C11_Thread_Sharing_Memory.php
#include <iostream>
#include <thread>
#include <list>
#include <algorithm>
#include <mutex>

using namespace std;

// a global variable
std::list<int> myList;

// a global instance of std::mutex to protect the global variable
std::mutex myMutex;

void addToList(int max, int interval)
{
    // the access to this function is mutually exclusive
    std::lock_guard<std::mutex> guard(myMutex);
    for (int i = 0; i < max; i++) {
        if ((i % interval) == 0) myList.push_back(i);
    }
}

void printList()
{
    // the access to this function is mutually exclusive
    std::lock_guard<std::mutex> guard(myMutex);
    for (auto itr = myList.begin(), end_itr = myList.end(); itr != end_itr; ++itr) {
        cout << *itr << ",";
    }
}

int main()
{
    int max = 100;

    std::thread t1(addToList, max, 1);
    std::thread t2(addToList, max, 10);
    std::thread t3(printList);

    t1.join();
    t2.join();
    t3.join();

    return 0;
}
The example demonstrates how three threads, two writers and one reader, access a common resource (a list).
Two global functions are used: one which is used by the two writer threads, and one being used by the reader thread. Both functions use a lock_guard to lock down the same resource, the list.
Now here is what I just can't wrap my head around: the reader uses a lock in a different scope than the two writer threads, yet it still locks down the same resource. How can this work? My limited understanding of mutexes lends itself well to the writer function; there you have two threads using the exact same function. I can understand that: a check is made right as you are about to enter the protected area, and if someone else is already inside, you wait.
But when the scope is different? That would indicate that there is some mechanism more powerful than the process itself, some sort of runtime environment blocking execution of the "late" thread. But I thought there were no such things in C++. So I am at a loss.
What exactly goes on under the hood here?
Let’s have a look at the relevant line:
std::lock_guard<std::mutex> guard(myMutex);
Notice that the lock_guard references the global mutex myMutex. That is, the same mutex for all three threads. What lock_guard does is essentially this:
Upon construction, it locks myMutex and keeps a reference to it.
Upon destruction (i.e. when the guard's scope is left), it unlocks myMutex.
The mutex is always the same one, it has nothing to do with the scope. The point of lock_guard is just to make locking and unlocking the mutex easier for you. For example, if you manually lock/unlock, but your function throws an exception somewhere in the middle, it will never reach the unlock statement. So, doing it the manual way you have to make sure that the mutex is always unlocked. On the other hand, the lock_guard object gets destroyed automatically whenever the function is exited – regardless how it is exited.
myMutex is global, and it is what is used to protect myList. guard(myMutex) simply engages the lock, and exit from the block causes its destruction, disengaging the lock. guard is just a convenient way to engage and disengage the lock.
With that out of the way: a mutex does not protect any data by itself. It just provides a way to protect data; it is the design pattern that protects data. So if I write my own function to modify the list, as below, the mutex cannot protect it.
void addToListUnsafe(int max, int interval)
{
    for (int i = 0; i < max; i++) {
        if ((i % interval) == 0) myList.push_back(i);
    }
}
The lock only works if all pieces of code that need to access the data engage the lock before accessing it and disengage it after they are done. This design pattern of engaging and disengaging the lock before and after every access is what protects the data (myList in your case).
Now you might wonder why we use a mutex at all; why not, say, a bool? Yes, you can, but you will have to make sure that the bool variable exhibits certain characteristics, including but not limited to the list below:
It must not be cached (volatile) across multiple threads.
Reads and writes must be atomic operations.
Your lock must handle the situation where there are multiple execution pipelines (logical cores, etc.).
There are different synchronization mechanisms that provide "better locking" (across processes versus across threads, multiple processors versus a single processor, etc.) at the cost of "slower performance", so you should always choose the locking mechanism that is just about enough for your situation.
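Connecting this back to the checklist above: std::atomic_flag is the standard building block that has exactly those properties, and a minimal busy-waiting lock sketch built on it (usually worse than a real std::mutex, shown only for illustration) looks like this:
#include <atomic>

std::atomic_flag flag = ATOMIC_FLAG_INIT;

void lock()
{
    // test_and_set is an atomic read-modify-write; acquire ordering makes the
    // data written by the previous holder visible to the thread that wins.
    while (flag.test_and_set(std::memory_order_acquire))
        ; // spin until the previous holder clears the flag
}

void unlock()
{
    flag.clear(std::memory_order_release); // publish our writes and release
}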
Just to add onto what others here have said...
There is an idea in C++ called Resource Acquisition Is Initialization (RAII) which is this idea of binding resources to the lifetime of objects:
Resource Acquisition Is Initialization or RAII, is a C++ programming technique which binds the life cycle of a resource that must be acquired before use (allocated heap memory, thread of execution, open socket, open file, locked mutex, disk space, database connection—anything that exists in limited supply) to the lifetime of an object.
C++ RAII Info
The use of a std::lock_guard<std::mutex> class follows the RAII idea.
Why is this useful?
Consider a case where you don't use a std::lock_guard:
std::mutex m; // global mutex

void oops() {
    m.lock();
    doSomething();
    m.unlock();
}
in this case, a global mutex is used and is locked before the call to doSomething(). Then once doSomething() is complete the mutex is unlocked.
One problem here is what happens if there is an exception? Now you run the risk of never reaching the m.unlock() line which releases the mutex to other threads.
So you need to cover the case where you run into an exception:
std::mutex m; // global mutex

void oops() {
    try {
        m.lock();
        doSomething();
        m.unlock();
    } catch (...) {
        m.unlock(); // now the exception path is covered
        // throw ...
    }
}
This works but is ugly, verbose, and inconvenient.
Now let's write our own simple lock guard:
class lock_guard {
private:
    std::mutex& m;
public:
    lock_guard(std::mutex& m_) : m(m_) { m.lock(); } // lock on construction
    ~lock_guard() { m.unlock(); }                    // unlock on destruction
};
When the lock_guard object is destroyed, it will ensure that the mutex is unlocked.
Now we can use this lock_guard to handle the case from before in a better/cleaner way:
std::mutex m; // global mutex

void ok() {
    lock_guard lk(m); // our simple lock guard protects against the exception case
    doSomething();
} // when the scope is exited, our lock guard object is destroyed and the mutex unlocked
This is the same idea behind std::lock_guard.
Again, this approach is used with many different types of resources, which you can read more about by following the link on RAII.
This is precisely what a lock does. When a thread takes the lock, regardless of where in the code it does so, it must wait its turn if another thread holds the lock. When a thread releases a lock, regardless of where in the code it does so, another thread may acquire that lock.
Locks protect data, not code. They do it by ensuring all code that accesses the protected data does so while it holds the lock, excluding other threads from any code that might access that same data.

What are the critical resources of a std::mutex object?

I am new to concurrency and I have doubts about std::mutex. Say I have an int a; and books tell me to declare a mutex amut; to get exclusive access over a. Now my question is: how does a mutex object recognize which critical resources it has to protect? I mean, which variable?
Say I have two variables int a, b; and now I declare a mutex abmut. What will abmut protect:
both a and b, or only a, or only b?
Your doubts are justified: it doesn't. It's your job as a programmer to make sure you only access a if you've got the mutex. If somebody else has got the mutex, do not access a, or you will have the same problems you'd have without the mutex. That goes for all thread-synchronization constructs. You can use them to protect a resource; they don't do it on their own.
A mutex is more like a sign than a lock. When someone sees a sign saying "occupied" on a public washroom, he will wait until the user gets out and flips the sign. But you have to teach him to wait when he sees the sign; the sign itself won't prevent him from breaking in. Of course, the "wait" order is already established by mutex.lock(), so you can use it conveniently.
A std::mutex does not protect any data at all. A mutex works like this:
When you try to lock a mutex, you check whether it is already locked; if it is, you wait until it is unlocked.
When you're finished using the mutex, you unlock it; otherwise threads that are waiting will wait forever.
How does that protect things? Consider the following:
#include <iostream>
#include <future>
#include <vector>

struct example {
    static int shared_variable;

    static void incr_shared()
    {
        for (int i = 0; i < 100000; i++)
        {
            shared_variable++;
        }
    }
};

int example::shared_variable = 0;

int main() {
    std::vector<std::future<void>> handles;
    handles.reserve(10000);
    for (int i = 0; i < 10000; i++) {
        handles.push_back(std::async(std::launch::async, example::incr_shared));
    }
    for (auto& handle : handles) handle.wait();
    std::cout << example::shared_variable << std::endl;
}
You might expect it to print 1000000000, but you have no guarantee of that. We should add a mutex, like this:
struct example {
    static int shared_variable;
    static std::mutex guard;

    static void incr_shared()
    {
        // the lock must be a named object; an unnamed temporary
        // std::lock_guard<std::mutex>{ guard } would unlock immediately
        std::lock_guard<std::mutex> lock{ guard };
        for (int i = 0; i < 100000; i++)
        {
            shared_variable++;
        }
    }
};
So what exactly does this do? First of all, std::lock_guard uses RAII to call mutex.lock() when it's created and mutex.unlock() when it's destroyed; the latter happens when it leaves scope (here, when the function exits). So in this case only one thread can be executing the for loop at a time, because as soon as a thread passes the lock_guard it holds the lock, and we saw before that no other thread can hold it at the same time. Therefore this loop is now safe. Note that we could also put the lock_guard inside the loop, but that might make your program slow (locking and unlocking is relatively expensive).
So in conclusion: a mutex protects blocks of code, in our example the for loop, not the variable itself. If you want variable protection, consider taking a look at std::atomic. The following example, for instance, is again unsafe, because decr_shared can be called simultaneously from any thread.
struct example {
    static int shared_variable;
    static std::mutex guard;

    static void decr_shared() { shared_variable--; } // unprotected!

    static void incr_shared()
    {
        std::lock_guard<std::mutex> lock{ guard };
        for (int i = 0; i < 100000; i++)
        {
            shared_variable++;
        }
    }
};
This however is again safe, because now the variable itself is protected, in any code that uses it.
struct example {
    static std::atomic_int shared_variable;

    static void decr_shared() { shared_variable--; }

    static void incr_shared()
    {
        for (int i = 0; i < 100000; i++)
        {
            shared_variable++;
        }
    }
};

std::atomic_int example::shared_variable{0};
A mutex doesn't inherently protect any specific variables... instead, the programmer needs to realise that they have some group of 1 or more variables that several threads may attempt to use, then use a mutex so that only one of those threads can be running such variable-using/changing code at any point in time.
Note especially that you're only protected from other threads' code accessing/modifying those variables if their code locks the same mutex during the variable access. A mutex used by only one thread protects nothing.
A mutex is used to synchronize access to a resource. Say you have some data, say an int, on which you are going to do read and write operations using a getter and a setter. Both the getter and the setter will use the same mutex to synchronize the read/write operations.
Both of these functions will lock the mutex at the beginning and unlock it before they return. You can use a scoped_lock that will automatically unlock in its destructor:
void setter(value_type v) {
    boost::mutex::scoped_lock lock(mutex);
    value = v;
}

value_type getter() const {
    boost::mutex::scoped_lock lock(mutex);
    return value;
}
Imagine you sit at a table with your friends and a delicious cake (the resources you want to guard, e.g. some integer a) in the middle. In addition you have a single tennis ball (this is our mutex). Now, only a single person can have the ball (lock the mutex using a lock_guard or similar mechanisms), but everyone can eat the cake (access the integer a).
As a group you can decide to set up a rule that only whoever has the ball may eat from the cake (only the person who has locked the mutex may access a). This person may relinquish the ball by putting it on the table (unlock the mutex), so another person can grab it (lock the mutex). This way you ensure that no one stabs another person with the fork, while frantically eating the cake.
Setting up and upholding a rule like described in the last paragraph is your job as a programmer. There is no inherent connection between a mutex and some resource (e.g. some integer a).

What could cause a deadlock of a single write/multiple read lock?

I have a class instance that is used by several other classes in other threads to communicate.
This class uses a slim reader/writer lock (WinAPI's SRWLOCK) as a synchronization object and a couple of RAII helper classes to actually lock/unlock the thing:
static unsigned int readCounter = 0;

class CReadLock
{
public:
    CReadLock(SRWLOCK& Lock) : m_Lock(Lock)
    {
        InterlockedIncrement(&readCounter);
        AcquireSRWLockShared(&m_Lock);
    }
    ~CReadLock()
    {
        ReleaseSRWLockShared(&m_Lock);
        InterlockedDecrement(&readCounter);
    }
private:
    SRWLOCK& m_Lock;
};

class CWriteLock
{
public:
    CWriteLock(SRWLOCK& Lock) : m_Lock(Lock) { AcquireSRWLockExclusive(&m_Lock); }
    ~CWriteLock() { ReleaseSRWLockExclusive(&m_Lock); }
private:
    SRWLOCK& m_Lock;
};
The problem is the whole thing deadlocks all the time. When I pause the deadlocked program, I see:
one thread stuck in AcquireSRWLockExclusive();
two threads stuck in AcquireSRWLockShared();
readCounter global is set to 3.
The way I see it, the only way for this to happen is if a CReadLock instance's destructor hasn't been called somewhere, so the lock is held perpetually. However, the only way for that to happen (as far as I know) is if an exception has been thrown. It wasn't. I checked.
What might be the problem? How should I go about fixing (or at least locating the reason of) this thing?
Are you using the read lock recursively?
void foo()
{
    CReadLock rl(m_lock);
    ...
    bar();
}

void bar()
{
    CReadLock rl(m_lock);
    ...
}

void baz()
{
    CWriteLock wl(m_lock);
    ...
}
If foo() and baz() are called simultaneously you may get a deadlock:
1. (Thread A) foo locks.
2. (Thread B) baz asks to create a write lock; now all new read locks will block until the existing ones are released, so it waits.
3. (Thread A) bar tries to lock and waits because there is a pending write lock.
The fact that you have two threads stuck on the read lock while the read counter is 3 most likely shows that you have recursion in one of the locks, i.e. one thread has tried to acquire the read lock twice.
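One common way out, sketched here under the assumption that bar() can be split into a locking wrapper and an unlocked worker (bar_unlocked is a made-up name):
void bar_unlocked()
{
    // ... the actual work; callers must already hold the read lock ...
}

void bar()
{
    CReadLock rl(m_lock);
    bar_unlocked();
}

void foo()
{
    CReadLock rl(m_lock);
    // ...
    bar_unlocked(); // no second acquisition, so no deadlock against a writer
}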
one thread stuck in AcquireSRWLockExclusive();
two threads stuck in AcquireSRWLockShared();
readCounter global is set to 3.
Well, as far as I can read from it, you have one thread currently holding the read lock, one writer thread waiting for that read lock to be released, and two reader threads waiting for that writer thread to get and release the lock.
In other words, you have one dangling read lock which has not been destroyed, as you say yourself. Add debug prints to the destructor and constructor.
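A minimal sketch of that instrumentation, reusing the CReadLock wrapper from the question (the printf wording is just an illustration):
#include <windows.h>
#include <cstdio>

class CReadLock
{
public:
    CReadLock(SRWLOCK& Lock) : m_Lock(Lock)
    {
        InterlockedIncrement(&readCounter);
        printf("thread %lu: acquiring read lock\n", GetCurrentThreadId());
        AcquireSRWLockShared(&m_Lock);
    }
    ~CReadLock()
    {
        printf("thread %lu: releasing read lock\n", GetCurrentThreadId());
        ReleaseSRWLockShared(&m_Lock);
        InterlockedDecrement(&readCounter);
    }
private:
    SRWLOCK& m_Lock;
};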