I was going through the implementation of the multiple readers/writer lock mentioned here:
MultiplereadersWriterLock
The code for EnterReader is:
void EnterReader(void)
{
    EnterCriticalSection(&m_csWrite);
    EnterCriticalSection(&m_csReaderCount);
    if (++m_cReaders == 1)
        ResetEvent(m_hevReadersCleared);
    LeaveCriticalSection(&m_csReaderCount);
    LeaveCriticalSection(&m_csWrite);
}
As I understand it, this lock solves the problem of letting multiple threads read a shared resource. That means multiple threads can call the EnterReader function and then continue reading without blocking each other.
Example
Thread T1 calls EnterReader()
It acquires m_csWrite
It acquires m_csReaderCount
T1 is preempted and thread T2 starts running
Thread T2 calls EnterReader()
Since m_csWrite is already held by T1, how can thread T2 perform its read?
T2 cannot proceed because m_csWrite is already held, which means the reader thread T2 is blocked waiting for T1 to finish.
I am confused about how this implementation solves the problem of multiple readers accessing the shared resource.
EnterReader just does a quick lock to increment the count of the readers. Notice it releases the locks before returning. When reading, m_csWrite doesn't stay locked between EnterReader and LeaveReader calls like it does between EnterWriter and LeaveWriter.
Usage example:
lock->EnterReader();
/*Do the reading*/ // No critical sections held here; the nonzero reader count (tracked under m_csReaderCount and signalled via m_hevReadersCleared) is what keeps writers out.
lock->LeaveReader();
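For reference, the companion methods in that kind of implementation look roughly like this. This is a reconstruction consistent with the EnterReader shown above, not a verbatim quote of the linked page:

void LeaveReader(void)
{
    EnterCriticalSection(&m_csReaderCount);
    if (--m_cReaders == 0)
        SetEvent(m_hevReadersCleared);      // last reader out: let a writer in
    LeaveCriticalSection(&m_csReaderCount);
}

void EnterWriter(void)
{
    EnterCriticalSection(&m_csWrite);                   // excludes other writers and new readers
    WaitForSingleObject(m_hevReadersCleared, INFINITE); // wait for current readers to drain
}

void LeaveWriter(void)
{
    LeaveCriticalSection(&m_csWrite);
}

So a writer holds m_csWrite for the whole write, which is why a reader arriving during a write blocks inside EnterReader until the writer is done. Readers arriving while only other readers are active sail straight through, because nobody keeps m_csWrite locked after EnterReader returns.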
Related
I am making my own mutex to synchronize my threads and I am having the following issue:
The same thread seems to re-acquire the mutex right after it releases it
What I have tried:
Telling it to yield execution to another thread (SwitchToThread, Sleep, YieldProcessor)
Increasing delay between loops (Up to 1 second)
Here is how it works:
I have a structure with a state value:
volatile unsigned int state;
When I want to acquire the mutex, I check the state until it has been released (open), then acquire (close) it and break out of the infinite loop and do whatever needs to be done:
unsigned int previous = 0;
for (;;)
{
    previous = InterlockedExchangeAdd(&mtx->state, 0);
    if (STATE_OPEN == previous)
    {
        InterlockedExchange(&mtx->state, STATE_CLOSED);
        break;
    }
    Sleep(delay);
}
Then I simply release it for the next thread to acquire it:
InterlockedExchange(&mtx->state, STATE_OPEN);
The way I am using it: I have one global volatile integer that I add 1 to in one thread and subtract 1 from in another. Increasing the delay has helped stop the number from drifting very low or very high when the loop gets stuck executing in just a single thread, but a 1+ second delay is not going to work for my other purposes.
How could I go about making sure that all of the threads get a chance to acquire the mutex and not have it get stuck in a single thread?
The mutex does exactly what it is supposed to do: it prevents multiple threads from running at the same time.
To stop a thread from re-acquiring the mutex, the basic solution is to not access the shared resource which is protected by the mutex. The thread probably should be doing something else.
You may also have a design problem. If you have multiple resources protected by a single mutex, you may have false contention between threads. If each resource had its own mutex, multiple threads could each work on their own resource, as sketched below.
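A minimal sketch of that per-resource idea (the resources, counters, thread routines and the use of CRITICAL_SECTION here are illustrative, not taken from your code):

#include <windows.h>

CRITICAL_SECTION g_lockA, g_lockB;      // one lock per independent resource
LONG g_counterA = 0;                    // protected by g_lockA
LONG g_counterB = 0;                    // protected by g_lockB

DWORD WINAPI IncrementA(LPVOID)
{
    for (int i = 0; i < 1000000; ++i)
    {
        EnterCriticalSection(&g_lockA); // contends only with other users of A
        ++g_counterA;
        LeaveCriticalSection(&g_lockA);
        // ...do work that does not need the lock before touching A again...
    }
    return 0;
}

DWORD WINAPI DecrementB(LPVOID)
{
    for (int i = 0; i < 1000000; ++i)
    {
        EnterCriticalSection(&g_lockB); // never blocks the thread working on A
        --g_counterB;
        LeaveCriticalSection(&g_lockB);
    }
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_lockA);
    InitializeCriticalSection(&g_lockB);
    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, IncrementA, NULL, 0, NULL);
    threads[1] = CreateThread(NULL, 0, DecrementB, NULL, 0, NULL);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    DeleteCriticalSection(&g_lockA);
    DeleteCriticalSection(&g_lockB);
    return 0;
}

If both threads really do operate on one and the same counter, some serialisation is unavoidable; the best you can do then is keep the locked region as short as possible and do the unrelated work outside it.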
I guess I'm a little unsure of how mutexes work. If a mutex gets locked after some conditional, will it only lock out threads that meet that same condition, or will it lock out all threads regardless until the mutex is unlocked?
Ex:
if (someBoolean)
    pthread_mutex_lock(& mut);
someFunction();
pthread_mutex_unlock(& mut);
Will all threads be stopped from running someFunction();, or just those threads that pass through the if-statement?
You need to call pthread_mutex_lock() in order for a thread to be locked. If you call pthread_mutex_lock() in thread A, but not in thread B, thread B does not get locked (and you have likely defeated the purpose of mutual exclusion in your code, as it's only useful if every thread follows the same locking protocol to protect your code).
The code in question has a few issues outlined below:
if (someBoolean) //if someBoolean is shared among threads, you need to lock
//access to this variable as well.
pthread_mutex_lock(& mut);
someFunction(); //now you have some threads calling someFunction() with the lock
//held, and some calling it without the lock held, which
//defeats the purpose of mutual exclusion.
pthread_mutex_unlock(& mut); //If you did not lock the *mut* above, you must not
//attempt to unlock it.
Will all threads be stopped from running someFunction();, or just those threads that pass through the if-statement?
Only the threads for which someBoolean is true will obtain the lock. Therefore, only those threads will be prevented from calling someFunction() while someone else holds the same lock.
However, in the provided code, all threads will call pthread_mutex_unlock on the mutex, regardless of whether they actually locked it. For mutexes created with default parameters this constitutes undefined behavior and must be fixed.
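A minimal corrected sketch, assuming someBoolean is itself shared state that this same mutex is meant to protect:

pthread_mutex_lock(&mut);
if (someBoolean)            /* the shared flag is now read under the lock      */
    someFunction();         /* every thread that reaches here holds the mutex  */
pthread_mutex_unlock(&mut); /* unlocked by the same thread that locked it      */

If some callers genuinely must be able to run someFunction() without the lock, give them a separate code path rather than sharing one unconditional unlock.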
I have data coupled with a Lock = boost::shared_mutex.
I am locking data access with
reader locks ReadLock = boost::shared_lock<Lock>
and writer locks WriteLock = boost::unique_lock<Lock>.
Obviously, lots of readers may be reading the data at a time, and only one thread may be writing to it. But here's the catch:
A single thread may hold multiple read locks on the same mutex, because it calls functions that themselves lock the data (with a ReadLock). And, as I have found, this can cause a deadlock:
Thread 1 read-locks data (Lock1)
Thread 2 waits with a write-lock (LockW)
Thread 1 spawns another read-lock (Lock2) while Lock1 is still alive
Now I get a deadlock: Lock2 is waiting for LockW to finish, LockW is waiting for Lock1, and Lock1 cannot be released because its thread is stuck on Lock2.
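A minimal sketch of the pattern that produces this (function names are illustrative):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

typedef boost::shared_mutex Lock;
typedef boost::shared_lock<Lock> ReadLock;
typedef boost::unique_lock<Lock> WriteLock;

Lock dataLock;

void helper()                  // called while the caller already holds a ReadLock
{
    ReadLock lock2(dataLock);  // Lock2: queues behind the waiting writer
    // ... read the data ...
}

void readerThread()            // Thread 1
{
    ReadLock lock1(dataLock);  // Lock1
    helper();                  // taken while Lock1 is still alive
}

void writerThread()            // Thread 2
{
    WriteLock lockW(dataLock); // LockW: waits for Lock1 to be released
    // ... modify the data ...
}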
I don't know if it's possible to change the design so that each thread only takes a single ReadLock. I believe that a system that starves writers (i.e. keeps admitting readers while a writer waits) would avoid my issue. Is there a common way to handle this case?
I'm creating a multithreaded C++ program using pthreads (C++98 standard).
I have a std::map that multiple threads will access. The access will be adding and removing elements, using find, and also accessing elements using the [] operator.
I understand that reading using the [] operator, or even modifying the elements with it is thread safe, but the rest of the operations are not.
First question: do I understand this correctly?
Some threads will just access the elements via [], while others will do some of the other operations. Obviously I need some form of thread synchronisation.
The way I see this should work is:
- While no "write" operation is being done to the map, threads should all be able to "read" from it concurrently.
- When a thread wants to "write" to the map, it should set a lock so no thread starts any "read" or "write" operation, and then it should wait until all "read" operations have completed, at which point it would perform the operation and release the locks.
- After the locks have been released, all threads should be able to read freely.
The main question is: what thread synchronisation methods can I use to achieve this behaviour?
I have read about mutexes, condition variables and semaphores, and as far as I can see they won't do exactly what I need. I'm familiar with mutexes but not with condition variables or semaphores.
The main problem I see is that I need a way of locking threads until something happens (the write operation ends) without those threads then locking anything in turn.
Also I need something like an inverted semaphore, that blocks while the counter is greater than zero and then wakes when it reaches 0 (i.e. no read operation is being done).
Thanks in advance.
P.S. It's my first post. Kindly indicate if I'm doing something wrong!
I understand that reading using the [] operator, or even modifying the elements with it is thread safe, but the rest of the operations are not.
Do I understand this correctly?
Well, what you've said isn't quite true. Concurrent readers can use [] to access existing elements, or use other const functions (like find, size()...) safely, as long as there are no simultaneous non-const operations like erase or insert mutating the map<>. Concurrent threads can modify different elements, but if one thread modifies an element you must have some synchronisation before another thread attempts to access or further modify that specific element.
When a thread wants to "write" to the map, it should set a lock so no thread starts any "read" or "write" operation, and then it should wait until all "read" operations have completed, at which point it would perform the operation and release the locks. - After the locks have been released, all threads should be able to read freely.
That's not quite the way it works... for writers to be able to 'wait until all "read" operations have completed', the reader(s) need to acquire a lock. Writers then wait for that same lock to be released, and acquire it themselves to restrict other readers or writers until they've finished their update and release it.
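If you wanted to hand-roll that handshake, a sketch built from a mutex, a condition variable and a reader counter might look like the following (all names are illustrative; in practice the ready-made reader-writer lock described below is simpler):

pthread_mutex_t gate    = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cleared = PTHREAD_COND_INITIALIZER;
int             readers = 0;              // protected by gate

void reader_enter(void)
{
    pthread_mutex_lock(&gate);            // blocks while a writer holds the gate
    ++readers;
    pthread_mutex_unlock(&gate);          // readers then run concurrently
}

void reader_leave(void)
{
    pthread_mutex_lock(&gate);
    if (--readers == 0)
        pthread_cond_signal(&cleared);    // last reader out wakes a waiting writer
    pthread_mutex_unlock(&gate);
}

void writer_enter(void)
{
    pthread_mutex_lock(&gate);            // keeps new readers and writers out
    while (readers > 0)
        pthread_cond_wait(&cleared, &gate);  // wait for in-flight readers to drain
}                                         // gate stays held for the whole write

void writer_leave(void)
{
    pthread_mutex_unlock(&gate);
}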
what thread synchronisation methods can I use to achieve this behaviour?
A mutex is indeed suitable, though you will often get higher performance from a reader-writer lock (which allows concurrent readers; some implementations also prioritise waiting writers over further readers). Related POSIX threads functions include pthread_rwlock_rdlock, pthread_rwlock_wrlock, pthread_rwlock_unlock etc.
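A minimal usage sketch (C++98, pthreads) of guarding the map with such a reader-writer lock; the names g_map, g_rwlock, read_value and write_value are illustrative:

#include <map>
#include <string>
#include <pthread.h>

std::map<int, std::string> g_map;
pthread_rwlock_t g_rwlock = PTHREAD_RWLOCK_INITIALIZER;

bool read_value(int key, std::string& out)      // many threads may run this at once
{
    pthread_rwlock_rdlock(&g_rwlock);
    std::map<int, std::string>::const_iterator it = g_map.find(key);
    bool found = (it != g_map.end());
    if (found)
        out = it->second;
    pthread_rwlock_unlock(&g_rwlock);
    return found;
}

void write_value(int key, const std::string& value)  // exclusive access
{
    pthread_rwlock_wrlock(&g_rwlock);
    g_map[key] = value;                         // insert or update under the write lock
    pthread_rwlock_unlock(&g_rwlock);
}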
To contrast the two approaches, with readers and writers using a mutex you get serialisation something like this:
THREAD ACTION
reader1 pthread_mutex_lock(the_mutex) returns having acquired lock, and
thread starts reading data
reader2 pthread_mutex_lock(the_mutex) "hangs", as blocked by reader1
writer1 pthread_mutex_lock(the_mutex) hangs, as blocked by reader1
reader1 pthread_mutex_unlock(the_mutex) -> releases lock
NOTE: some systems guarantee reader2 will unblock before writer1, some don't
reader2 blocked pthread_mutex_lock(the_mutex) returns having acquired lock,
and thread starts reading data
reader1 pthread_mutex_lock(the_mutex) hangs, as blocked by reader2
reader2 pthread_mutex_unlock(the_mutex) -> releases lock
writer1 blocked pthread_mutex_lock(the_mutex) returns having acquired lock,
and thread starts writing and/or reading data
writer1 pthread_mutex_unlock(the_mutex) -> releases lock
reader1 blocked pthread_mutex_lock(the_mutex) returns having acquired lock,
and thread starts reading data
...etc...
With a read-write lock, it might be more like this (notice the first two readers run concurrently):
THREAD ACTION
reader1 pthread_rwlock_rdlock(the_rwlock) returns having acquired lock, and
thread starts reading data
reader2 pthread_rwlock_rdlock(the_rwlock) returns having acquired lock, and
thread starts reading data
writer1 pthread_rwlock_wrlock(the_rwlock) hangs, as blocked by reader1/2
reader1 pthread_rwlock_unlock(the_rwlock) -> releases lock
reader1 pthread_rwlock_rdlock(the_rwlock) hangs, as there's a pending writer
reader2 pthread_rwlock_unlock(the_rwlock) -> releases lock
writer1 blocked pthread_rwlock_wrlock(the_rwlock) returns having acquired lock,
and thread starts writing and/or reading data
writer1 pthread_rwlock_unlock(the_rwlock) -> releases lock
reader1 blocked pthread_rwlock_rdlock(the_rwlock) returns having acquired lock,
and thread starts reading data
...etc...
On my never-ending quest to understand std::condition_variable I've run into the following. On this page it says:
void print_id (int id) {
    std::unique_lock<std::mutex> lck(mtx);
    while (!ready) cv.wait(lck);
    // ...
    std::cout << "thread " << id << '\n';
}
And after that it says this:
void go() {
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}
Now as I understand it, both of these functions will halt on the std::unique_lock line until a unique lock is acquired, that is, until no other thread holds the lock.
So say the print_id function is executed first. The unique lock will be acquired and the function will halt on the wait line.
If the go function is then executed (on a separate thread), the code there will halt on the unique lock line. Since the mutex is locked by the print_id function already.
Obviously this wouldn't work if the code was like that. But I really don't see what I'm not getting here. So please enlighten me.
What you're missing is that wait unlocks the mutex and then waits for the signal on cv.
It locks the mutex again before returning.
You could have found this out by clicking on wait on the page where you found the example:
At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.
Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called.
There's one point you've missed: calling wait() unlocks the mutex. The thread atomically releases the mutex and goes to sleep. Then, when woken by the signal, it tries to re-acquire the mutex (possibly blocking); once it acquires it, it can proceed.
Notice that it's not necessary to have the mutex locked for calling notify_*, only for wait*
To answer the question as posed, which seems necessary given the claims that you should not acquire a lock on notification for performance reasons (isn't correctness more important than performance?): the necessity to lock on "wait" and the recommendation to always lock around "notify" exist to protect the user from himself and to protect his program from data and logical races.
Without the lock in "go", the program you posted would immediately have a data race on "ready". However, even if "ready" were itself synchronized (e.g. atomic), you would have a logical race with a missed notification: without the lock in "go", the notify can occur just after the check of "ready" and just before the actual wait, and the waiting thread may then remain blocked indefinitely. Synchronization on the atomic variable by itself is not enough to prevent this. This is why helgrind warns when a notification is done without holding the lock.
There are some fringe cases where the mutex lock really is not required around the notify. In all of these cases there needs to be bidirectional synchronization beforehand, so that the producing thread knows for sure that the other thread is already waiting. IMO these cases are for experts only; I have actually seen an expert, giving a talk about multi-threading, get this wrong by assuming an atomic counter would suffice. That said, the lock around the wait is always necessary for correctness (or, at least, an operation that is atomic with the wait), and this is why the standard library enforces it and atomically unlocks the mutex on entering the wait.
POSIX condition variables are, unlike Windows events, not "idiot-proof" because they are stateless (apart from being aware of waiting threads). The recommendation to use a lock on the notify is there to protect you from the worst and most common screwups. You can build a Windows-like stateful event using a mutex + condition var + bool variable if you like, of course.
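For example, a minimal sketch of such a Windows-style manual-reset event (class and member names are illustrative):

#include <mutex>
#include <condition_variable>

class ManualResetEvent
{
public:
    ManualResetEvent() : signalled_(false) {}

    void set()
    {
        std::lock_guard<std::mutex> lock(m_);
        signalled_ = true;                   // state persists even if nobody waits yet
        cv_.notify_all();
    }

    void reset()
    {
        std::lock_guard<std::mutex> lock(m_);
        signalled_ = false;
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signalled_; });  // handles spurious wakeups
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signalled_;
};

Because the bool is part of the event's state, a set() that happens before anyone calls wait() is not lost, which is exactly the property the bare condition variable lacks.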