Difference between mutual exclusion and synchronization? - concurrency

What is the difference between the above two?
This question came to my mind because I found that:
Monitors and locks provide mutual exclusion
Semaphores and condition variables provide synchronization
Is this true?
Also while searching I found this article
Any clarifications please.

Mutual exclusion means that only a single thread should be able to access the shared resource at any given point in time. This avoids race conditions between threads acquiring the resource. Monitors and locks provide the functionality to do so.
Synchronization means that you synchronize/order the access of multiple threads to the shared resource.
Consider the example:
If you have two threads, Thread 1 & Thread 2.
Thread 1 and Thread 2 execute in parallel, but before Thread 1 can execute, say, a statement A in its sequence, Thread 2 must execute a statement B in its sequence. What you need here is synchronization. A semaphore provides that: you put a semaphore wait before statement A in Thread 1, and you post to the semaphore after statement B in Thread 2.
This ensures the synchronization you need.
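Here is a minimal sketch of that pattern, assuming a C++20 compiler (std::binary_semaphore); the print statements stand in for A and B:

#include <iostream>
#include <semaphore>
#include <thread>

std::binary_semaphore b_done{0};        // starts at 0, so statement A must wait

int main() {
    std::thread t1([] {
        b_done.acquire();               // "wait" before statement A
        std::cout << "A (runs only after B)\n";
    });
    std::thread t2([] {
        std::cout << "B\n";             // statement B
        b_done.release();               // "post" after statement B
    });
    t1.join();
    t2.join();
}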

The best way to understand the difference is with the help of an example. Below is a program that solves the classical producer-consumer problem via semaphores. To provide mutual exclusion we generally use a binary semaphore or mutex, and to provide synchronization we use a counting semaphore.
BufferSize = 3;

semaphore mutex = 1;              // used for mutual exclusion
semaphore empty = BufferSize;     // used for synchronization
semaphore full = 0;               // used for synchronization

Producer()
{
    int widget;

    while (TRUE) {                // loop forever
        make_new(widget);         // create a new widget to put in the buffer
        down(&empty);             // decrement the empty semaphore
        down(&mutex);             // enter critical section
        put_item(widget);         // put widget in buffer
        up(&mutex);               // leave critical section
        up(&full);                // increment the full semaphore
    }
}

Consumer()
{
    int widget;

    while (TRUE) {                // loop forever
        down(&full);              // decrement the full semaphore
        down(&mutex);             // enter critical section
        remove_item(widget);      // take a widget from the buffer
        up(&mutex);               // leave critical section
        consume_item(widget);     // consume the item
    }
}
In the above code the mutex semaphore provides mutual exclusion (it allows only one thread at a time to access the critical section), whereas the full and empty semaphores are used for synchronization (to arbitrate access to the shared resource among the various threads).
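For comparison, the same pattern can be written with C++20's standard semaphores. This is a hedged sketch; the int widgets, the std::queue buffer, and the item count of 10 are illustrative choices, not part of the original:

#include <iostream>
#include <mutex>
#include <queue>
#include <semaphore>
#include <thread>

constexpr int BufferSize = 3;

std::counting_semaphore<BufferSize> empty_slots{BufferSize}; // synchronization
std::counting_semaphore<BufferSize> full_slots{0};           // synchronization
std::mutex buffer_mutex;                                     // mutual exclusion
std::queue<int> buffer;

void producer() {
    for (int widget = 0; widget < 10; ++widget) {
        empty_slots.acquire();                   // wait for a free slot
        {
            std::lock_guard<std::mutex> lock(buffer_mutex); // critical section
            buffer.push(widget);
        }
        full_slots.release();                    // signal that an item exists
    }
}

void consumer() {
    for (int i = 0; i < 10; ++i) {
        full_slots.acquire();                    // wait for an item
        int widget;
        {
            std::lock_guard<std::mutex> lock(buffer_mutex); // critical section
            widget = buffer.front();
            buffer.pop();
        }
        std::cout << "consumed " << widget << '\n';  // consume outside the lock
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}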


Readers-writers locks in C++

I have been looking to implement a solution to the readers-writers problem using the threading/synchronization constructs introduced since C++11.
I ran into this question; the most-voted answer has this code:
Reader Thread
// --- read code
rw_mtx.lock(); // will block if there is a write in progress
read_count += 1; // announce intention to read
rw_mtx.unlock();
cell_value = data_base[cell_number];
rw_mtx.lock();
read_count -= 1; // announce that this read has finished
if (read_count == 0) rw_write_q.notify_one();
rw_mtx.unlock();
Writer Thread
// --- write code
std::unique_lock<std::mutex> rw_lock(rw_mtx);
write_count += 1;
rw_write_q.wait(rw_lock, []{return read_count == 0;});
data_base[cell_number] = cell_value;
write_count -= 1;
if (write_count > 0) rw_write_q.notify_one();
In the writer thread, before writing to data_base[cell_number], shouldn't there be a memory barrier/fence to synchronize access to that shared memory? (Same for the reader thread.)
If you agree with the above (yay!), how can this be achieved? Looking to improve my understanding here.
Thanks for your help!
Memory barriers in high level programming languages are considered intrinsic to the behavior of the locks provided by the language. From Wikipedia's Memory Barrier page:
Multithreaded programs usually use synchronization primitives provided by a high-level programming environment, such as Java and .NET Framework, or an application programming interface (API) such as POSIX Threads or Windows API. ... In such environments explicit use of memory barriers is not generally necessary.
If you dig into the source code of pthread_mutex_lock, for example, you will see reliance on futex and atomic exchange functions, which would use a memory barrier.
Your comments seem to indicate that you do not understand why the code sample you pulled from the answer implements a readers-writer lock.
As mentioned in the answer you cited, the code you show has a fairness issue, in that a waiting writer may get starved out by a constant stream of readers. However, if we ignore that issue for now, let us first agree that:
1. No writer will enter the critical section if there is already at least one reader in the critical section.
This is because the condition variable waits for the reader count to reach zero. A condition-variable wait releases the mutex while the condition is false, and re-acquires the mutex when it is signaled. Upon re-acquiring the mutex, it re-tests the condition, and it keeps its hold on the mutex if the condition is now true.
2. When there are no readers, only one writer will enter the critical section.
The critical section for the writer is both the reader-count state of the readers-writer lock and the section of code that requires write-lock protection.
Since the mutex is held when the condition is true, and the lock provides exclusive access, there is only one writer in the critical section which will be writing a new value into the array.
Upon completing the critical section, a new signal is raised on the condition variable to wake up any other waiters and the governing mutex is then released. Since the mutex provides exclusive access, upon release, only one thread will be allowed to acquire the mutex, whether it is a pending writer or reader.
3. Multiple readers may enter the read critical section.
The critical section for a reader is treated differently than the writer. For the reader, we assume the state of the array is synchronized with the most recent write when the lock is acquired. But, reads will not alter the state of the array. So, the acquired mutex critical section is the reader count. Meanwhile, the readers-writer lock (implemented using the mutex and condition variable) critical section includes the part of the code that requires read access to the array.
So, on entering a critical section, the acquired mutex is used to increment the reader count. Once the reader count is non-zero, the readers-writer lock is now being held by a reader and will cause writers to wait on the condition variable. Once the reader count is incremented, the acquired mutex can be released.
Since the mutex was released, a different reader thread can now acquire the mutex and also increment the reader count, and then release the mutex. This allows multiple readers to enter the read critical section. The readers-writer lock remains held for reading since the reader count is positive. The mutex is released to allow other readers to enter.
Upon completing the read critical section, the mutex is acquired to decrement the reader count. If the count reaches zero, a signal on the condition variable is raised to wake up any pending writers. Then, the mutex is released.
If you looked towards the end of the answer you cited, you would have noticed a mention that C++17 has introduced a much nicer way to implement readers-writer locks. That would be with a shared_mutex.
#include <shared_mutex>
#include <vector>

class DataBase {
    // ...
    mutable std::shared_mutex rwlock_;
    std::vector<ElementType> data_base_;
    // ...
    ElementType reader (int n) const {
        std::shared_lock lock(rwlock_);   // shared: many readers may hold it
        return data_base_[n];
    }
    // ...
    void writer (int n, ElementType v) {
        std::unique_lock lock(rwlock_);   // exclusive: one writer, no readers
        data_base_[n] = v;
    }
    // ...
};
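For illustration, hypothetical usage of that class, assuming the elided parts define ElementType (taken to be int here), size data_base_, and make the two methods public:

#include <thread>

DataBase db;
std::thread w ([&db] { db.writer(0, 42); });
std::thread r1([&db] { auto v = db.reader(0); (void)v; }); // readers overlap
std::thread r2([&db] { auto v = db.reader(0); (void)v; });
w.join(); r1.join(); r2.join();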

Cross-thread visibility of changes to std::vector synchronized only with Win32 events

Suppose I have a std::vector<Item> class member variable where Item is some class with getters and setters.
It is created in one thread (#1) but is filled from the other thread (#2) with push_backs. In the end it is read in the thread #1. The access to it is synchronized only with Windows event objects. The event is set to a signaled state when the vector is filled up.
Should I beware of cross-thread visibility issues (getting stale values) in this scenario? If yes, how could these issues be prevented?
Microsoft says waiting on event objects is enough.
According to MSDN:
The following synchronization functions use the appropriate barriers to ensure memory ordering:
Functions that enter or leave critical sections
Functions that signal synchronization objects
Wait functions
Interlocked functions
This means if Thread #1 sees the side effect of the event object signaled, it must see the side effect of Thread #2's modification of the vector.
If thread #1 only reads after #2 signals, and #2 doesn't write any new members after it signals, then you have nothing to worry about.
Otherwise, if it's possible for reads and writes to happen at the same time, you don't have visibility problems, but you have synchronization problems. std::vector is not a threadsafe data structure, so it can be corrupted if two threads access it at once. Either switch to a different, thread-safe data structure, or surround your vector access with exclusive locks. For standard solutions, look at std::mutex.
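To make that concrete, here is a hedged Win32 sketch of the safe pattern described above (thread #1 reads only after the event is signaled). Error handling is omitted and the names are illustrative:

#include <windows.h>
#include <vector>

std::vector<int> data;   // created by thread #1's code, filled by thread #2
HANDLE filled;           // manual-reset event, initially unsignaled

DWORD WINAPI FillThread(LPVOID)
{
    for (int i = 0; i < 10; ++i)
        data.push_back(i);   // fill the vector...
    SetEvent(filled);        // ...then signal; the memory barrier is implied here
    return 0;
}

int main()
{
    filled = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE t = CreateThread(NULL, 0, FillThread, NULL, 0, NULL);
    WaitForSingleObject(filled, INFINITE); // implied barrier: writes now visible
    // safe to read data here; thread #2 writes no more members after SetEvent
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    CloseHandle(filled);
    return 0;
}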
I would suggest using standard synchronization primitives when possible.
You need to make the reading thread wait until the vector is filled. Note that a mutex alone is not quite enough, because nothing guarantees that the filling thread acquires it before the reading thread does; pairing the mutex with a condition variable and a flag guarantees the ordering:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

using Item = int;
std::vector<Item> items;
std::mutex mutex;
std::condition_variable cv;
bool filled = false;

// #1: fill the vector, then signal
auto t1 = std::thread([&] {
    {
        std::lock_guard<std::mutex> lock(mutex);
        // fill in items...
        // ...
        filled = true;
    }
    cv.notify_one();
});

// #2: wait until the vector is filled, then read
auto t2 = std::thread([&] {
    std::unique_lock<std::mutex> lock(mutex);
    cv.wait(lock, [&] { return filled; });
    // read...
});

// wait...
t1.join();
t2.join();

pthreads: thread starvation caused by quick re-locking

I have two threads: one works in a tight loop, and the other occasionally needs to synchronize with the first:
// thread 1
while (1)
{
    lock(work);
    // perform work
    unlock(work);
}

// thread 2
while (1)
{
    // unrelated work that takes a while
    lock(work);
    // synchronizing step
    unlock(work);
}
My intention is that thread 2 can, by taking the lock, effectively pause thread 1 and perform the necessary synchronization. Thread 1 can also offer to pause, by unlocking, and if thread 2 is not waiting on lock, re-lock and return to work.
The problem I have encountered is that mutexes are not fair, so thread 1 quickly re-locks the mutex and starves thread 2. I have attempted to use pthread_yield, and so far it seems to run okay, but I am not sure it will work for all systems / number of cores. Is there a way to guarantee that thread 1 will always yield to thread 2, even on multi-core systems?
What is the most effective way of handling this synchronization process?
You can build a FIFO "ticket lock" on top of pthreads mutexes, along these lines:
#include <pthread.h>

typedef struct ticket_lock {
    pthread_cond_t cond;
    pthread_mutex_t mutex;
    unsigned long queue_head, queue_tail;
} ticket_lock_t;

#define TICKET_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER }

void ticket_lock(ticket_lock_t *ticket)
{
    unsigned long queue_me;

    pthread_mutex_lock(&ticket->mutex);
    queue_me = ticket->queue_tail++;   /* take the next ticket number */
    while (queue_me != ticket->queue_head)
    {
        pthread_cond_wait(&ticket->cond, &ticket->mutex); /* wait for our turn */
    }
    pthread_mutex_unlock(&ticket->mutex);
}

void ticket_unlock(ticket_lock_t *ticket)
{
    pthread_mutex_lock(&ticket->mutex);
    ticket->queue_head++;              /* hand the lock to the next ticket */
    pthread_cond_broadcast(&ticket->cond);
    pthread_mutex_unlock(&ticket->mutex);
}
Under this kind of scheme, no low-level pthreads mutex is held while a thread is within the ticket-lock-protected critical section, allowing other threads to join the queue.
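For illustration, a hypothetical sketch of the asker's two loops on top of this ticket lock (the function names are made up):

ticket_lock_t work = TICKET_LOCK_INITIALIZER;

/* thread 1: tight worker loop */
void *worker(void *arg)
{
    while (1) {
        ticket_lock(&work);
        /* perform work */
        ticket_unlock(&work);   /* a ticket thread 2 took earlier now wins */
    }
    return NULL;
}

/* thread 2: occasional synchronization */
void *synchronizer(void *arg)
{
    while (1) {
        /* unrelated work that takes a while */
        ticket_lock(&work);     /* FIFO order: cannot be starved by thread 1 */
        /* synchronizing step */
        ticket_unlock(&work);
    }
    return NULL;
}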
In your case it may be better to use a condition variable to notify the second thread when it needs to wake up and perform the required operations.
pthread offers a notion of thread priority in its API. When two threads are competing over a mutex, the scheduling policy determines which one will get it. The function pthread_attr_setschedpolicy lets you set that, and pthread_attr_getschedpolicy permits retrieving the information.
Now the bad news:
When only two threads are locking / unlocking a mutex, I can’t see any sort of competition; the first to run the atomic instruction takes it, and the other blocks. I am not sure whether this attribute applies here.
The function can take different parameters (SCHED_FIFO, SCHED_RR, SCHED_OTHER and SCHED_SPORADIC), but in this question it has been answered that only SCHED_OTHER is supported on Linux.
So I would give it a shot if I were you, but not expect too much. pthread_yield seems more promising to me. More information available here.
The ticket lock above looks like the best option. However, to help ensure your pthread_yield approach works, you could have a bool waiting, which is set and reset by thread 2; thread 1 yields as long as waiting is set.
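A minimal sketch of that suggestion, assuming C++ with std::atomic and a std::mutex standing in for the question's lock (the names are illustrative, and yield still gives no hard guarantee):

#include <atomic>
#include <mutex>
#include <thread>

std::atomic<bool> waiting{false};  // raised by thread 2 while it wants the lock
std::mutex work;                   // stands in for the question's lock

void thread2_sync_step() {
    waiting.store(true);           // ask thread 1 to back off
    work.lock();
    waiting.store(false);
    // synchronizing step
    work.unlock();
}

void thread1_iteration() {
    while (waiting.load())         // give thread 2 a chance to take the lock
        std::this_thread::yield();
    work.lock();
    // perform work
    work.unlock();
}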
Here's a simple solution which will work for your case (two threads). If you're using std::mutex then this class is a drop-in replacement. Change your mutex to this type and you are guaranteed that if one thread holds the lock and the other is waiting on it, once the first thread unlocks, the second thread will grab the lock before the first thread can lock it again.
If more than two threads happen to use the mutex simultaneously it will still function but there are no guarantees on fairness.
If you're using plain pthread_mutex_t you can easily change your locking code according to this example (unlock remains unchanged).
#include <mutex>

// Behaves the same as std::mutex but guarantees fairness as long as
// up to two threads are using (holding/waiting on) it.
// When one thread unlocks the mutex while another is waiting on it,
// the other is guaranteed to run before the first thread can lock it again.
class FairDualMutex : public std::mutex {
public:
    void lock() {
        _fairness_mutex.lock();
        std::mutex::lock();
        _fairness_mutex.unlock();
    }
private:
    std::mutex _fairness_mutex;
};
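For plain pthread_mutex_t, a hedged sketch of the same trick might look like this (the names are illustrative; as noted above, unlock stays an ordinary pthread_mutex_unlock on the work mutex):

#include <pthread.h>

pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t fairness_mutex = PTHREAD_MUTEX_INITIALIZER;

void fair_lock(void)
{
    pthread_mutex_lock(&fairness_mutex);  /* queue up behind any waiter */
    pthread_mutex_lock(&work_mutex);
    pthread_mutex_unlock(&fairness_mutex);
}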

making sure threads are created and waiting before broadcasting

I have 10 threads that are supposed to be waiting for a signal.
Until now I've simply used sleep(3), and that has been working fine, but is there a more secure way to make sure that all threads have been created and are indeed waiting?
I made the following construction where, in the critical region before the wait, I increment a counter telling how many threads are waiting. But then I need an additional mutex and condition variable for signalling back to main that all threads have been created, which seems overly complex.
Am I missing some basic thread design pattern?
Thanks
edit: fixed types
edit: clarifying information below
A barrier won't work in this case, because I'm not interested in letting my threads wait until all threads are ready. This already happens with the 'cond_wait'.
I'm interested in letting the main function know, when all threads are ready and waiting.
//mutex and condition variable to signal from main to threads to do work
mutex_t mutex_for_cond;
cond_t cond;
//mutex and condition variable to signal back from threads to main that threads are ready
mutex_t mutex_for_back_cond;
cond_t back_cond;
int nThreads = 0; //thread-safe by using mutex_for_cond

void *thread() {
    mutex_lock(mutex_for_cond);
    nThreads++;
    if (nThreads == 10) {
        mutex_lock(mutex_for_back_cond);
        cond_signal(back_cond);
        mutex_unlock(mutex_for_back_cond);
    }
    while (1) {
        cond_wait(cond, mutex_for_cond);
        if (spurious)
            continue;
        else
            break;
    }
    mutex_unlock(mutex_for_cond);
    //do work on non-critical-region data
}

int main() {
    for (int i = 0; i < 10; i++)
        create_threads;
    while (1) {
        mutex_lock(mutex_for_back_cond);
        cond_wait(back_cond, mutex_for_back_cond);
        mutex_unlock(mutex_for_back_cond);
        mutex_lock(mutex_for_cond);
        if (nThreads == 10) {
            break;
        } else {
            //spurious wakeup
            mutex_unlock(mutex_for_cond);
        }
    }
    //now all threads are waiting
    //mutex_for_cond is still locked, so broadcast
    cond_broadcast(cond); //was a typo here
}
Am I missing some basic thread design pattern?
Yes. For every condition, there should be a state variable that is protected by the accompanying mutex. A signal on the condition variable only indicates that this variable may have changed.
You check the variable in a loop, waiting on the condition:
mutex_lock(mutex_for_back_cond);
while (ready_threads < 10)
    cond_wait(back_cond, mutex_for_back_cond);
mutex_unlock(mutex_for_back_cond);
Additionally, what you are trying to build is a thread barrier. It is often pre-implemented in threading libraries, like pthread_barrier_wait.
Sensible threading APIs have a barrier construct which does precisely this.
For example, with boost::thread, you would create a barrier like this:
boost::barrier bar(10); // a barrier for 10 threads
and then each thread would wait on the barrier:
bar.wait();
the barrier waits until the specified number of threads are waiting for it, and then releases them all at once. In other words, once all ten threads have been created and are ready, it'll allow them all to proceed.
That's the simple, and sane, way of doing it. Threading APIs which do not have a barrier construct require you to do it the hard way, not unlike what you're doing now.
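For illustration, a hedged sketch that also covers the asker's follow-up (letting main itself know when everyone is ready) by counting main into the barrier; the 11-thread sizing is an assumption, not from the answer:

#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>

boost::barrier bar(11);              // 10 worker threads + the main thread

void worker()
{
    bar.wait();                      // rendezvous with main and the others
    // ... do the real work ...
}

int main()
{
    boost::thread_group workers;
    for (int i = 0; i < 10; ++i)
        workers.create_thread(worker);
    bar.wait();                      // returns once all 10 workers are waiting
    // at this point every thread has been created and reached the barrier
    workers.join_all();
}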
You should associate some variable that contains the 'event state' with the condition variable. The main thread sets the event state variable appropriately just before issuing the broadcast. The threads that are interested in the event check the event state variable regardless of whether they've blocked on the condition variable or not.
With this pattern, the main thread doesn't need to know about the precise state of the threads - it just sets the event when it needs to then broadcasts the condition. Any waiting threads will be unblocked, and any threads not waiting yet will never block on the condition variable because they'll note that the event has already occurred before waiting on the condition. Something like the following pseudocode:
//mutex and condition variable to signal from main to threads to do work
pthread_mutex_t mutex_for_cond;
pthread_cond_t cond;
int event_occurred = 0;

void *thread()
{
    pthread_mutex_lock(&mutex_for_cond);
    while (!event_occurred) {
        pthread_cond_wait(&cond, &mutex_for_cond);
    }
    pthread_mutex_unlock(&mutex_for_cond);
    //do work on non-critical-region data
}

int main()
{
    pthread_mutex_init(&mutex_for_cond, ...);
    pthread_cond_init(&cond, ...);
    for (int i = 0; i < 10; i++)
        create_threads(...);
    // do whatever needs to be done to set up the work for the threads
    // now let the threads know they can do their work (whether or not
    // they've gotten to the "wait point" yet)
    pthread_mutex_lock(&mutex_for_cond);
    event_occurred = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&mutex_for_cond);
}

searching concurrent linked list

I have a concurrent linked list. I need to prioritize finds on this list, so if a thread begins iterating over the list and subsequent insert/delete requests show up I'd want to queue those but if there are find requests from other threads I'd let those happen. What's the best way to implement this situation?
EDIT: I don't want to make copies of the list. Too expensive. Mom pays for my hardware.
Sounds like you are looking for a readers-writer lock: a lock that lets many threads read the data structure, but when one thread has write access, all others are locked out.
Boost offers an implementation.
Are you on Windows ?
If so, you can use synchronization objects like Events or Mutexes.
For insert and delete, you can lock the mutex at the beginning of those functions and release the mutex at the end of the function.
The steps would be as follows.
Create event object. http://msdn.microsoft.com/en-us/library/ms682655(VS.85).aspx
Lock the Event object using the WaitForSingleObject function at the start of the insert/delete function.
http://msdn.microsoft.com/en-us/library/ms687032(VS.85).aspx
Use SetEvent to unlock the event object at the end of the insert/delete function, so that a thread waiting on this event object gets its turn to do an insertion/deletion.
http://msdn.microsoft.com/en-us/library/ms686211(VS.85).aspx
Reads do not need to acquire the lock, so there is no need for an Event or Mutex when reading; multiple threads can read concurrently from a shared buffer, as long as no thread is modifying it at the same time.
You can find general info on Reader-writer lock at
http://en.wikipedia.org/wiki/Readers-writer_lock
You can get example program at
http://msdn.microsoft.com/en-us/library/ms686915(v=VS.85).aspx
Sounds like a vanilla reader-writer lock is not quite what you want. As soon as you try to acquire the writer side of the lock, further readers will block until the writer completes, whereas you said you wanted additional threads doing reads to gain access even if there are inserts pending.
I'm not sure if what you want is safe. If there are enough reads going on, your insert/deletes could block forever.
If you really want this you can easily build it yourself, just as you can easily build a reader/writer lock on top of a standard mutex.
Note that this code is probably broken but maybe it gives you a general idea.
class UnfairReaderWriterMutex {
    Mutex mutex;
    CondVar condition;
    int readers;   // if positive, reader count; if -1, writer lock is held
public:
    UnfairReaderWriterMutex() : readers(0) { }

    void ReaderLock() {
        mutex.Lock();
        while (readers < 0) {      // wait until no writer holds the lock
            condition.Wait();
        }
        ++readers;
        mutex.Unlock();
    }

    void ReaderUnlock() {
        mutex.Lock();
        assert(readers > 0);
        --readers;
        condition.Notify();        // only need to wake up one writer
        mutex.Unlock();
    }

    void WriterLock() {
        mutex.Lock();
        while (readers != 0) {     // wait until no readers and no other writer
            condition.Wait();
        }
        readers = -1;
        mutex.Unlock();
    }

    void WriterUnlock() {
        mutex.Lock();
        assert(readers == -1);
        readers = 0;
        condition.NotifyAll();     // there may be multiple readers waiting
        mutex.Unlock();
    }
};
A typical reader/writer lock works just like this except that there's a flag stopping readers from acquiring the lock while there is a writer waiting.
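For completeness, a hedged C++ sketch of that variant: a waiting-writer count is the flag that stops new readers. The class and member names are illustrative, not from the answer above:

#include <cassert>
#include <condition_variable>
#include <mutex>

class WriterPriorityMutex {
    std::mutex mutex;
    std::condition_variable condition;
    int readers = 0;          // reader count; -1 means the writer lock is held
    int writers_waiting = 0;  // the flag that makes new readers back off
public:
    void ReaderLock() {
        std::unique_lock<std::mutex> lock(mutex);
        // New readers wait while a writer is active *or* merely queued.
        condition.wait(lock, [this] { return readers >= 0 && writers_waiting == 0; });
        ++readers;
    }
    void ReaderUnlock() {
        std::lock_guard<std::mutex> lock(mutex);
        assert(readers > 0);
        if (--readers == 0)
            condition.notify_all();   // wake the queued writers
    }
    void WriterLock() {
        std::unique_lock<std::mutex> lock(mutex);
        ++writers_waiting;
        condition.wait(lock, [this] { return readers == 0; });
        --writers_waiting;
        readers = -1;
    }
    void WriterUnlock() {
        std::lock_guard<std::mutex> lock(mutex);
        assert(readers == -1);
        readers = 0;
        condition.notify_all();       // readers and writers may both be waiting
    }
};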