Lock two mutexes at the same time - C++

I'm trying to implement a multi-in, multi-out inter-thread channel class. I have three mutexes: full locks when the buffer is full, empty locks when the buffer is empty, and th locks while anyone else is modifying the buffer. My single-IO version looks like this:
operator<<(...) {
    full.lock();    // blocks when trying to push to a full buffer
    full.unlock();  // whether it was locked or not, unlock it
    th.lock();
    ...
    empty.unlock();        // the buffer won't be empty now
    if (...) full.lock();  // it might be full now
    th.unlock();
}
operator>>(...) {
    // symmetric
}
This works totally fine for single IO. But with multiple IO, when a consumer thread unlocks full, all provider threads rush in; only one obtains th, and the buffer might be full again because of that single thread, yet no full check happens anymore. I could add another full.lock(), of course, but this is endless. Is there any way to lock full and th at the same time? I did see a similar question about this, but I don't think lock ordering is the problem here.

Yes: use std::lock(full, th);. This can avoid certain deadlocks,
for example:
thread1:
    full.lock();
    th.lock();
thread2:
    th.lock();
    full.lock();
This can deadlock, but the following cannot:
thread1:
    std::lock(full, th);
thread2:
    std::lock(th, full);
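For illustration, a minimal sketch of using std::lock together with adopt_lock guards; the thread bodies and names are illustrative, not from the question:

#include <mutex>

std::mutex full, th;

void thread1_work() {
    // std::lock acquires both mutexes with a deadlock-avoidance
    // algorithm, regardless of the order they are named in.
    std::lock(full, th);
    // Adopt the already-held locks so they are released on scope exit.
    std::lock_guard<std::mutex> g1(full, std::adopt_lock);
    std::lock_guard<std::mutex> g2(th, std::adopt_lock);
    // ... critical section ...
}

void thread2_work() {
    // Reversed argument order is still safe.
    std::lock(th, full);
    std::lock_guard<std::mutex> g1(th, std::adopt_lock);
    std::lock_guard<std::mutex> g2(full, std::adopt_lock);
    // ... critical section ...
}

In C++17, the one-liner std::scoped_lock lk(full, th); does the locking and the guarding in a single step.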

No, you can't atomically lock two mutexes.
Additionally, it looks like you are locking a mutex in one thread and then unlocking it in another. That's not allowed.
I suggest switching to condition variables for this problem. Note that it's perfectly fine to have one mutex associated with multiple condition variables.
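As a concrete sketch of that approach, here is a minimal bounded channel using one mutex and two condition variables; the class name, member names, and capacity are illustrative, not from the question:

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class Channel {
    std::mutex m;                      // one mutex...
    std::condition_variable not_full;  // ...shared by two condition variables
    std::condition_variable not_empty;
    std::queue<T> buf;
    const std::size_t capacity = 16;   // illustrative capacity
public:
    void push(T v) {
        std::unique_lock<std::mutex> lk(m);
        // the predicate is re-checked under the mutex after every wakeup
        not_full.wait(lk, [&] { return buf.size() < capacity; });
        buf.push(std::move(v));
        not_empty.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, [&] { return !buf.empty(); });
        T v = std::move(buf.front());
        buf.pop();
        not_full.notify_one();
        return v;
    }
};

Because the predicate is re-evaluated while holding the mutex after each wakeup, the "buffer might be full again" race from the question cannot occur.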

No, you cannot lock two mutexes at once, but you can use a std::condition_variable for the waiting threads and invoke notify_one when you are done.
See here for further details.

The functionality you are trying to achieve would require something similar to System V semaphores, where a group of operations on semaphores can be applied atomically. In your case you would have 3 semaphores:
semaphore 1 - locking, initialized to 0
semaphore 2 - counter of available data, initialized to 0
semaphore 3 - counter of available buffers, initialized to the number of buffers you have
Then a push operation would apply this group to lock:
check semaphore 1 is 0
increase semaphore 1 by +1
increase semaphore 2 by +1
decrease semaphore 3 by -1
then
decrease semaphore 1 by -1
to unlock. Then, to pull data, the first group would change to:
check semaphore 1 is 0
increase semaphore 1 by +1
decrease semaphore 2 by -1
increase semaphore 3 by +1
The unlock is the same as before. Mutexes, which are a special case of semaphores, most probably would not solve your problem this way: first of all they are binary, i.e. they have only 2 states, but more importantly the API does not provide group operations on them. So either find a semaphore implementation for your platform, or use a single mutex with condition variable(s) to signal waiting threads that data or a buffer is available.
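As a rough sketch, the push lock/unlock sequence might look like this with the System V semop interface (error handling omitted; semid is assumed to refer to a set of 3 semaphores created with semget and initialized as described above):

#include <sys/sem.h>

void push_lock(int semid) {
    sembuf ops[] = {
        {0, 0,  0},   // wait until semaphore 1 (index 0) is 0
        {0, +1, 0},   // then increase it by +1 (take the lock)
        {1, +1, 0},   // one more data item available
        {2, -1, 0},   // one less free buffer (blocks if none is free)
    };
    semop(semid, ops, 4);  // the whole group is applied atomically
}

void unlock(int semid) {
    sembuf op = {0, -1, 0};  // decrease semaphore 1 (index 0) by -1
    semop(semid, &op, 1);
}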

Related

Is it possible a lock wouldn't release in a while loop

I have two threads using a common semaphore to do some processing. What I noticed is that Thread 1 appears to hog the semaphore, and Thread 2 is never able to acquire it. My running theory is that, maybe through compiler optimization or thread priority, it somehow just keeps being handed to Thread 1.
Thread 1:
while (condition) {
    mySemaphore->acquire();
    // do some stuff
    mySemaphore->release();
}
Thread 2:
mySemaphore->acquire();
// block of code I never reach...
mySemaphore->release();
As soon as I add a delay before Thread 1's next iteration, it lets Thread 2 in, which I think confirms my theory.
Basically, for this to work I might need some sort of ordering-aware lock. Does my reasoning make sense?
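Such an ordering-aware lock is often implemented as a ticket lock, where waiters acquire in arrival order. A minimal illustrative sketch (the class and names are mine, not from this thread):

#include <condition_variable>
#include <mutex>

// FIFO "ticket" lock: waiters acquire in arrival order, so a tight
// loop in one thread cannot starve the others.
class TicketLock {
    std::mutex m;
    std::condition_variable cv;
    unsigned long next_ticket = 0;  // next ticket to hand out
    unsigned long now_serving = 0;  // ticket currently allowed in
public:
    void lock() {
        std::unique_lock<std::mutex> lk(m);
        unsigned long my = next_ticket++;
        cv.wait(lk, [&] { return my == now_serving; });
    }
    void unlock() {
        std::lock_guard<std::mutex> lk(m);
        ++now_serving;
        cv.notify_all();  // wake everyone; only the next ticket proceeds
    }
};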

Non blocking shared memory producer using boost interprocess condition to notify

I am trying to develop an application with one producer and several consumers.
The producer is one process and each consumer is one process. The shared resource is some kind of buffer in shared memory.
The producer should work completely independently of the consumers. It should not be blocked in any case. Therefore the consumers are responsible for checking whether the data they read from the shared memory is valid, and for handling the case where the producer has already overwritten it. (They do this using some kind of hashing. Not important.)
The consumers should be informed when new data is available in the buffer. I think boost interprocess conditions are suitable for this use case. (More suitable would be boost signals2, but that library does not work across processes.)
Conditions always need a mutex, but I do not need the mutex in my producer; in the consumers I only need the mutex for condition#wait.
Is it ok to only use condition#notify_all in the producer and not use the mutex? Or is this an abuse of the library?
Thanks in advance
It's okay to signal without holding the mutex, but it can lead to worst-case behaviour in rare cases (thread starvation).
Signaling under the mutex guarantees fair scheduling of the waiters under POSIX, as far as I am aware.¹
That said, I think the commenters are right when they smell overcomplication in the design. I'd simplify. Optimize when you need it.
¹ See e.g. here: http://linux.die.net/man/3/pthread_cond_signal
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
The producer should work completely independent from the consumers. It
should not be blocked in any case.
Why not? This should not affect performance if you do not lock too frequently. You can have a data counter in shared memory and lock access to that counter only. Data can be stored in a circular buffer in shared memory, and access to it does not need to be locked, because the consumers check how much data is available to read using the counter. Of course, the consumers need to read the data fast enough. If data has been overwritten, the consumer's internal counter can be reset to the value of the interprocess counter.
The producer can also store data using many threads. Each thread can calculate the future position of its data at the beginning of its work and update the counter after the data has been stored at the end. Additional locking is then needed around the future-position calculation so that this value can be passed between threads.
In detail, the non-multithreaded scenario could work like this:
Producer loop:
    receive X samples of data
    lock access to the interprocess counter, increment the counter, unlock the access
Then each consumer has its own internal counter, so that it can compare against the interprocess counter whether and how much data is available to read (simply polling for data):
Consumer loop:
    lock access to the interprocess counter, read the counter value, unlock the access
    compare the read value with the internal counter
    if the values are equal        // no data available
        sleep, then continue to the beginning of the loop
    else if data was overwritten   // the counter can be used to detect this (no hashing needed), although doing it this way is probably a bit risky
        set the internal counter to the value of the interprocess counter
        continue to the beginning of the loop
    else
        read the available data
        increment the internal counter
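A sketch of that consumer loop in C++ follows. The shared-memory plumbing is elided; in the real application the counter and its mutex would live in shared memory (e.g. as boost::interprocess types), and all names here are illustrative:

#include <chrono>
#include <mutex>
#include <thread>

std::mutex counter_mutex;          // stands in for the interprocess mutex
unsigned long shared_counter = 0;  // total samples written by the producer

void consumer_loop() {
    unsigned long local_counter = 0;       // this consumer's read position
    const unsigned long ring_size = 1024;  // illustrative buffer capacity
    for (;;) {
        unsigned long produced;
        {
            std::lock_guard<std::mutex> lk(counter_mutex);
            produced = shared_counter;     // only the counter access is locked
        }
        if (produced == local_counter) {
            // no new data: sleep, then poll again
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        } else if (produced - local_counter > ring_size) {
            // the producer lapped us: data was overwritten, resynchronize
            local_counter = produced;
        } else {
            // read entries [local_counter, produced) from the ring buffer
            local_counter = produced;
        }
    }
}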

Mutex granularity

I have a question regarding threads. It is said that when a thread locks a mutex, it keeps executing that part of the code uninterrupted by other threads until it unlocks the mutex (at least that's what the book says). So my question is whether it is actually possible to have several scoped write locks which do not interfere with each other. For example, something like this:
If I have a buffer with N elements, with no new elements coming in but with high-frequency updates (like changing the value of the Kth element), is it possible to put a different lock on each element, so that the only time threads stall and wait is when 2 or more of them are actually trying to update the same element?
To answer your question about N mutexes: yes, that is indeed possible. What resources are protected by a mutex depends entirely on you as the user of that mutex.
This leads to the first (statement) part of your question. A mutex by itself does not guarantee that a thread will work uninterrupted. All it guarantees is MUTual EXclusion - if thread B attempts to lock a mutex which thread A has locked, thread B will block (execute no code) until thread A unlocks the mutex.
This means mutexes can be used to guarantee that a thread executes a block of code uninterrupted; but this works only if all threads follow the same mutex-locking protocol around that block of code. Which means it is your responsibility to assign semantics (or meaning) to each individual mutex, and correctly adhere to those semantics in your code.
If you decide for the semantics to be "I have an array a of N data elements and an array m of N mutexes, and accessing a[i] can only be done when m[i] is locked," then that's how it will work.
The need to consistently stick to the same protocol is why you should generally encapsulate the mutex and the code/data protected by it in a class in some way or another, so that outside code doesn't need to know the details of the protocol. It just knows "call this member function, and the synchronisation will happen automagically." This "automagic" is the class correctly implementing the protocol.
A crucial consideration when deciding between a mutex per array and a mutex per element is whether there are operations - like tracking the number of "in-use" array elements, the "active" element, or moving a pointer-to-array to a larger buffer - that can only be done safely by one thread while all the others are blocked.
A lesser but sometimes important consideration is the amount of extra memory more mutexes use.
If you genuinely need to do this kind of update as quickly as possible in a highly contended multi-threaded program, you may also want to learn about lock-free atomic types and their compare-and-swap / exchange operations, but I'd recommend against considering that unless profiling shows the existing locking to be significant in your overall program performance.
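To make the protocol concrete, a minimal sketch of the "m[i] protects a[i]" encapsulation described above (the class and member names are mine):

#include <mutex>
#include <vector>

// a[i] may only be touched while m[i] is held; the class enforces the
// protocol so callers cannot get it wrong.
class LockedArray {
    std::vector<int> a;
    std::vector<std::mutex> m;  // one mutex per element
public:
    explicit LockedArray(std::size_t n) : a(n), m(n) {}
    void set(std::size_t i, int v) {
        std::lock_guard<std::mutex> lk(m[i]);
        a[i] = v;
    }
    int get(std::size_t i) {
        std::lock_guard<std::mutex> lk(m[i]);
        return a[i];
    }
};

Note that a std::vector<std::mutex> can be sized at construction but not resized afterwards, since mutexes cannot be moved; that fits the "N elements, no new elements coming" setting.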
A mutex does not stop other threads from running altogether; it only stops other threads from locking the same mutex. That is, while one thread keeps the mutex locked, the operating system continues to do context switches and lets other threads run too, but if any other thread tries to lock the same mutex, its execution is halted until the mutex is unlocked.
So yes, you can indeed have several different mutexes and lock/unlock them independently. Just beware of deadlocks: if one thread can lock more than one mutex at a time, you can run into a situation where thread 1 has locked mutex A and is trying to lock mutex B, but blocks because thread 2 has already locked mutex B and is trying to lock mutex A.
It's not completely clear what your use case is:
the threads get a buffer assigned on which they have to work, or
the threads have some results and request a specific buffer to update.
In the first variant you need some assignment logic that assigns a buffer to a thread. This logic has to be executed in an atomic way, so it is best to use a mutex to protect it.
In the second variant it may be best to have a vector of mutexes, one for each buffer element.
In both cases the buffer itself does not need protection, because it (or better, each field of it) is only accessed by one thread at a time.
You may also want to read up on 'semaphores'. These contain a counter that allows managing resources of which a limited amount exists, but more than one. Mutexes are the special case of semaphores with n = 1.
You can have a mutex per entry; a C++11 mutex can easily be turned into an adaptive spinlock, so you can achieve a good CPU/latency trade-off.
Or, if you need very low latency and have enough CPUs, you can use an atomic "busy" flag per entry and spin in a tight compare-exchange loop on contention.
From experience, though, the best performance and scalability are achieved when concurrent writes are serialized via a command queue (or a queue of smaller immutable buffers to be concatenated at the destination) processed by a single thread.
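A sketch of the per-entry busy-flag variant mentioned above (the entry layout and memory orders are illustrative):

#include <atomic>

// One flag per buffer entry; a writer spins until it owns the entry.
struct Entry {
    std::atomic<bool> busy{false};
    int value = 0;
};

void update(Entry& e, int v) {
    bool expected = false;
    // tight compare-exchange loop: flip busy from false to true
    while (!e.busy.compare_exchange_weak(expected, true,
                                         std::memory_order_acquire)) {
        expected = false;  // compare_exchange wrote the observed value back
    }
    e.value = v;  // we own the entry exclusively here
    e.busy.store(false, std::memory_order_release);
}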

Idiom or pattern for N concurrent readers and 1 producer

Using the C++11 standard library (possibly with the help of boost::thread), is there a clean way to implement an N-readers / 1-producer solution where all the readers, once notified at the same time by the producer (with std::condition_variable::notify_all(), for example), are guaranteed to enter their critical sections before the producer eventually enters its own critical section a second time? In other words, all the notified readers must observe the same state of the shared resource: once the producer notifies the N readers, it cannot modify the shared resource until all N readers have finished their reading. Note that boost::barrier is not really what I need, as I do not know N in advance; N may vary from one notification to another.
You could use atomic counters, with some polling from the producer thread.
When the counter reaches either N or 0 (it's up to you), the producer gets to work and produces whatever it needs to produce. Before notifying the condition variable, the producer sets the counter back to 0 (or N).
When a reader is done, it simply increases (or decreases) the counter.
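A sketch of that scheme follows; the polling interval and names are illustrative, and the reader's wait is simplified (real code would need a wait predicate to handle spurious wakeups):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::atomic<int> readers_left{0};

void produce_once(int n_readers) {
    {
        std::lock_guard<std::mutex> lk(m);
        // ... modify the shared resource ...
        readers_left.store(n_readers);  // reset the counter before notifying
    }
    cv.notify_all();
    // poll until every notified reader has finished its read
    while (readers_left.load() != 0)
        std::this_thread::sleep_for(std::chrono::microseconds(50));
}

void read_once() {
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk);  // simplified: see note above
        // ... read the shared state ...
    }
    readers_left.fetch_sub(1);  // tell the producer this reader is done
}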
What you describe is called a barrier.

How do I safely read a variable from one thread and modify it from another?

I have a class instance which is being used in multiple threads. I am updating multiple member variables from one thread and reading the same member variables from another thread. What is the correct way to maintain thread safety?
eg:
pthread_mutex_lock(&mutex1);
obj1.memberV1 = 1;
// unlock here?
Should I unlock the mutex here? (If another thread accessed obj1's member variables 1 and 2 now, the data it sees might not be correct, because memberV2 has not yet been updated. However, if I do not release the lock, the other thread might block, because there is a time-consuming operation below.)
//perform some time consuming operation which must be done before the assignment to memberV2 and after the assignment to memberV1
obj1.memberV2 = update field 2 from some calculation
pthread_mutex_unlock(&mutex1); // or should I only unlock here?
Thanks
Your locking is correct. You should not release the lock early just to allow another thread to proceed (because that would allow the other thread to see the object in an inconsistent state).
Perhaps it would be better to do something like:
// perform the time-consuming calculation
pthread_mutex_lock(&mutex1);
obj1.memberV1 = 1;
obj1.memberV2 = result;
pthread_mutex_unlock(&mutex1);
This of course assumes that the values used in the calculation won't be modified on any other thread.
It's hard to tell what you are doing that is causing problems. The mutex pattern is pretty simple: you lock the mutex, access the shared data, and unlock the mutex. This protects the data, because the mutex only lets one thread hold the lock at a time. Any thread that fails to get the lock has to wait until the mutex is unlocked. Unlocking wakes the waiters up; they then fight to attain the lock, and the losers go back to sleep. The time it takes to wake up might be multiple ms or more from the time the lock is released. Make sure you always unlock the mutex eventually.
Make sure you don't keep locks locked for a long period of time. Most of the time, a "long period" is on the order of a microsecond; I prefer to keep it down to "a few lines of code." That's why people have suggested doing the long-running calculation outside the lock. The reason for not holding locks for a long time is that it increases the number of times other threads hit the lock and have to spin or sleep, which decreases performance. It also increases the probability that your thread gets pre-empted while owning the lock, which means the lock stays held while that thread sleeps; that's even worse for performance.
Threads that fail to get a lock don't have to sleep. Spinning means that a thread encountering a locked mutex doesn't sleep, but loops, repeatedly testing the lock for a predefined period before giving up and sleeping. This is a good idea if you have multiple cores, or cores capable of running multiple simultaneous threads, since two threads can then be executing the code at the same time. If the lock is around a small amount of code, the thread that got the lock will be done very soon, and the other thread need only wait a couple of nanoseconds before it gets the lock. Remember that putting a thread to sleep is a context switch, plus some code to attach the thread to the mutex's waiter list, and all of that has a cost. On top of that, once your thread sleeps, you have to wait for the scheduler to wake it up, which can take multiple ms. Look up spinlocks.
If you only have one core, then a thread encountering a lock means another (sleeping) thread owns it, and no matter how long you spin, it isn't going to unlock. So on one core you would use a lock that puts a waiter to sleep immediately, in the hope that the thread owning the lock will wake up and finish.
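For reference, a minimal spinlock of the kind being described, built on std::atomic_flag (my sketch, not from the answer):

#include <atomic>

// Minimal spinlock: a waiter loops instead of sleeping, which pays off
// when the critical section is only a few instructions long and the
// owner is running on another core.
class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag.test_and_set(std::memory_order_acquire)) {
            // spin
        }
    }
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};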
You should assume that a thread can be preempted at any machine-code instruction, and that each line of C code is probably many machine-code instructions. The classic example is i++: one statement in C, but a read, an increment, and a store in machine code.
If you really care about performance, try to use atomic operations first, and look to mutexes as a last resort. Most concurrency problems are easily solved with atomic operations (google "gcc atomic operations" to start learning), and very few problems really need mutexes. Mutexes are way, way slower.
Protect your shared data wherever it is written and wherever it is read, or else prepare for failure. You don't have to protect shared data during periods when only a single thread is active.
It's often useful to be able to run your app with 1 thread as well as N threads; that way you can debug race conditions more easily.
Minimize the shared data that you protect with locks. Try to organize data into structures such that a single thread can gain exclusive access to the entire structure (perhaps by setting a single locked flag, or a version number, or both) and not have to worry about anything after that. Then most of the code isn't cluttered with locks and race conditions.
Functions that ultimately write to shared variables should use temporary variables until the last moment and then copy the results over. Not only will the compiler generate better code, but accesses to shared variables, especially writes, cause cache-line transfers between L2 and main RAM and all sorts of other performance issues. Again, if you don't care about performance, disregard this; but I recommend you google the document "What Every Programmer Should Know About Memory" if you want to know more.
If you are reading a single variable from the shared data, you probably don't need a lock, as long as the variable is an integer type and not a member of a bitfield (bitfield members are read and written with multiple instructions). Read up on atomic operations. When you need to deal with multiple values, you need a lock to make sure you didn't read version A of one value, get preempted, and then read version B of the next value. The same holds true for writing.
You will find that copies of data, even copies of entire structures, come in handy. You can build a new copy of the data and then swap it in by changing a pointer with one atomic operation. You can make a copy of the data and then do calculations on it without worrying about whether it changes underneath you.
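A sketch of that pointer-swap technique (the struct and the reclamation strategy are illustrative):

#include <atomic>

struct Data { /* ... */ };

std::atomic<Data*> current;  // readers load this; the writer swaps it

void publish(Data* fresh) {
    // Build *fresh completely first, then make it visible in a single
    // atomic operation; readers see either the old copy or the new one.
    Data* old = current.exchange(fresh, std::memory_order_acq_rel);
    // *old may only be reclaimed once no reader can still be using it
    // (deferred deletion, hazard pointers, shared_ptr, etc.).
    (void)old;
}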
So maybe what you want to do is:
    lock the mutex
    make a copy of the input data for the long-running calculation
    unlock the mutex
L1: do the calculation
    lock the mutex
    if the input data has changed and this matters
        read the input data, unlock the mutex and go to L1
    update the data
    unlock the mutex
Maybe, in the example above, you still store the result even if the input changed, but then go back and recalculate. It depends on whether other threads can use a slightly out-of-date answer. Maybe other threads, when they see that a thread is already doing the calculation, simply change the input data and leave it to the busy thread to notice that and redo the calculation (there is a race condition you need to handle if you do that, but an easy one). That way the other threads can do other work rather than just sleep.
cheers.
Probably the best thing to do is:
temp = ...; // perform the time-consuming operation which must be done before the assignment to memberV2
pthread_mutex_lock(&mutex1);
obj1.memberV1 = 1;
obj1.memberV2 = temp; // result from the previous calculation
pthread_mutex_unlock(&mutex1);
What I would do is separate the calculation from the update:
temp = some calculation
pthread_mutex_lock(&mutex1);
obj.memberV1 = 1;
obj.memberV2 = temp;
pthread_mutex_unlock(&mutex1);