Difference between shared mutex and mutex (why do both exist in C++11)? - c++

I haven't found an example online that demonstrates this vividly. I saw an example at http://en.cppreference.com/w/cpp/header/shared_mutex but
it is still unclear. Can somebody help?

With normal mutexes, you can guarantee exclusive access to some kind of critical resource – and nothing else. Shared mutexes extend this feature by allowing two levels of access, shared and exclusive, as follows:
Exclusive access prevents any other thread from acquiring the mutex, just as with a normal mutex. It does not matter whether the other thread tries to acquire shared or exclusive access.
Shared access allows multiple threads to acquire the mutex, but all of them only in shared mode. Exclusive access is not granted until all of the previous shared holders have returned the mutex (typically, as long as an exclusive request is waiting, new shared requests are queued to be granted after the exclusive access).
A typical scenario is a database: it does not matter if several threads read one and the same data simultaneously. But modification of the database is critical: if some thread reads data while another one is writing, it might receive inconsistent data. So all reads must have finished before writing is allowed, and new reads must wait until writing has finished. After writing, further reads can occur simultaneously again.
Edit (sidenote):
Why do readers need a lock at all?
Taking the lock in shared mode prevents the writer from acquiring exclusive access while reads are still in progress, and it prevents new readers from starting while the mutex is held exclusively.
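To make the two levels concrete, here is a minimal sketch of the database scenario using C++17's std::shared_mutex (the names db_mutex, database, read_record, and write_record are made up for illustration):
#include <shared_mutex>
#include <string>

std::shared_mutex db_mutex;
std::string database = "initial";

// Many readers may hold the mutex in shared mode at the same time.
std::string read_record() {
    std::shared_lock lock(db_mutex);
    return database;
}

// A writer needs exclusive mode and waits until all readers are done.
void write_record(const std::string& value) {
    std::unique_lock lock(db_mutex);
    database = value;
}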

A shared mutex has two levels of access: 'shared' and 'exclusive'.
Multiple threads can acquire shared access, but only one can hold exclusive access (and exclusive access also excludes any shared access).
The common scenario is a read/write lock. Recall that a data race can only occur when two threads access the same data and at least one of the accesses is a write.
The advantage is that data may be read by many readers at once, but when a writer needs access it must obtain exclusive access to the data.
Why have both? On the one hand, the exclusive lock constitutes a normal mutex, so arguably only the shared one is needed. But there may be overheads in a shared-lock implementation that can be avoided by using the less featured type.
Here's an example (adapted slightly from the example here http://en.cppreference.com/w/cpp/thread/shared_mutex).
#include <chrono>
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <thread>

std::mutex cout_mutex; // Not really part of the example...

void log(const std::string& msg) {
    std::lock_guard guard(cout_mutex);
    std::cout << msg << std::endl;
}

class ThreadSafeCounter {
public:
    ThreadSafeCounter() = default;

    // Multiple threads/readers can read the counter's value at the same time.
    unsigned int get() const {
        std::shared_lock lock(mutex_); // NB: std::shared_lock calls lock_shared() on the mutex.
        log("get()-begin");
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        auto result = value_;
        log("get()-end");
        return result;
    }

    // Only one thread/writer can increment/write the counter's value.
    void increment() {
        std::unique_lock lock(mutex_);
        value_++;
    }

    // Only one thread/writer can reset/write the counter's value.
    void reset() {
        std::unique_lock lock(mutex_);
        value_ = 0;
    }

private:
    mutable std::shared_mutex mutex_;
    unsigned int value_ = 0;
};

int main() {
    ThreadSafeCounter counter;

    auto increment_and_print = [&counter]() {
        for (int i = 0; i < 3; i++) {
            counter.increment();
            auto ctr = counter.get();
            {
                std::lock_guard guard(cout_mutex);
                std::cout << std::this_thread::get_id() << ' ' << ctr << '\n';
            }
        }
    };

    std::thread thread1(increment_and_print);
    std::thread thread2(increment_and_print);
    std::thread thread3(increment_and_print);

    thread1.join();
    thread2.join();
    thread3.join();
}
Possible partial output:
get()-begin
get()-begin
get()-end
140361363867392 2
get()-end
140361372260096 2
get()-begin
get()-end
140361355474688 3
//Etc...
Notice how the two get()-begin lines appearing before any get()-end show that two threads hold the shared lock during the read.

"Shared mutexes are usually used in situations when multiple readers can access the same resource at the same time without causing data races, but only one writer can do so."
cppreference.com
This is useful when you need a readers/writer lock: https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock

Related

Read shared data protected by Mutex without locking Mutex

Given shared data protected by a mutex, what is the appropriate way to read part of the shared data without locking the mutex? Is using std::atomic_ref an appropriate way, as indicated in the example below?
#include <atomic>
#include <mutex>

struct A
{
    std::mutex mutex;
    int counter = 0;

    void modify()
    {
        std::lock_guard<std::mutex> guard(mutex);
        // do something with counter
    }

    int getCounter()
    {
        return std::atomic_ref<int>(counter).load();
    }
};
If you bypass locking the mutex and perform atomic reads from the shared data (for example using std::atomic_ref), then your program will invoke undefined behavior if one of the other threads writes using a non-atomic access.
If all threads use atomic operations to access the shared data, then there is no undefined behavior. However, in that case, there is probably no point in protecting the shared data with a mutex, if all accesses are atomic anyway.
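For illustration, here are the two consistent alternatives, sketched on the question's struct A (the names LockedA and AtomicA are hypothetical): either read under the same mutex, or make the counter atomic everywhere and drop the mutex for it entirely.
#include <atomic>
#include <mutex>

// Alternative 1: the read takes the same lock as the writes.
struct LockedA {
    std::mutex mutex;
    int counter = 0;
    int getCounter() {
        std::lock_guard<std::mutex> guard(mutex);
        return counter;
    }
};

// Alternative 2: every access is atomic, so no mutex is needed for counter.
struct AtomicA {
    std::atomic<int> counter{0};
    int getCounter() {
        return counter.load();
    }
};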

One mutex vs Multiple mutexes. Which one is better for the thread pool?

In this example, I just want to protect iData to ensure that only one thread visits it at a time.
struct myData { /* ... */ };
myData iData;
Method 1: mutex inside the called function (a new mutex is created per call):
void _proceedTest(myData& data)
{
    std::mutex mtx; // a brand-new mutex for every call
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}

int const nMaxThreads = std::thread::hardware_concurrency();
vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData)));
}
for (auto& th : threads) th.join();
Method 2: use only one mutex, passed to each thread:
void _proceedTest(myData& data, std::mutex& mtx)
{
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}

std::mutex mtx;
int const nMaxThreads = std::thread::hardware_concurrency();
vector<std::thread> threads;
for (int iThread = 0; iThread < nMaxThreads; ++iThread)
{
    threads.push_back(std::thread(_proceedTest, std::ref(iData), std::ref(mtx)));
}
for (auto& th : threads) th.join();
I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at a time.
If Method 1 is correct, is Method 1 better than Method 2?
Thanks!
I want to make sure that Method 1 (multiple mutexes) ensures that only one thread can visit iData at a time.
Your first example creates a local mutex variable on the stack; it won't be shared with the other threads. Thus it's completely useless.
It won't guarantee exclusive access to iData.
If Method 1 is correct, is Method 1 better than Method 2?
It isn't correct.
The other answers are correct on the technical level, but there is an important language-independent point missing: always prefer to minimize the number of different mutexes/locks!
The reason: as soon as a thread needs to acquire more than one lock in order to do something (and then release all acquired locks), the acquisition order becomes crucial.
When you have two locks and two different pieces of code, like:
getLockA() {
    getLockB() {
        do something
    } // release B
} // release A
And
getLockB() {
    getLockA() {
        ...
you can quickly run into deadlocks: two threads/processes can acquire one lock each, and then both are stuck waiting for the other one to release its lock. Of course, looking at the example above, "you would never make that mistake and would always go A first, then B". But what if those locks exist in completely different parts of your application, and aren't acquired in the same method or class, but over the course of, say, 3 or 5 nested method invocations?
Thus: when you can solve your problem with one lock, use one lock only! The more locks you need to get something done, the higher the risk of ending up in a deadlock.
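If you genuinely cannot avoid taking two locks, C++17's std::scoped_lock acquires several mutexes with a deadlock-avoidance algorithm, so the acquisition order in source code stops mattering. A minimal sketch (lockA, lockB, and transfer are hypothetical names):
#include <mutex>

std::mutex lockA;
std::mutex lockB;

void transfer() {
    // std::scoped_lock locks both mutexes using a deadlock-avoidance
    // algorithm: it will not deadlock against another std::scoped_lock
    // that lists the same mutexes in a different order.
    std::scoped_lock guard(lockA, lockB);
    // ... work on the data protected by both locks ...
}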
Method 1 only works if you make the mutex variable static.
void _proceedTest(myData& data)
{
    static std::mutex mtx; // one mutex shared by every call
    std::unique_lock<std::mutex> lk(mtx);
    modifyData(data);
    lk.unlock();
}
This will make mtx be shared by all threads that enter _proceedTest.
Since a static function scope variable is only visible to users of the function, it is not really a sufficient lock for the passed in data. This is because it is conceivable that multiple threads could be calling different functions that each want to manipulate data.
Thus, even though Method 1 is salvageable, Method 2 is still better, although the cohesion between the lock and the data remains weak.
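One way to strengthen that cohesion, sketched here with a hypothetical GuardedData wrapper, is to bundle the mutex with the data it protects, so every user of the data can see which lock guards it:
#include <mutex>

struct myData { /* ... */ };

// Hypothetical wrapper: the mutex lives next to the data it guards.
struct GuardedData {
    std::mutex mtx;
    myData data;
};

void proceedTest(GuardedData& guarded)
{
    std::unique_lock<std::mutex> lk(guarded.mtx);
    // modifyData(guarded.data);
}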
The mutex in version 1 goes out of scope when you leave _proceedTest; locking a mutex like that makes no sense because it is never accessible to the other threads.
In the second version multiple threads can share the mutex (as long as it doesn't go out of scope, for example as a class member). This way one thread can lock it and the other threads can see that it is locked (and won't be able to lock it as well, hence the term mutual exclusion).

std::lock_guard example, explanation on why it works

I've reached a point in my project that requires communication between threads on resources that may very well be written to, so synchronization is a must. However, I don't really understand synchronization beyond the basic level.
Consider the last example in this link: http://www.bogotobogo.com/cplusplus/C11/7_C11_Thread_Sharing_Memory.php
#include <iostream>
#include <thread>
#include <list>
#include <algorithm>
#include <mutex>

using namespace std;

// a global variable
std::list<int> myList;

// a global instance of std::mutex to protect the global variable
std::mutex myMutex;

void addToList(int max, int interval)
{
    // the access to this function is mutually exclusive
    std::lock_guard<std::mutex> guard(myMutex);
    for (int i = 0; i < max; i++) {
        if ((i % interval) == 0) myList.push_back(i);
    }
}

void printList()
{
    // the access to this function is mutually exclusive
    std::lock_guard<std::mutex> guard(myMutex);
    for (auto itr = myList.begin(), end_itr = myList.end(); itr != end_itr; ++itr) {
        cout << *itr << ",";
    }
}

int main()
{
    int max = 100;

    std::thread t1(addToList, max, 1);
    std::thread t2(addToList, max, 10);
    std::thread t3(printList);

    t1.join();
    t2.join();
    t3.join();

    return 0;
}
The example demonstrates how three threads, two writers and one reader, access a common resource (a list).
Two global functions are used: one which is used by the two writer threads, and one being used by the reader thread. Both functions use a lock_guard to lock down the same resource, the list.
Now here is what I just can't wrap my head around: the reader uses a lock in a different scope than the two writer threads, yet it still locks down the same resource. How can this work? My limited understanding of mutexes lends itself well to the writer function: there you have two threads using the exact same function. I can understand that; a check is made right as you are about to enter the protected area, and if someone else is already inside, you wait.
But when the scope is different? That would indicate that there is some mechanism more powerful than the process itself, some sort of runtime environment blocking execution of the "late" thread. But I thought there were no such things in C++. So I am at a loss.
What exactly goes on under the hood here?
Let’s have a look at the relevant line:
std::lock_guard<std::mutex> guard(myMutex);
Notice that the lock_guard references the global mutex myMutex. That is, the same mutex for all three threads. What lock_guard does is essentially this:
Upon construction, it locks myMutex and keeps a reference to it.
Upon destruction (i.e. when the guard's scope is left), it unlocks myMutex.
The mutex is always the same one; it has nothing to do with the scope. The point of lock_guard is just to make locking and unlocking the mutex easier for you. For example, if you lock/unlock manually, but your function throws an exception somewhere in the middle, it will never reach the unlock statement. So, doing it the manual way, you have to make sure that the mutex is always unlocked. On the other hand, the lock_guard object gets destroyed automatically whenever the function is exited – regardless of how it is exited.
myMutex is global, and it is what is used to protect myList. guard(myMutex) simply engages the lock, and the exit from the block causes its destruction, disengaging the lock. guard is just a convenient way to engage and disengage the lock.
With that out of the way, a mutex does not by itself protect any data; it just provides a way to protect data. It is the design pattern that protects data. So if I write my own function to modify the list, as below, the mutex cannot protect it.
void addToListUnsafe(int max, int interval)
{
    for (int i = 0; i < max; i++) {
        if ((i % interval) == 0) myList.push_back(i);
    }
}
The lock only works if all pieces of code that need to access the data engage the lock before accessing and disengage it after they are done. This design pattern of engaging and disengaging the lock before and after every access is what protects the data (myList in your case).
Now you might wonder why use a mutex at all, and why not, say, a bool. Yes, you can, but you will have to make sure that the bool variable exhibits certain characteristics, including but not limited to the list below (a spinlock sketch follows it):
Not be cached (volatile) across multiple threads.
Reads and writes are atomic operations.
Your lock can handle situations where there are multiple execution pipelines (logical cores, etc.).
There are different synchronization mechanisms that provide "better locking" (across processes versus across threads, multiple processors versus a single processor, etc.) at the cost of slower performance, so you should always choose the locking mechanism that is just about enough for your situation.
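For illustration only, this is roughly what "a bool done right" looks like: a minimal spinlock sketch built on std::atomic_flag, which supplies the atomicity and visibility guarantees a plain bool lacks. In real code you should still prefer std::mutex, which blocks instead of burning CPU while waiting.
#include <atomic>

class Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    // Spin until we atomically change the flag from clear to set.
    void lock() {
        while (flag.test_and_set(std::memory_order_acquire)) {
            // busy-wait
        }
    }
    // Clearing the flag releases the lock and publishes our writes.
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};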
Just to add onto what others here have said...
There is an idea in C++ called Resource Acquisition Is Initialization (RAII), which is the idea of binding resources to the lifetime of objects:
Resource Acquisition Is Initialization or RAII, is a C++ programming technique which binds the life cycle of a resource that must be acquired before use (allocated heap memory, thread of execution, open socket, open file, locked mutex, disk space, database connection—anything that exists in limited supply) to the lifetime of an object.
C++ RAII Info
The use of a std::lock_guard<std::mutex> class follows the RAII idea.
Why is this useful?
Consider a case where you don't use a std::lock_guard:
std::mutex m; // global mutex

void oops() {
    m.lock();
    doSomething();
    m.unlock();
}
In this case, a global mutex is used and is locked before the call to doSomething(). Then, once doSomething() is complete, the mutex is unlocked.
One problem here is: what happens if there is an exception? Now you run the risk of never reaching the m.unlock() line, which releases the mutex to other threads.
So you need to cover the case where you run into an exception:
std::mutex m; // global mutex

void oops() {
    try {
        m.lock();
        doSomething();
        m.unlock();
    } catch (...) {
        m.unlock(); // now the exception path is covered
        throw;      // re-throw to the caller
    }
}
This works but is ugly, verbose, and inconvenient.
Now let's write our own simple lock guard.
class lock_guard {
private:
    std::mutex& m;
public:
    lock_guard(std::mutex& m_) : m(m_) { m.lock(); } // lock on construction
    ~lock_guard() { m.unlock(); }                    // unlock on destruction
};
When the lock_guard object is destroyed, it will ensure that the mutex is unlocked.
Now we can use this lock_guard to handle the case from before in a better/cleaner way:
std::mutex m; // global mutex

void ok() {
    lock_guard lk(m); // our simple lock guard protects against the exception case
    doSomething();
} // when the scope is exited, our lock guard object is destroyed and the mutex is unlocked
This is the same idea behind std::lock_guard.
Again this approach is used with many different types of resources which you can read more about by following the link on RAII.
This is precisely what a lock does. When a thread takes the lock, regardless of where in the code it does so, it must wait its turn if another thread holds the lock. When a thread releases a lock, regardless of where in the code it does so, another thread may acquire that lock.
Locks protect data, not code. They do it by ensuring all code that accesses the protected data does so while it holds the lock, excluding other threads from any code that might access that same data.

Is std::mutex sufficient for data synchronization between threads

If I have a global array that multiple threads are writing to and reading from, and I want to ensure that this array remains synchronized between threads, is using std::mutex enough for this purpose, as shown in the pseudocode below? I came across this resource, which makes me think the answer is yes:
Mutual exclusion locks (such as std::mutex or atomic spinlock) are an example of release-acquire synchronization: when the lock is released by thread A and acquired by thread B, everything that took place in the critical section (before the release) in the context of thread A has to be visible to thread B (after the acquire) which is executing the same critical section.
I'm still interested in other people's opinion.
#include <mutex>
#include <thread>

float* globalArray;
std::mutex globalMutex;

void method1()
{
    std::lock_guard<std::mutex> lock(globalMutex);
    // Perform reads/writes to globalArray
}

void method2()
{
    std::lock_guard<std::mutex> lock(globalMutex);
    // Perform reads/writes to globalArray
}

int main()
{
    std::thread t1(method1);
    std::thread t2(method2);
    std::thread t3(method1);
    std::thread t4(method2);
    // ...
    std::thread tn(method1);
    // ... join all of the threads before main() returns ...
}
This is precisely what mutexes are for. Just try not to hold them any longer than necessary to minimize the costs of contention.
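One common way to keep the hold time short, sketched here with a std::vector standing in for the question's globalArray (process is a hypothetical name), is to copy what you need while holding the lock and do the expensive work on the copy afterwards:
#include <mutex>
#include <vector>

std::vector<float> globalArray;
std::mutex globalMutex;

void process()
{
    std::vector<float> snapshot;
    {
        std::lock_guard<std::mutex> lock(globalMutex);
        snapshot = globalArray; // copy under the lock...
    }
    // ...then run the expensive computation on the copy,
    // with the mutex already released for other threads.
}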

Atomic Operations in C++

I have a set of C++ functions:
void funcB() {}
void funcC() {}

void funcA()
{
    funcB();
    funcC();
}
Now I want to make funcA atomic, i.e. the funcB and funcC calls inside funcA should be executed atomically. Is there any way to achieve this?
One way you can accomplish this is to use the new (C++11) features std::mutex and std::lock_guard.
For each protected resource, you instantiate a single global std::mutex; each thread then locks that mutex, as it requires, by the creation of a std::lock_guard:
#include <thread>
#include <iostream>
#include <mutex>
#include <vector>

// A single mutex, shared by all threads. It is initialized
// into the "unlocked" state.
std::mutex m;

void funcB() {
    std::cout << "Hello ";
}

void funcC() {
    std::cout << "World." << std::endl;
}

void funcA(int i) {
    // The creation of the lock_guard locks the mutex
    // for the lifetime of the lock_guard.
    std::lock_guard<std::mutex> l(m);

    // Now only a single thread can run this code.
    std::cout << i << ": ";
    funcB();
    funcC();

    // As we exit this scope, the lock_guard is destroyed,
    // the mutex is unlocked, and another thread is allowed to run.
}

int main() {
    std::vector<std::thread> vt;

    // Create and launch a bunch of threads.
    for (int i = 0; i < 10; i++)
        vt.push_back(std::thread(funcA, i));

    // Wait for all of them to complete.
    for (auto& t : vt)
        t.join();
}
Notes:
In your example some code unrelated to funcA could invoke either funcB or funcC without honoring the lock that funcA set.
Depending upon how your program is structured, you may want to manage the lifetime of the mutex differently. As an example, it might want to be a class member of the class that includes funcA.
In general, NO. Atomic operations are very precisely defined. What you want is a semaphore or a mutex.
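For completeness, a sketch of the semaphore option: since C++20 the standard library provides std::binary_semaphore, which can serve as a lock around the two calls (before C++20 you would simply use std::mutex as in the answer above).
#include <semaphore>

std::binary_semaphore gate{1}; // starts available (count 1)

void funcB();
void funcC();

void funcA() {
    gate.acquire(); // blocks while another thread holds the semaphore
    funcB();
    funcC();
    gate.release(); // lets the next waiting thread proceed
}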
If you are using GCC 4.7 or later, then you can use the new Transactional Memory feature to do the following:
Transactional memory is intended to make programming with threads simpler, in particular synchronizing access to data shared between several threads using transactions. As with databases, a transaction is a unit of work that either completes in its entirety or has no effect at all (i.e., transactions execute atomically). Further, transactions are isolated from each other such that each transaction sees a consistent view of memory.
Currently, transactions are only supported in C++ and C in the form of transaction statements, transaction expressions, and function transactions. In the following example, both a and b will be read and the difference will be written to c, all atomically and isolated from other transactions:
__transaction_atomic { c = a - b; }
Therefore, another thread can use the following code to concurrently update b without ever causing c to hold a negative value (and without having to use other synchronization constructs such as locks or C++11 atomics):
__transaction_atomic { if (a > b) b++; }
The precise semantics of transactions are defined in terms of the C++11/C1X memory model (see below for a link to the specification). Roughly, transactions provide synchronization guarantees that are similar to what would be guaranteed when using a single global lock as a guard for all transactions. Note that like other synchronization constructs in C/C++, transactions rely on a data-race-free program (e.g., a nontransactional write that is concurrent with a transactional read to the same memory location is a data race).
More info: http://gcc.gnu.org/wiki/TransactionalMemory