(Shared) mutex in C++

I've seen an example for a shared mutex:
class MyData {
    std::vector<double> data_;
    mutable shared_mutex mut_; // the mutex to protect data_
public:
    void write() {
        unique_lock<shared_mutex> lk(mut_);
        // ... write to data_ ...
    }
    void read() const {
        shared_lock<shared_mutex> lk(mut_);
        // ... read the data ...
    }
};
Naturally I would have written instead:
public:
    void write() {
        mut_.lock();
        // ... write to data_ ...
        mut_.unlock();
    }
    void read() const {
        mut_.lock_shared();
        // ... read the data ...
        mut_.unlock_shared();
    }
};
Is my way also correct? And is there a difference between what I used and what was used in the example? Also, are there advantages of one over the other? Thank you!

Is my way also correct?
Consider what would happen if the code between the locking of the mutex and the unlocking throws an exception:
void write() {
    mut_.lock();
    // <-- exception is thrown here
    mut_.unlock();
}
The mutex then remains locked.
are there advantages of one over the other?
Yes, unique_lock<> follows the RAII idiom, and therefore unlocking of the mutex is handled automatically (i.e., by its destructor) in case of an exception:
void write() {
    unique_lock<shared_mutex> lk(mut_);
    // <-- exception is thrown
}
In case of an exception after the creation of the unique_lock<shared_mutex> object – lk – its destructor is called, and it then unlocks the associated mutex if it was locked (remember that std::unique_lock, unlike std::lock_guard, doesn't always have ownership of the lock on the associated mutex – see std::defer_lock and std::unique_lock::unlock()).
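As a minimal sketch of that behaviour (the standalone mutex and the function here are hypothetical, standing in for the member and methods from the question), deferred locking and an early manual unlock() are both safe because the destructor only unlocks if the lock is currently owned:
#include <mutex>
#include <shared_mutex>

std::shared_mutex mut_;   // stands in for the member from the question

void write_deferred()     // hypothetical function, for illustration only
{
    std::unique_lock<std::shared_mutex> lk(mut_, std::defer_lock); // not locked yet

    // ... preparatory work that does not touch the shared data ...

    lk.lock();     // take ownership only when needed
    // ... write to the shared data ...
    lk.unlock();   // release early if desired

    // lk's destructor checks ownership and does nothing here,
    // so there is no double unlock.
}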
To sum up, with lock_guard/unique_lock/shared_lock, no special handling is required from your side in case of exceptions or when leaving the member function from different execution paths.

A raw mutex is generally avoided in favour of the RAII wrapper unique_lock, which is safer in two situations:
exception
premature return
This mirrors how raw pointers are generally avoided in favour of the RAII smart pointers unique_ptr and shared_ptr.
In either case, the RAII version ensures that the mutex (or pointer) is always released when the wrapper goes out of scope.
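A minimal sketch of the "premature return" case (the mutex, the shared variable, and the function are hypothetical, chosen only to illustrate the point):
#include <mutex>

std::mutex m;           // hypothetical mutex guarding shared_value
int shared_value = 0;

bool try_update(int v) {
    std::lock_guard<std::mutex> lk(m);  // locked here
    if (v < 0)
        return false;                   // lk unlocks m on this early return
    shared_value = v;
    return true;                        // ... and on this one too
}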

Related

Is automating mutex like this in C++ safe?

I'm learning about mutex and threading right now. I was wondering if there's anything dangerous or inherently wrong with automating mutex with a class like this:
class AutoMutex
{
private:
    std::mutex& m_Mutex;
public:
    AutoMutex(std::mutex& m) : m_Mutex(m)
    {
        m_Mutex.lock();
    }
    ~AutoMutex()
    {
        m_Mutex.unlock();
    }
};
And then, of course, you would use it like this:
void SomeThreadedFunc()
{
    AutoMutex m(Mutex); // With 'Mutex' being some global mutex.
    // Do stuff
}
The idea is that, on construction of an AutoMutex object, it locks the mutex. Then, when it goes out of scope, the destructor automatically unlocks it.
You could even just put it in scopes if you don't need it for an entire function. Like this:
void SomeFunc()
{
    // Do stuff
    {
        AutoMutex m(Mutex);
        // Do race condition stuff.
    }
    // Do other stuff
}
Is this okay? I don't personally see anything wrong with it, but as I'm not the most experienced, I feel there's something I may be missing.
It's safe to use a RAII wrapper, and in fact safer than using the mutex member functions directly, but it's unnecessary to write one yourself since the standard library already provides this. It's called std::lock_guard.
However, your implementation isn't entirely safe, because it's copyable, and a copy will attempt to re-unlock the mutex, which will lead to undefined behaviour. std::lock_guard resolves this issue by being non-copyable.
There's also std::unique_lock, which is very similar but allows things such as releasing the lock within its lifetime. std::scoped_lock should be used if you need to lock multiple mutexes; using multiple lock_guards for that may lead to deadlock. std::scoped_lock is also fine to use with a single mutex, so you can replace all uses of lock_guard with it.
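A minimal sketch of the multi-mutex case (the two mutexes and the function are hypothetical):
#include <mutex>

std::mutex m1, m2;   // hypothetical mutexes guarding two resources

void transfer()
{
    // C++17: locks both mutexes with a deadlock-avoidance algorithm,
    // regardless of the order other threads try to acquire them in.
    std::scoped_lock lock(m1, m2);
    // ... modify both protected resources ...
}   // both mutexes released here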

Thread safety with std::shared_ptr

I was reading about the thread safety of std::shared_ptr and about the atomic operation overloads it provides, and was wondering about a specific use case of it in a class.
From my understanding of the thread safety that is promised by shared_ptr, it is safe to have a get method like so:
class MyClass
{
    std::shared_ptr<int> _obj;
    std::mutex _mtx;   // protects _obj
public:
    void start()
    {
        std::lock_guard<std::mutex> lock(_mtx);
        _obj = std::make_shared<int>(1);
    }
    void stop()
    {
        std::lock_guard<std::mutex> lock(_mtx);
        _obj.reset();
    }
    std::shared_ptr<int> get_obj() const
    {
        return _obj; // Safe (?)
    }
};
The getter should be safe, since the object will either be initialized or empty at any point, from any thread.
But what if I want to throw an exception if the object is empty? I need to check it before returning it, so do I have to put a lock there now (since stop() might be called between the check and the return)? Or is it possible to use the locking mechanism of the shared pointer and not use a lock in this method:
std::shared_ptr<int> get_obj() const
{
    auto tmp = _obj;
    if (!tmp) throw std::exception();
    return tmp;
}
std::shared_ptr instances are not thread safe. Multiple instances all pointing to the same object can be modified from multiple threads but a single instance is not thread safe. See https://en.cppreference.com/w/cpp/memory/shared_ptr:
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object. If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur; the shared_ptr overloads of atomic functions can be used to prevent the data race.
You therefore either need to lock your mutex in your get_obj method, or use std::atomic_load there and std::atomic_store in your start and stop methods.
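A minimal sketch of the second option, using the pre-C++20 free-function overloads for shared_ptr (the exception type is an assumption; since C++20 these overloads are deprecated in favour of std::atomic<std::shared_ptr<int>>):
#include <memory>
#include <stdexcept>

class MyClass
{
    std::shared_ptr<int> _obj;
public:
    void start()
    {
        std::atomic_store(&_obj, std::make_shared<int>(1)); // atomic publish
    }
    void stop()
    {
        std::atomic_store(&_obj, std::shared_ptr<int>{});   // atomic clear
    }
    std::shared_ptr<int> get_obj() const
    {
        auto tmp = std::atomic_load(&_obj);                 // atomic read of the instance
        if (!tmp) throw std::runtime_error("not started");  // assumed error type
        return tmp;
    }
};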

Does a mutex unlock when it comes out of scope?

Simple question - basically, do I have to unlock a mutex, or can I simply use the scope operators and the mutex will unlock automatically?
ie:
{
    pthread_mutex_lock (&myMutex);
    sharedResource++;
} // my mutex is now unlocked?
or should I:
{
    pthread_mutex_lock (&myMutex);
    sharedResource++;
    pthread_mutex_unlock (&myMutex);
}
The mutex is not going out of scope in your examples; and there is no way for the compiler to know that a particular function needs calling at the end of the scope, so the first example does not unlock the mutex.
If you are using (error-prone) functions to lock and unlock the mutex, then you will need to ensure that you always call unlock() - even if the protected operation throws an exception.
The best way to do this is to use a RAII class to manage the lock, as you would for any other resource that needs releasing after use:
class lock_guard {
public:
    explicit lock_guard(mutex & m) : m(m) { mutex_lock(m); }
    ~lock_guard() { mutex_unlock(m); }
    lock_guard(lock_guard const &) = delete;
    void operator=(lock_guard &) = delete;
private:
    mutex & m;
};

// Usage
{
    lock_guard lock(myMutex);
    shared_resource++;
} // mutex is unlocked here (even if an exception was thrown)
In modern C++, use std::lock_guard or std::unique_lock for this.
Using the RAII scope method is much better because it guarantees that the mutex will always be unlocked even in the face of exceptions or early return.
If you have access to C++11, though, you might consider using a std::atomic<int> instead, in which case you don't need a lock at all to increment it.
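A minimal sketch of that alternative (the counter name matches the snippets above; the function is hypothetical):
#include <atomic>

std::atomic<int> sharedResource{0};   // replaces the plain int plus mutex

void increment()
{
    ++sharedResource;   // atomic read-modify-write, no lock required
}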
In this case, no the mutex will not be unlocked when this code goes out of scope.
Mutex lockers following RAII use the fact that a destructor is automatically called when a non-heap allocated object goes out of scope. It then unlocks the mutex once the object that locked the mutex goes out of scope. In the case of your code, no object is allocated within the scope of the braces, so there is no potential for the mutex to be unlocked once the scope ends.
For example, using QMutexLocker from the Qt libraries, you can ensure that your mutex is unlocked when scope is ended:
{
    QMutexLocker locker(myMutex);
    if (checkSomething())
    {
        return;
    }
    doSomething();
}
This code is similar to:
{
    mutex_lock(myMutex);
    if (checkSomething())
    {
        mutex_unlock(myMutex);
        return;
    }
    doSomething();
    mutex_unlock(myMutex);
}
Although as Brian Neal points out, it does not safely handle the case where checkSomething() and doSomething() throw exceptions.
An alternative to Qt's QMutexLocker would be the standard library's std::lock_guard.
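A minimal sketch of the same pattern with std::lock_guard (assuming a std::mutex and the checkSomething()/doSomething() functions from the example above):
#include <mutex>

std::mutex myMutex;
bool checkSomething();
void doSomething();

void someFunc()
{
    std::lock_guard<std::mutex> locker(myMutex);
    if (checkSomething())
    {
        return;       // unlocked here
    }
    doSomething();
}                     // unlocked here too, even if doSomething() throws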

How does scope-locking work?

I'm learning C++ and I saw that the source code for a scope lock is quite simple. How does it work, and how is this an example of "Resource Acquisition Is Initialization" (RAII)?
Here is a little code sample that illustrates a scoped lock:
void do_something()
{
    // here in the constructor of scoped_lock, the mutex is locked,
    // and a reference to it is kept in the object `lock` for future use
    scoped_lock lock(shared_mutex_obj);

    // here goes the critical section code

} // <--- here: the object `lock` goes out of scope.
  // That means the destructor of scoped_lock will run,
  // and in the destructor the mutex is unlocked.
Read the comments. That explains how scoped_lock works.
And here is how scoped_lock is typically implemented (minimal code):
class scoped_lock : noncopyable
{
    mutex_impl & _mtx; // keep a reference to the mutex passed to the constructor
public:
    scoped_lock(mutex_impl & mtx) : _mtx(mtx)
    {
        _mtx.lock();   // lock the mutex in the constructor
    }
    ~scoped_lock()
    {
        _mtx.unlock(); // unlock the mutex in the destructor
    }
};
The idea of RAII (Resource Acquisition Is Initialisation) is that creating an object and initialising it are joined together into one inseparable action. This generally means they're performed in the object's constructor.
Scoped locks work by locking a mutex when they are constructed, and unlocking it when they are destructed. The C++ rules guarantee that when control flow leaves a scope (even via an exception), objects local to the scope being exited are destructed correctly. This means using a scoped lock instead of manually calling lock() and unlock() makes it impossible to accidentally not unlock the mutex, e.g. when an exception is thrown in the middle of the code between lock() and unlock().
This principle applies to all scenarios of acquiring resources which have to be released, not just to locking mutexes. It's good practice to provide such "scope guard" classes for other operations with similar syntax.
For example, I recently worked on a data structure class which normally sends signals when it's modified, but these have to be disabled for some bulk operations. Providing a scope guard class which disables them at construction and re-enables them at destruction prevents potential unbalanced calls to the disable/enable functions.
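A minimal sketch of such a guard for the signal-disabling scenario described above (the data structure and its disableSignals()/enableSignals() functions are hypothetical):
// Hypothetical data structure that emits signals when modified.
class DataStructure {
public:
    void disableSignals();
    void enableSignals();
    // ...
};

class SignalBlocker {
    DataStructure& ds_;
public:
    explicit SignalBlocker(DataStructure& ds) : ds_(ds) { ds_.disableSignals(); }
    ~SignalBlocker() { ds_.enableSignals(); }
    SignalBlocker(const SignalBlocker&) = delete;
    SignalBlocker& operator=(const SignalBlocker&) = delete;
};

void bulkOperation(DataStructure& ds)
{
    SignalBlocker guard(ds);  // signals disabled for the whole scope
    // ... perform many modifications without emitting signals ...
}                             // signals re-enabled here, even on an exception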
Basically it works like this:
template <class Lockable>
class lock {
public:
    lock(Lockable & m) : mtx(m) {
        mtx.lock();
    }
    ~lock() {
        mtx.unlock();
    }
private:
    Lockable & mtx;
};
If you use it like
int some_function_which_uses_mtx() {
    lock<std::mutex> lock(mtx);
    /* Work with a resource locked by the mutex */
    if (some_condition())
        return 1;
    if (some_other_condition())
        return 1;
    function_which_might_throw();
    return 0;
}
you create a new object with a scope-based lifetime. Whenever the current scope is left and this lock gets destroyed, it will automatically call mtx.unlock(). Note that in this particular example the lock on the mutex is acquired by the constructor of the lock object, which is RAII.
How would you do this without a scope guard? You would need to call mtx.unlock() whenever you leave the function. This is a) cumbersome and b) error-prone. Also, without a scope guard there is no way to release the mutex after a return statement.

Locking shared resources in constructor and destructor

I believe I've got a good handle on at least the basics of multi-threading in C++, but I've never been able to get a clear answer on locking a mutex around shared resources in the constructor or the destructor. I was under the impression that you should lock in both places, but recently coworkers have disagreed. Pretend the following class is accessed by multiple threads:
class TestClass
{
public:
    TestClass(const float input) :
        mMutex(),
        mValueOne(1),
        mValueTwo("Text")
    {
        //**Does the mutex need to be locked here?
        mValueTwo.Set(input);
        mValueOne = mValueTwo.Get();
    }
    ~TestClass()
    {
        //Lock Here?
    }
    int GetValueOne() const
    {
        Lock(mMutex);
        return mValueOne;
    }
    void SetValueOne(const int value)
    {
        Lock(mMutex);
        mValueOne = value;
    }
    CustomType GetValueTwo() const
    {
        Lock(mMutex);
        return mValueTwo;
    }
    void SetValueTwo(const CustomType type)
    {
        Lock(mMutex);
        mValueTwo = type;
    }
private:
    Mutex mMutex;
    int mValueOne;
    CustomType mValueTwo;
};
Of course everything should be safe through the initialization list, but what about the statements inside the constructor? In the destructor would it be beneficial to do a non-scoped lock, and never unlock (essentially just call pthread_mutex_destroy)?
Multiple threads cannot construct the same object, nor should any thread be allowed to use the object before it's fully constructed. So, in sane code, construction without locking is safe.
Destruction is a slightly harder case. But again, proper lifetime management of your object can ensure that an object is never destroyed when there's a chance that some thread(s) might still use it.
A shared pointer can help in achieving this, e.g. (a sketch follows the list):
construct the object in a certain thread
pass shared pointers to every thread that needs access to the object (including the thread that constructed it if needed)
the object will be destroyed when all threads have released the shared pointer
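A minimal sketch of that approach (the Widget type, the worker function, and the thread setup are hypothetical):
#include <memory>
#include <thread>

struct Widget { /* ... */ };                   // hypothetical shared object

void worker(std::shared_ptr<Widget> w)
{
    // ... use *w; the Widget is kept alive by this copy ...
}

int main()
{
    auto w = std::make_shared<Widget>();       // constructed in one thread
    std::thread t1(worker, w);                 // each thread holds its own copy
    std::thread t2(worker, w);
    w.reset();                                 // the creating thread drops its reference
    t1.join();
    t2.join();
}   // the Widget is destroyed exactly once, after the last copy is released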
But obviously, other valid approaches exist. The key is to keep proper boundaries between the three main stages of an object's lifetime : construction, usage and destruction. Never allow an overlap between any of these stages.
They don't have to be locked in the constructor, as the only way anyone external can get access to that data at that point is if you pass it out from the constructor itself (or invoke some undefined behaviour, like calling a virtual method).
[Edit: Removed part about destructor, since as a comment rightfully asserts, you have bigger issues if you're trying to access resources from an object which might be dead]