I'm learning about mutexes and threading right now. I was wondering if there's anything dangerous or inherently wrong with automating mutex locking and unlocking with a class like this:
class AutoMutex
{
private:
    std::mutex& m_Mutex;
public:
    AutoMutex(std::mutex& m) : m_Mutex(m)
    {
        m_Mutex.lock();
    }
    ~AutoMutex()
    {
        m_Mutex.unlock();
    }
};
And then, of course, you would use it like this:
void SomeThreadedFunc()
{
    AutoMutex m(Mutex); // With 'Mutex' being some global mutex.
    // Do stuff
}
The idea is that, on construction of an AutoMutex object, it locks the mutex. Then, when it goes out of scope, the destructor automatically unlocks it.
You could even just put it in scopes if you don't need it for an entire function. Like this:
void SomeFunc()
{
    // Do stuff
    {
        AutoMutex m(Mutex);
        // Do race condition stuff.
    }
    // Do other stuff
}
Is this okay? I don't personally see anything wrong with it, but as I'm not the most experienced, I feel there's something I may be missing.
It's safe to use a RAII wrapper, and in fact safer than calling the mutex member functions directly, but it's also unnecessary to write yourself, since the standard library already provides this. It's called std::lock_guard.
However, your implementation isn't entirely safe, because it's copyable, and a copy's destructor will unlock the mutex a second time, which is undefined behaviour. std::lock_guard resolves this issue by being non-copyable.
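To illustrate, here is a sketch of the fix: keeping your AutoMutex but deleting the copy operations, which is essentially what std::lock_guard does.

```cpp
#include <mutex>

class AutoMutex
{
private:
    std::mutex& m_Mutex;
public:
    explicit AutoMutex(std::mutex& m) : m_Mutex(m)
    {
        m_Mutex.lock();
    }
    ~AutoMutex()
    {
        m_Mutex.unlock();
    }
    // Deleting the copy operations makes the double-unlock impossible:
    // a copy can no longer exist, so only one destructor ever unlocks.
    AutoMutex(const AutoMutex&) = delete;
    AutoMutex& operator=(const AutoMutex&) = delete;
};
```

With this change, code like `AutoMutex copy = m;` fails to compile instead of causing undefined behaviour at runtime.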
There's also std::unique_lock, which is very similar but allows things such as releasing the lock during its lifetime. std::scoped_lock should be used if you need to lock multiple mutexes; locking them with multiple separate lock_guards can deadlock. std::scoped_lock is also fine to use with a single mutex, so you can replace all uses of lock_guard with it.
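As a short sketch (C++17, with hypothetical mutex names), std::scoped_lock locks any number of mutexes with a deadlock-avoidance algorithm, so two threads locking them in opposite orders cannot deadlock:

```cpp
#include <mutex>

std::mutex accountA, accountB;

void transfer()
{
    // Both mutexes are acquired as one operation; a thread locking
    // (accountB, accountA) elsewhere cannot deadlock against this one.
    std::scoped_lock lock(accountA, accountB);
    // ... modify data guarded by both mutexes ...
}   // both mutexes are unlocked when `lock` is destroyed
```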
I have the namespace below, whose func1 and func2 will be called from different threads.
#include <mutex>
#include <thread>

namespace test {
std::mutex mu;

void func1() {
    std::lock_guard<std::mutex> lock(mu);
    // the whole function needs to be protected
}

void func2() {
    mu.lock();
    // some code that should not be executed when func1 is executed
    mu.unlock();
    // some other code
}
}
Is it deadlock-safe to use this mutex (once with lock_guard and once without) to protect these critical sections? If not, how can I achieve this logic?
Yes, you can effectively mix and match different guard instances (e.g. lock_guard, unique_lock, etc.) with std::mutex in different functions. One case I run into occasionally is when I want to use std::lock_guard for most methods, but std::condition_variable expects a std::unique_lock for its wait method.
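For instance (a sketch with hypothetical names), std::condition_variable::wait needs a std::unique_lock because it must unlock the mutex while blocked and relock it before returning, while the notifying side can use a plain lock_guard:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex mu;
std::condition_variable cv;
bool ready = false;

void consumer()
{
    std::unique_lock<std::mutex> lck(mu);  // a lock_guard would not compile here
    cv.wait(lck, []{ return ready; });     // wait() unlocks mu while blocked,
                                           // then relocks it before returning
    // ready is guaranteed true here, and mu is held
}

void producer()
{
    {
        std::lock_guard<std::mutex> lck(mu);  // a plain lock_guard is fine here
        ready = true;
    }
    cv.notify_one();
}
```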
To elaborate on what Oblivion said, I typically introduce a new scope block within a function so that usage of std::lock_guard is consistent. Example:
void func2() {
    { // ENTER LOCK
        std::lock_guard<std::mutex> lck(mu);
        // some code that should not be executed when func1 is executed
    } // EXIT LOCK
    // some other (thread safe) code
}
The advantage of using the above pattern is that if anything throws an exception within the critical section under the lock, the destructor of lck will still be invoked and hence unlock the mutex.
All lock_guard does is guarantee an unlock on destruction. It's a convenience that gets the code right when a function can take multiple paths (think of exceptions!), not a necessity, and it builds on the regular lock() and unlock() functions. In summary, it is safe.
Deadlock happens when at least two mutexes are involved, or when a single mutex is never unlocked for whatever reason.
The only issue with the second function is that, in case of an exception, the lock will not be released.
You can simply use lock_guard, or anything else that gets destroyed (and unlocks the mutex in its destructor), to avoid such a scenario, as you did for the first function.
From what I have understood, std::unique_lock is a kind of wrapper around the underlying mutex object, providing a safer interface than using raw mutexes (e.g., the mutex ends up unlocked if an exception is thrown, or on destruction). Is this all std::unique_lock is for?
Try #1
std::mutex m; // global

void foo() {
    m.lock();
    // critical section
    m.unlock();
}
Try #2
std::mutex m; // global

void foo() {
    std::unique_lock<std::mutex> ul(m);
    // critical section
}
Is Try #2 preferred over Try #1, and is this what std::unique_lock is for? Please provide some other examples where std::unique_lock may be desired.
Yes, that's exactly what it's for, and why you should use it.
It does go a little beyond the simple example you gave; the time-related stuff in particular would be complex to implement on your own, but you could do it. Ultimately, though, yes, it's a wrapper.
From cppreference on unique_lock:
The class unique_lock is a general-purpose mutex ownership wrapper allowing deferred locking, time-constrained attempts at locking, recursive locking, transfer of lock ownership, and use with condition variables.
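For example, deferred locking (a sketch with hypothetical mutex names) lets you construct the lock objects without acquiring the mutexes, then lock several of them together with std::lock to avoid deadlock:

```cpp
#include <mutex>

std::mutex m1, m2;

void locked_together()
{
    std::unique_lock<std::mutex> l1(m1, std::defer_lock);  // constructed unlocked
    std::unique_lock<std::mutex> l2(m2, std::defer_lock);
    std::lock(l1, l2);  // lock both at once, with deadlock avoidance
    // ... critical section using both resources ...
}   // both unique_locks release their mutexes on destruction
```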
I have an object, and all its functions should be executed in sequential order.
I know it is possible to do that with a mutex, like this:
#include <mutex>

class myClass {
private:
    std::mutex mtx;
public:
    void exampleMethod();
};

void myClass::exampleMethod() {
    std::unique_lock<std::mutex> lck(mtx); // lock mutex until end of scope
    // do some stuff
}
but with this technique a deadlock occurs when exampleMethod calls another mutex-locked method of the same object.
So I'm searching for a better solution.
Default std::atomic access is sequentially consistent, so it's not possible to read and write this object at the same time. But when I access my object and call a method, is the whole function call also atomic, or is it more like:
object* obj = atomicObj.load(); // read atomic
obj->doSomething();             // call method on the non-atomic object
If yes, is there a better way than locking most of the functions with a mutex?
Stop and think about when you actually need to lock a mutex. If you have some helper function that is called within many other functions, it probably shouldn't try to lock the mutex, because the caller already will have.
If in some contexts it is not called by another member function, and so does need to take a lock, provide a wrapper function that actually does that. It is not uncommon to have 2 versions of member functions, a public foo() and a private fooNoLock(), where:
public:
    void foo() {
        std::lock_guard<std::mutex> l(mtx);
        fooNoLock();
    }
private:
    void fooNoLock() {
        // do stuff that operates on some shared resource...
    }
In my experience, recursive mutexes are a code smell that indicate the author hasn't really got their head around the way the functions are used - not always wrong, but when I see one I get suspicious.
As for atomic operations, they can really only be applied to small operations, say incrementing an integer or swapping two pointers. Such operations are not automatically atomic, but they are the sorts of things atomics can express. You certainly can't have any reasonable expectations about two separate operations on a single atomic object: anything could happen in between them.
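To sketch the distinction (function names here are hypothetical): each individual atomic operation is safe on its own, but a separate read followed by a write is not one atomic step; combining them requires a read-modify-write operation such as fetch_add or a compare_exchange loop.

```cpp
#include <atomic>

std::atomic<int> counter{0};

void safe_increment()
{
    counter.fetch_add(1);      // one atomic read-modify-write step
}

void racy_increment()
{
    int v = counter.load();    // two separate atomic operations:
    counter.store(v + 1);      // another thread can interleave between them
}

// A read-then-write that must behave as one step has to loop on
// compare_exchange, retrying until no other thread intervened.
void atomic_max(int candidate)
{
    int cur = counter.load();
    while (candidate > cur &&
           !counter.compare_exchange_weak(cur, candidate)) {
        // on failure, cur is refreshed with the latest value; try again
    }
}
```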
You could use a std::recursive_mutex instead. This will allow a thread that already owns the mutex to reacquire it without blocking. However, if another thread tries to acquire the lock, it will block.
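A minimal sketch of what that looks like for the class above (the `calls` counter is added here just to make the effect visible):

```cpp
#include <mutex>

class myClass {
    std::recursive_mutex mtx;
public:
    int calls = 0;

    void helper()
    {
        // Re-locking from the thread that already owns the mutex is
        // allowed with a recursive_mutex.
        std::lock_guard<std::recursive_mutex> lck(mtx);
        ++calls;
    }

    void exampleMethod()
    {
        std::lock_guard<std::recursive_mutex> lck(mtx);
        helper();  // would deadlock if mtx were a plain std::mutex
    }
};
```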
As #BoBTFish properly indicated, it is better to separate your class's public interface, whose member functions acquire a non-recursive lock, from the private methods which don't. Your code must then assume a lock is always held when a private method runs.
To be safe here, you may add a reference to std::unique_lock<std::mutex> to each method that requires the lock to be held.
Thus, even if you happen to call one private method from another, you would need to make sure the mutex is locked before execution:
class myClass
{
    std::mutex mtx;

    void i_exampleMethod(std::unique_lock<std::mutex> &)
    {
        // execute method
    }
public:
    void exampleMethod()
    {
        std::unique_lock<std::mutex> lock(mtx);
        i_exampleMethod(lock);
    }
};
Simple question - basically, do I have to unlock a mutex, or can I simply use the scope operators and the mutex will unlock automatically?
ie:
{
    pthread_mutex_lock (&myMutex);
    sharedResource++;
} // my mutex is now unlocked?

or should I:

{
    pthread_mutex_lock (&myMutex);
    sharedResource++;
    pthread_mutex_unlock (&myMutex);
}
The mutex is not going out of scope in your examples; and there is no way for the compiler to know that a particular function needs calling at the end of the scope, so the first example does not unlock the mutex.
If you are using (error-prone) functions to lock and unlock the mutex, then you will need to ensure that you always call unlock() - even if the protected operation throws an exception.
The best way to do this is to use a RAII class to manage the lock, as you would for any other resource that needs releasing after use:
class lock_guard {
public:
    explicit lock_guard(mutex & m) : m(m) { mutex_lock(m); }
    ~lock_guard() { mutex_unlock(m); }
    lock_guard(lock_guard const &) = delete;
    lock_guard & operator=(lock_guard const &) = delete;
private:
    mutex & m;
};
// Usage
{
lock_guard lock(myMutex);
shared_resource++;
} // mutex is unlocked here (even if an exception was thrown)
In modern C++, use std::lock_guard or std::unique_lock for this.
Using the RAII scope method is much better because it guarantees that the mutex will always be unlocked even in the face of exceptions or early return.
If you have access to C++11 though you might consider using a std::atomic<int> instead in which case you don't need to lock it to increment.
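A sketch of that alternative: a std::atomic<int> makes the increment itself a single atomic read-modify-write, so the mutex disappears entirely.

```cpp
#include <atomic>

std::atomic<int> sharedResource{0};

void increment()
{
    ++sharedResource;  // one atomic read-modify-write; no mutex needed
}
```

Two threads calling increment() concurrently will never lose an update, which is exactly the guarantee the mutex provided for this one operation.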
In this case, no, the mutex will not be unlocked when this code goes out of scope.
Mutex lockers following RAII use the fact that a destructor is automatically called when a non-heap allocated object goes out of scope. It then unlocks the mutex once the object that locked the mutex goes out of scope. In the case of your code, no object is allocated within the scope of the braces, so there is no potential for the mutex to be unlocked once the scope ends.
For example, using QMutexLocker from the Qt libraries, you can ensure that your mutex is unlocked when scope is ended:
{
    QMutexLocker locker(myMutex);
    if (checkSomething())
    {
        return;
    }
    doSomething();
}
This code is similar to:
{
    mutex_lock(myMutex);
    if (checkSomething())
    {
        mutex_unlock(myMutex);
        return;
    }
    doSomething();
    mutex_unlock(myMutex);
}
Although as Brian Neal points out, it does not safely handle the case where checkSomething() and doSomething() throw exceptions.
An alternative to Qt's QMutexLocker would be the standard library's std::lock_guard.
I'm learning C++ and I saw that the source code for a scoped lock is quite simple. How does it work, and how is this an example of "Resource Acquisition Is Initialization" (RAII)?
Here is a little code snippet that illustrates a scoped lock:
void do_something()
{
    //here in the constructor of scoped_lock, the mutex is locked,
    //and a reference to it is kept in the object `lock` for future use
    scoped_lock lock(shared_mutex_obj);
    //here goes the critical section code
}//<---here : the object `lock` goes out of scope
//that means, the destructor of scoped_lock will run.
//in the destructor, the mutex is unlocked.
Read the comments. That explains how scoped_lock works.
And here is how scoped_lock is typically implemented (minimal code):
class scoped_lock : noncopyable
{
    mutex_impl &_mtx; //keep ref to the mutex passed to the constructor
public:
    scoped_lock(mutex_impl & mtx) : _mtx(mtx)
    {
        _mtx.lock(); //lock the mutex in the constructor
    }
    ~scoped_lock()
    {
        _mtx.unlock(); //unlock the mutex in the destructor
    }
};
The idea of RAII (Resource Acquisition Is Initialisation) is that creating an object and initialising it are joined together into one inseparable action. This generally means they're performed in the object's constructor.
Scoped locks work by locking a mutex when they are constructed, and unlocking it when they are destructed. The C++ rules guarantee that when control flow leaves a scope (even via an exception), objects local to the scope being exited are destructed correctly. This means using a scoped lock instead of manually calling lock() and unlock() makes it impossible to accidentally not unlock the mutex, e.g. when an exception is thrown in the middle of the code between lock() and unlock().
This principle applies to all scenarios of acquiring resources which have to be released, not just to locking mutexes. It's good practice to provide such "scope guard" classes for other operations with similar syntax.
For example, I recently worked on a data structure class which normally sends signals when it's modified, but these have to be disabled for some bulk operations. Providing a scope guard class which disables them at construction and re-enables them at destruction prevents potential unbalanced calls to the disable/enable functions.
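A sketch of such a guard (DataStructure and its disableSignals()/enableSignals() members are hypothetical stand-ins for whatever signal machinery the real class has):

```cpp
// Minimal stand-in for a class that emits signals on modification.
struct DataStructure {
    bool signalsEnabled = true;
    void disableSignals() { signalsEnabled = false; }
    void enableSignals()  { signalsEnabled = true; }
    // ... container and signal machinery omitted ...
};

class SignalBlocker {
public:
    explicit SignalBlocker(DataStructure& d) : data(d) { data.disableSignals(); }
    ~SignalBlocker() { data.enableSignals(); }
    SignalBlocker(const SignalBlocker&) = delete;
    SignalBlocker& operator=(const SignalBlocker&) = delete;
private:
    DataStructure& data;
};

void bulkUpdate(DataStructure& d)
{
    SignalBlocker block(d);  // signals off for the whole bulk operation
    // ... many modifications without a signal per change ...
}   // signals re-enabled here, even if an exception is thrown
```

Exactly like a scoped lock, the pairing of disable/enable is enforced by the language's scope rules rather than by the programmer's discipline.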
Basically it works like this:
template <class Lockable>
class lock {
public:
    lock(Lockable & m) : mtx(m) {
        mtx.lock();
    }
    ~lock() {
        mtx.unlock();
    }
private:
    Lockable & mtx;
};
If you use it like
int some_function_which_uses_mtx() {
    lock<std::mutex> lock(mtx);
    /* Work with a resource locked by the mutex */
    if (some_condition())
        return 1;
    if (some_other_condition())
        return 1;
    function_which_might_throw();
    return 0;
}
you create a new object with a scope-based lifetime. Whenever the current scope is left and this lock gets destroyed, it will automatically call mtx.unlock(). Note that in this particular example the lock on the mutex is acquired by the constructor of the lock, which is RAII.
How would you do this without a scope guard? You would need to call mtx.unlock() on every path that leaves the function. This is (a) cumbersome and (b) error-prone. Also, without a scope guard there is no way to release the mutex after a return statement.