Simple question - basically, do I have to unlock a mutex explicitly, or can I simply rely on the closing brace of the scope so that the mutex unlocks automatically?
I.e.:
{
pthread_mutex_lock (&myMutex);
sharedResource++;
} // my mutex is now unlocked?
or should I:
{
pthread_mutex_lock (&myMutex);
sharedResource++;
pthread_mutex_unlock (&myMutex);
}
The mutex is not going out of scope in your examples, and there is no way for the compiler to know that a particular function needs to be called at the end of a scope, so the first example does not unlock the mutex.
If you are using plain lock and unlock functions like this (which is error-prone), then you will need to ensure that you always call unlock() - even if the protected operation throws an exception.
The best way to do this is to use a RAII class to manage the lock, as you would for any other resource that needs releasing after use:
class lock_guard {
public:
    explicit lock_guard(pthread_mutex_t & m) : m(m) { pthread_mutex_lock(&m); }   // lock on construction
    ~lock_guard() { pthread_mutex_unlock(&m); }                                    // unlock on destruction
    lock_guard(lock_guard const &) = delete;
    lock_guard & operator=(lock_guard const &) = delete;
private:
    pthread_mutex_t & m;
};
// Usage
{
lock_guard lock(myMutex);
shared_resource++;
} // mutex is unlocked here (even if an exception was thrown)
In modern C++, use std::lock_guard or std::unique_lock for this.
Using the RAII scope method is much better because it guarantees that the mutex will always be unlocked even in the face of exceptions or early return.
If you have access to C++11, though, you might consider using a std::atomic<int> instead, in which case you don't need a mutex at all to increment it.
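A minimal sketch of that alternative, assuming sharedResource is the counter from the question:

#include <atomic>

std::atomic<int> sharedResource{0};

void increment()
{
    ++sharedResource;   // atomic read-modify-write; no mutex required
}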
In this case, no, the mutex will not be unlocked when this code goes out of scope.
Mutex lockers following RAII rely on the fact that a destructor is automatically called when an object with automatic storage duration goes out of scope: the locker unlocks the mutex once the object that locked it is destroyed. In your code, no such object is created inside the braces, so there is nothing that could unlock the mutex when the scope ends.
For example, using QMutexLocker from the Qt libraries, you can ensure that your mutex is unlocked when the scope ends:
{
QMutexLocker locker(myMutex);
if(checkSomething())
{
return;
}
doSomething();
}
This code is similar to:
{
mutex_lock(myMutex);
if(checkSomething())
{
mutex_unlock(myMutex);
return;
}
doSomething();
mutex_unlock(myMutex);
}
Although as Brian Neal points out, it does not safely handle the case where checkSomething() and doSomething() throw exceptions.
An alternative to Qt's QMutexLocker would be the standard library's std::lock_guard.
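For comparison, a sketch of the same early-return pattern with std::lock_guard (this assumes myMutex is a std::mutex here rather than a QMutex):

{
    std::lock_guard<std::mutex> lock(myMutex);
    if (checkSomething())
    {
        return;   // lock's destructor unlocks the mutex on this early return
    }
    doSomething();
}   // ...and here on the normal path, or if either call throws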
In C++, lock_guard allows you to be RAII compliant when using locks. It calls lock() when constructing the lock_guard, and unlock() when destroying it once it goes out of scope.
Is it possible to tighten the scope of lock_guard such that it is destroyed sooner, to avoid keeping the lock for longer than necessary?
I'm not 100% sure what you mean, but you can introduce a block scope for the std::lock_guard with curly braces like this:
void foo()
{
// do uncritical stuff
{
// critical part starts here with construction
std::lock_guard<std::mutex> myLock(someMutex);
// do critical stuff
} // critical part ends here when myLock goes out of scope
// do uncritical stuff
}
A unique_lock can be used instead of lock_guard. unique_lock gives you the same RAII guarantees as lock_guard (it calls lock() on construction and unlock() on destruction), while also exposing lock() and unlock() so you can lock and unlock it yourself.
With a unique_lock you can lock() before the critical path, and unlock() straight after.
unique_lock will only call unlock() in the destructor if the lock was previously acquired, therefore there's no risk of releasing the lock twice (which causes undefined behavior).
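A sketch of that pattern (the mutex and function names are just placeholders):

#include <mutex>

std::mutex m;

void worker()
{
    std::unique_lock<std::mutex> lk(m);   // locked on construction
    // ... first critical part ...
    lk.unlock();                          // release straight after the critical path

    // ... uncritical work, mutex not held ...

    lk.lock();                            // re-acquire for a second critical part
    // ... second critical part ...
}   // destructor unlocks only because lk currently owns the lock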
I have the namespace below, in which func1 and func2 will be called from different threads.
#include <mutex>
#include <thread>

namespace test {
    std::mutex mu;

    void func1() {
        std::lock_guard<std::mutex> lock(mu);
        // the whole function needs to be protected
    }

    void func2() {
        mu.lock();
        // some code that should not be executed while func1 is executing
        mu.unlock();
        // some other code
    }
}
Is it deadlock-safe to use this mutex (once with lock_guard and once without it) to protect these critical sections? If not, how can I achieve this logic?
Yes, you can effectively mix and match different guard instances (e.g. lock_guard, unique_lock, etc.) with std::mutex in different functions. One case I run into occasionally is when I want to use std::lock_guard for most methods, but std::condition_variable expects a std::unique_lock for its wait method.
To elaborate on what Oblivion said, I typically introduce a new scope block within a function so that usage of std::lock_guard is consistent. Example:
void func2() {
    { // ENTER LOCK
        std::lock_guard<std::mutex> lck(mu);
        // some code that should not be executed when func1 is executed
    } // EXIT LOCK
    // some other (thread safe) code
}
The advantage of using the above pattern is that if anything throws an exception within the critical section of code that is under the lock, the destructor of lck will still be invoked, unlocking the mutex.
All the lock_guard does is guarantee an unlock on destruction. It's a convenience for getting the code right when a function can take multiple paths (think of exceptions!), not a necessity. It also builds on the "regular" lock() and unlock() functions. In summary, it is safe.
Deadlock happens when at least two mutexes are involved, or when a single mutex is never unlocked for whatever reason.
The only issue with the second function is that, in case of an exception, the lock will not be released.
You can simply use lock_guard, or anything else that is destroyed at scope exit (and unlocks the mutex in its destructor), to avoid such a scenario, as you did in the first function.
I was using the following wait/signal approach to let threads inform each other.
std::condition_variable condBiz;
std::mutex mutexBar;
..
void Foo::wait()
{
std::unique_lock<std::mutex> waitPoint(mutexBar);
if (waitPoint.owns_lock())
{
condBiz.wait(waitPoint);
}
}
void Foo::signal()
{
std::unique_lock<std::mutex> waitPoint(mutexBar);
condBiz.notify_all();
}
void Foo::safeSection(std::function<void(void)> & f)
{
std::unique_lock<std::mutex> waitPoint(mutexBar);
f();
}
Then I converted the lock/unlock mechanism from unique_lock to lock_guard, because I'm not returning the unique_lock to use somewhere else (other than in wait/signal) and lock_guard is said to have less overhead:
void Foo::safeSection(std::function<void(void)> & f)
{
std::lock_guard<std::mutex> waitPoint(mutexBar); // same mutex object
f();
}
and it works.
Does this work on all platforms, or does it just happen to work on my current platform? Can unique_lock and lock_guard work with each other using the same mutex object?
Both std::unique_lock and std::lock_guard lock the associated mutex in the constructor and unlock it in the destructor.
std::unique_lock:
Member functions
(constructor) constructs a unique_lock, optionally locking the supplied mutex
(destructor) unlocks the associated mutex, if owned
and the same for std::lock_guard:
Member functions
(constructor) constructs a lock_guard, optionally locking the given mutex
(destructor) destructs the lock_guard object, unlocks the underlying mutex
Since both behave the same, when used as a RAII style wrapper, I see no obstacle to use them together, even with the same mutex.
It has been pointed out in the comments to your post that checking if the unique_lock is owned in Foo::wait() is pointless, because the associated mutex must be owned by the lock at that point in order for the thread to be proceeding.
Instead, your condition variable should check some meaningful condition, either in a while loop or by using the overload of condition_variable::wait that takes a predicate as its second argument; the C++ standard requires that overload to behave as:
while (!pred()) wait(lock);
The reason for checking the predicate in a while loop is that, apart from the fact that the condition may already be satisfied so no wait is necessary, the condition variable may spuriously wake up even when not signalled to do so.
Apart from that there is no reason why the signalling thread should not use a lock_guard with respect to the associated mutex. But I am not clear what you are trying to do.
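A sketch of how the wait/signal pair could look with a predicate - dataReady is an assumed flag standing in for whatever real condition you are waiting on, and the member functions from the question are shown as free functions to keep the sketch self-contained:

#include <condition_variable>
#include <mutex>

std::mutex mutexBar;
std::condition_variable condBiz;
bool dataReady = false;   // the "meaningful condition" (assumption for this sketch)

void wait_for_data()
{
    std::unique_lock<std::mutex> lk(mutexBar);
    condBiz.wait(lk, []{ return dataReady; });   // loops internally; handles spurious wakeups
    // ... proceed, holding the lock, with dataReady == true ...
}

void signal_data()
{
    {
        std::lock_guard<std::mutex> lk(mutexBar);   // the signalling side can use lock_guard
        dataReady = true;                           // change the condition under the lock
    }
    condBiz.notify_all();
}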
I have two use cases.
A. I want to synchronise access to a queue for two threads.
B. I want to synchronise access to a queue for two threads and use a condition variable because one of the threads will wait on content to be stored into the queue by the other thread.
For use case A I see code example using std::lock_guard<>. For use case B I see code example using std::unique_lock<>.
What is the difference between the two and which one should I use in which use case?
The difference is that you can lock and unlock a std::unique_lock. std::lock_guard will be locked only once on construction and unlocked on destruction.
So for use case B you definitely need a std::unique_lock for the condition variable. In case A it depends whether you need to relock the guard.
std::unique_lock has other features, e.g. it can be constructed without locking the mutex immediately while still building the RAII wrapper.
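For instance, a sketch of deferred locking, which is also one way to lock two mutexes together without deadlock (the mutex and function names are assumptions):

#include <mutex>

std::mutex mutexA, mutexB;

void transfer()
{
    std::unique_lock<std::mutex> lockA(mutexA, std::defer_lock);   // constructed, not yet locked
    std::unique_lock<std::mutex> lockB(mutexB, std::defer_lock);
    std::lock(lockA, lockB);   // locks both without risking deadlock
    // ... work on data protected by both mutexes ...
}   // both unlocked when lockA and lockB are destroyed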
std::lock_guard also provides a convenient RAII wrapper, but cannot lock multiple mutexes safely. It can be used when you need a wrapper for a limited scope, e.g.: a member function:
class MyClass{
std::mutex my_mutex;
void member_foo() {
std::lock_guard<std::mutex> lock(this->my_mutex);
/*
block of code which needs mutual exclusion (e.g. open the same
file in multiple threads).
*/
//mutex is automatically released when lock goes out of scope
}
};
To clarify a question from chmike: used in the default way (lock on construction, unlock on destruction), std::lock_guard and std::unique_lock behave the same.
So in the above case, you could replace std::lock_guard with std::unique_lock. However, std::unique_lock might have a tad more overhead.
Note that these days (since C++17) one should use std::scoped_lock instead of std::lock_guard.
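A minimal sketch of that C++17 variant (the mutex and function names are assumptions); unlike std::lock_guard, it can also lock several mutexes at once without deadlock:

#include <mutex>

std::mutex m1, m2;

void update_both()
{
    std::scoped_lock lock(m1, m2);   // locks both deadlock-free; unlocks in the destructor
    // ... critical section touching both shared resources ...
}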
lock_guard and unique_lock are pretty much the same thing; lock_guard is a restricted version with a limited interface.
A lock_guard always holds a lock from its construction to its destruction. A unique_lock can be created without immediately locking, can unlock at any point in its existence, and can transfer ownership of the lock from one instance to another.
So you always use lock_guard, unless you need the capabilities of unique_lock. A condition_variable needs a unique_lock.
Use lock_guard unless you need to be able to manually unlock the mutex in between without destroying the lock.
In particular, condition_variable unlocks its mutex when going to sleep upon calls to wait. That is why a lock_guard is not sufficient here.
If you're already on C++17 or later, consider using scoped_lock as a slightly improved version of lock_guard, with the same essential capabilities.
There are certain things lock_guard and unique_lock have in common, and certain differences.
But in the context of the question asked, a lock_guard cannot be used in combination with a condition variable (the code will not compile): when a thread calls wait on a condition variable, the mutex is unlocked automatically, and when another thread notifies and the waiting thread resumes (comes out of wait), the lock is re-acquired.
This behaviour is against the design of lock_guard, which can only lock once (on construction) and unlock once (on destruction).
Hence a lock_guard cannot be used with a condition variable, but a unique_lock can be (because a unique_lock can be locked and unlocked several times).
One missing difference is:
std::unique_lock can be moved, but std::lock_guard can't be moved.
Note: neither can be copied.
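A sketch of what movability buys you, e.g. returning a lock from a helper that acquires it (acquire_lock and the mutex name are assumptions):

#include <mutex>

std::mutex m;

std::unique_lock<std::mutex> acquire_lock()
{
    std::unique_lock<std::mutex> lk(m);
    // ... perhaps check some invariant while holding the lock ...
    return lk;   // ownership of the lock is moved to the caller
}

void caller()
{
    std::unique_lock<std::mutex> lk = acquire_lock();   // still locked here
    // ... critical section ...
}   // unlocked when lk is destroyed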
They are not really mutexes themselves; both are lock wrappers around a mutex, and their lifetime ends at the end of the scope (when the destructor is called). A clearer description of the two:
lock_guard<muType> provides a simple RAII-style mechanism for owning a mutex for the duration of a scoped block.
And
unique_lock<muType> is a wrapper allowing deferred locking, time-constrained attempts at locking, recursive locking, transfer of lock ownership, and use with condition variables.
Here is an example using both:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>

using namespace std::chrono;

class Product {
public:
    Product(int data) : mdata(data) {
    }
    virtual ~Product() {
    }
    bool isReady() {
        return flag;
    }
    void showData() {
        std::cout << mdata << std::endl;
    }
    void read() {
        std::this_thread::sleep_for(milliseconds(2000));
        std::lock_guard<std::mutex> guard(mmutex);
        flag = true;
        std::cout << "Data is ready" << std::endl;
        cvar.notify_one();
    }
    void task() {
        std::unique_lock<std::mutex> lock(mmutex);
        cvar.wait(lock, [this] { return isReady(); });
        mdata += 1;
    }
protected:
    std::condition_variable cvar;
    std::mutex mmutex;
    int mdata;
    bool flag = false;
};

int main() {
    int a = 0;
    Product product(a);
    std::thread reading(&Product::read, &product);
    std::thread setting(&Product::task, &product);
    reading.join();
    setting.join();
    product.showData();
    return 0;
}
In this example, I used a unique_lock<std::mutex> together with the condition variable.
As has been mentioned by others, std::unique_lock tracks the locked status of the mutex, so you can defer locking until after construction of the lock, and unlock before destruction of the lock. std::lock_guard does not permit this.
There seems to be no semantic reason why the std::condition_variable wait functions could not accept a lock_guard as well as a unique_lock, because whenever a wait ends (for whatever reason) the mutex is automatically reacquired. However, the standard only provides std::condition_variable::wait for std::unique_lock<std::mutex>; if you want to wait using some other lockable type, you have to use std::condition_variable_any instead of std::condition_variable.
Edit: deleted "Using the pthreads interface std::condition_variable and std::condition_variable_any should be identical". On looking at gcc's implementation:
std::condition_variable::wait(std::unique_lock&) just calls pthread_cond_wait() on the underlying pthread condition variable with respect to the mutex held by unique_lock (and so could equally do the same for lock_guard, but doesn't because the standard doesn't provide for that)
std::condition_variable_any can work with any lockable object, including one which is not a mutex lock at all (it could therefore even work with an inter-process semaphore)
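For instance, a hedged sketch of std::condition_variable_any waiting through a std::shared_lock over a std::shared_mutex (C++17; all names here are assumptions):

#include <condition_variable>
#include <shared_mutex>

std::shared_mutex sm;
std::condition_variable_any cv;
bool ready = false;

void reader()
{
    std::shared_lock<std::shared_mutex> lk(sm);   // shared_lock has lock()/unlock() members,
    cv.wait(lk, []{ return ready; });             // so condition_variable_any accepts it
    // ... read the shared state ...
}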
I'm learning C++ and I saw that the source code for a scoped lock is quite simple. How does it work, and how is it an example of "Resource Acquisition Is Initialization" (RAII)?
Here is the little code that illustrates scoped lock:
void do_something()
{
//here in the constructor of scoped_lock, the mutex is locked,
//and a reference to it is kept in the object `lock` for future use
scoped_lock lock(shared_mutex_obj);
//here goes the critical section code
}//<---here : the object `lock` goes out of scope
//that means, the destructor of scoped_lock will run.
//in the destructor, the mutex is unlocked.
Read the comments. That explains how scoped_lock works.
And here is how scoped_lock is typically implemented (minimal code):
class scoped_lock : noncopyable
{
mutex_impl &_mtx; //keep ref to the mutex passed to the constructor
public:
scoped_lock(mutex_impl & mtx ) : _mtx(mtx)
{
_mtx.lock(); //lock the mutex in the constructor
}
~scoped_lock()
{
_mtx.unlock(); //unlock the mutex in the destructor
}
};
The idea of RAII (Resource Acquisition Is Initialisation) is that creating an object and initialising it are joined together into one inseparable action. This generally means they're performed in the object's constructor.
Scoped locks work by locking a mutex when they are constructed, and unlocking it when they are destructed. The C++ rules guarantee that when control flow leaves a scope (even via an exception), objects local to the scope being exited are destructed correctly. This means using a scoped lock instead of manually calling lock() and unlock() makes it impossible to accidentally not unlock the mutex, e.g. when an exception is thrown in the middle of the code between lock() and unlock().
This principle applies to all scenarios of acquiring resources which have to be released, not just to locking mutexes. It's good practice to provide such "scope guard" classes for other operations with similar syntax.
For example, I recently worked on a data structure class which normally sends signals when it's modified, but these have to be disabled for some bulk operations. Providing a scope guard class which disables them at construction and re-enables them at destruction prevents potential unbalanced calls to the disable/enable functions.
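A minimal sketch of such a scope guard; DataStructure, disable_signals() and enable_signals() are made-up names standing in for the real class:

struct DataStructure   // hypothetical class from the description above
{
    void disable_signals() { signals_enabled = false; }
    void enable_signals()  { signals_enabled = true; }
    bool signals_enabled = true;
};

class signal_blocker   // scope guard: signals are disabled for its lifetime
{
public:
    explicit signal_blocker(DataStructure & d) : ds(d) { ds.disable_signals(); }
    ~signal_blocker() { ds.enable_signals(); }
    signal_blocker(signal_blocker const &) = delete;
    signal_blocker & operator=(signal_blocker const &) = delete;
private:
    DataStructure & ds;
};

// Usage:
// {
//     signal_blocker block(myStructure);   // signals off
//     ... bulk modifications ...
// }   // signals back on here, even if a modification throws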
Basically it works like this:
template <class Lockable>
class lock{
public:
lock(Lockable & m) : mtx(m){
mtx.lock();
}
~lock(){
mtx.unlock();
}
private:
Lockable & mtx;
};
If you use it like
int some_function_which_uses_mtx(){
    lock<std::mutex> lock(mtx);
    /* Work with a resource locked by mutex */
    if( some_condition())
        return 1;
    if( some_other_condition())
        return 1;
    function_which_might_throw();
    return 0;
}
you create a new object with a scope-based lifetime. Whenever the current scope is left and this lock gets destroyed, it will automatically call mtx.unlock(). Note that in this particular example the lock on the mutex is acquired by the constructor of the lock, which is RAII.
How would you do this without a scope guard? You would need to call mtx.unlock() whenever you leave the function. This is a) cumbersome and b) error-prone. Also, there is no way to release the mutex after a return statement without a scope guard.
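For contrast, a sketch of the same function written without the guard (using the same assumed helpers as above); every exit path has to remember to unlock, and the call that might throw still leaks the lock:

int some_function_which_uses_mtx(){
    mtx.lock();
    /* Work with a resource locked by mutex */
    if( some_condition()){
        mtx.unlock();        // must not be forgotten...
        return 1;
    }
    if( some_other_condition()){
        mtx.unlock();        // ...on any return path
        return 1;
    }
    function_which_might_throw();   // if this throws, the mutex stays locked
    mtx.unlock();
    return 0;
}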