https://en.cppreference.com/w/cpp/thread/lock_guard
(constructor)
constructs a lock_guard, optionally locking the given mutex
What would be the way to avoid locking if it is optional?
This is one way to avoid having the lock_guard constructor lock the given mutex:
std::mutex mtx;
mtx.lock();
std::lock_guard<std::mutex> lck(mtx, std::adopt_lock);
The intention is to allow your lock_guard to take ownership of a mutex that you already locked.
From: https://en.cppreference.com/w/cpp/thread/lock_guard/lock_guard
explicit lock_guard( mutex_type& m ); (1) (since C++11)
lock_guard( mutex_type& m, std::adopt_lock_t t ); (2) (since C++11)
lock_guard( const lock_guard& ) = delete; (3) (since C++11)
Acquires ownership of the given mutex m.
1) Effectively calls m.lock(). The behavior is undefined if m is not a recursive mutex and the current thread already owns m.
2) Acquires ownership of the mutex m without attempting to lock it.
The behavior is undefined if the current thread does not own m.
3) Copy constructor is deleted.
The behavior is undefined if m is destroyed before the lock_guard object is.
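For a fuller picture, here is a minimal sketch of where constructor (2) is typically useful; the worker function and the use of try_lock are illustrative assumptions, not part of the question:

#include <mutex>

std::mutex mtx;

void worker()
{
    if (mtx.try_lock())                                         // this thread now owns mtx
    {
        std::lock_guard<std::mutex> lck(mtx, std::adopt_lock);  // adopt it, no second lock() call
        // ... critical section ...
    }                                                           // lck's destructor unlocks mtx
}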
Related
In the book "Concurrency in Action", there's an implementation of a thread-safe stack where the mutex is acquired/locked upon entering the pop() and empty() functions, as shown below:
template<typename T>
class threadsafe_stack {
private:
    std::stack<T> data;
    mutable std::mutex m;
public:
    //...
    void pop(T& value) {
        std::lock_guard<std::mutex> lock(m);
        if(data.empty()) throw empty_stack();
        value = std::move(data.top());
        data.pop();
    }
    bool empty() const {
        std::lock_guard<std::mutex> lock(m);
        return data.empty();
    }
};
My question is: how does this code not run into a deadlock when a thread that has acquired the lock upon entering pop() calls empty(), which is protected by the mutex as well? If lock() is called by a thread that already owns the mutex, isn't that undefined behavior?
how does this code not run into a deadlock when a thread that has acquired the lock upon entering pop() calls empty(), which is protected by the mutex as well?
Because you are not calling the empty() member function of threadsafe_stack; you are calling empty() of std::stack<T>. If the code were:
void pop(T& value)
{
    std::lock_guard<std::mutex> lock(m);
    if(empty()) // instead of data.empty()
        throw empty_stack();
    value = std::move(data.top());
    data.pop();
}
Then, it would be undefined behavior:
If lock is called by a thread that already owns the mutex, the behavior is undefined: for example, the program may deadlock. An implementation that can detect the invalid usage is encouraged to throw a std::system_error with error condition resource_deadlock_would_occur instead of deadlocking.
Learn about recursive and shared mutexes.
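For illustration only, a minimal sketch (not from the book; the recursive_stack name and the plain std::runtime_error are assumptions) of how a std::recursive_mutex would make a this->empty() call from pop() legal:

#include <mutex>
#include <stack>
#include <stdexcept>
#include <utility>

template<typename T>
class recursive_stack {
    std::stack<T> data;
    mutable std::recursive_mutex m;                    // recursive: the owning thread may relock
public:
    bool empty() const {
        std::lock_guard<std::recursive_mutex> lock(m);
        return data.empty();
    }
    void pop(T& value) {
        std::lock_guard<std::recursive_mutex> lock(m);
        if (empty())                                   // relocks m in the same thread: OK here
            throw std::runtime_error("empty stack");
        value = std::move(data.top());
        data.pop();
    }
};

In the book's version the mutex stays non-recursive and pop() calls data.empty() directly, which avoids the issue altogether.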
I'm not 100% sure what you mean; I guess you mean calling pop and empty sequentially in the same thread, like in
while(!x.empty()) x.pop();
std::lock_guard follows RAII. This means the constructor
std::lock_guard<std::mutex> lock(m);
will acquire/lock the mutex, and the destructor (when lock goes out of scope) will release/unlock the mutex again. So the mutex is already unlocked again by the time the next function is called.
Inside pop only data.empty() is called, which does not itself lock the mutex. Calling this->empty() inside pop would indeed result in undefined behaviour.
You would be correct if pop called this->empty(). Locking the same mutex twice via a std::lock_guard is undefined behavior unless the locked mutex is a recursive one.
From cppreference on the constructor (the one that is used in the example code):
Effectively calls m.lock(). The behavior is undefined if m is not a recursive mutex and the current thread already owns m.
For the sake of completeness, there is a second constructor:
lock_guard( mutex_type& m, std::adopt_lock_t t );
which
Acquires ownership of the mutex m without attempting to lock it. The behavior is undefined if the current thread does not own m.
However, pop calls data.empty(), which is a member function of the private data member, not the empty() member function of threadsafe_stack. There is no problem in the code.
I'm trying to understand std::mutex, std::lock_guard, std::unique_lock better when working with a class.
For starters, I sort of know the difference between lock_guard and unique_lock: I know that lock_guard only locks the mutex on construction, which is the preferred use inside a class member function, like this:
class Foo {
    std::mutex myMutex;
public:
    void someFunc() {
        std::lock_guard<std::mutex> guard( myMutex );
        // code
    }
};
As in the above, the lock_guard will lock the class's member myMutex at the beginning of the scope of the function Foo::someFunc() and unlock it again when the code leaves that scope, thanks to the lock_guard's destructor.
I also understand that unique_lock allows you to lock & unlock a mutex multiple times.
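As a small aside, a minimal sketch of that unique_lock flexibility (the demo function is only an illustration):

#include <mutex>

std::mutex m;

void demo() {
    std::unique_lock<std::mutex> lk(m);   // locked on construction
    lk.unlock();                          // release early
    // ... work that does not need the mutex ...
    lk.lock();                            // re-acquire the same mutex
}                                         // destructor unlocks, but only if currently owned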
My question is about the design of a class where I want the mutex to be locked in the constructor of the class, then not have it unlock when the constructor goes out of scope, but instead unlock when the destructor of the class is called:
class Foo {
    std::mutex myMutex;
public:
    Foo() {
        // lock mutex here;
    }
    ~Foo() {
        // unlock mutex here;
    }
};
Can the above be achieved, and if so, how?
In the last example above, what I'm not sure about is: if a lock_guard is used in the constructor of the class, will it go out of scope when the constructor body ends, or when the class's destructor is invoked as the object goes out of scope? I'd like to mimic the desired behavior of the second example shown.
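To the embedded question: a lock_guard declared as a local variable inside the constructor is destroyed when the constructor body ends, not when the object is destroyed. One hedged sketch of getting the lifetime-long locking of the second example is to store the lock as a class member; this is an assumption about one possible design, not the approach the poster ultimately chose:

#include <mutex>

class Foo {
    std::mutex myMutex;                      // declared first, destroyed last
    std::unique_lock<std::mutex> myLock;     // member, so it lives as long as the object
public:
    Foo() : myLock(myMutex) {
        // myMutex is locked here and stays locked for the lifetime of *this
    }
    ~Foo() {
        // myLock is destroyed as part of destroying Foo, unlocking myMutex
    }
};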
After I posted this question, I did a little more research and some trial and error. With that, I opted for a different implementation and solution.
Instead of what I was originally proposing, I ended up using std::shared_mutex and std::shared_lock.
So in my class's header I'm not saving or storing any mutex. In my class's cpp file I'm using a static global mutex that is shared by all instances.
So my class now looks like this:
Foo.cpp
#include "Foo.h"
#include <mutex>
std::mutex g_mutex;
Foo::Foo() {
// code not locked
{ // scope of guard
std::lock_guard<std::mutex> lock( g_mutex );
// code to lock
} // end scope destroy guard unlock mutex
// other class code
}
Foo::someFunc() {
// Code to lock
std::lock_guard<std::mutex> lock( g_mutex );
}
I discovered why it wasn't working properly for me. My class's constructor was calling a function from its parent or base class. The base class was then calling a static member function of this class, and that static member function was also using a lock_guard on the same mutex.
After I found the problem, I had two options. I could have used two independent mutexes, one for the constructor specifically and one for the static method specifically. After some thought I figured that if I used two, and I blocked the full constructor, the lock and mutex would not go out of scope until the class instance was destroyed; however, the lifetime of the class is nearly the full lifetime of the application. And if I used a second mutex and lock_guard within the static method, it would be redundant to wrap another lock_guard around an existing one.
So I came to the conclusion that I needed to create a scoped block { } for the code that needs to be protected by the mutex, so that the mutex is unlocked when that section goes out of scope. The constructor is then free to call the static method, which can reuse the same mutex since it is now free. The class is now working properly and is not crashing or throwing exceptions when it shouldn't be.
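A hedged sketch of the situation described above (the name staticHelper and the exact layout around g_mutex are invented for illustration): holding the lock across the whole constructor while calling a static method that locks the same mutex is undefined behavior and typically deadlocks, whereas the scoped block releases the mutex first.

#include <mutex>

static std::mutex g_mutex;

struct Foo {
    static void staticHelper() {
        std::lock_guard<std::mutex> lock(g_mutex);   // locks the shared mutex
        // ...
    }

    Foo() {
        // BAD: if a guard covered the whole constructor body, the call to
        // staticHelper() below would try to lock g_mutex a second time.
        {   // GOOD: limit the guard to the code that actually needs protection
            std::lock_guard<std::mutex> lock(g_mutex);
            // ... protected set-up ...
        }   // g_mutex released here
        staticHelper();                              // safe: the mutex is free again
    }
};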
I am reading Effective C++. In Item 14, "Think carefully about copying behavior in resource-managing classes", there is an example:
class Lock {
public:
    explicit Lock(Mutex* pm) : mutexPtr(pm) {
        lock(mutexPtr);
    }
    ~Lock() {
        unlock(mutexPtr);
    }
private:
    Mutex *mutexPtr;
};
It points out that if we construct the Lock as above, there will be a problem if we run the code below:
Mutex m;
Lock ml1(&m);
Lock ml2(ml1);
I think it may be because the code runs like this:
// ml1 is constructed
lock(m)
// ml2 is copy-constructed; ml1.mutexPtr and ml2.mutexPtr both point to m
ml2.mutexPtr = ml1.mutexPtr
// ml1 is destroyed
unlock(m)
// ml2 is destroyed
unlock(m)
So m will be unlocked twice. Is that the real reason for the problem described above? Thanks!
Yes, that is why the author is saying to be careful. If you use a recursive_mutex instead of a plain mutex, you can simply lock in the copy constructor and copy-assignment operator, but if it's a non-recursive mutex it would likely be better to make the lock type non-copyable.
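A hedged sketch of the "non-copyable" option, reusing the book's Mutex/lock/unlock pseudo-API from the question: deleting the copy operations turns Lock ml2(ml1); into a compile error instead of a double unlock.

class Lock {
public:
    explicit Lock(Mutex* pm) : mutexPtr(pm) { lock(mutexPtr); }
    ~Lock() { unlock(mutexPtr); }

    Lock(const Lock&) = delete;             // Lock ml2(ml1); no longer compiles
    Lock& operator=(const Lock&) = delete;
private:
    Mutex* mutexPtr;
};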
I have the following C++(11) code:
#include <mutex>
#include <stdexcept>

void unlock(std::unique_lock<std::mutex> && ulock)
{
}

int main(void)
{
    std::mutex m;
    std::unique_lock<std::mutex> ulock(m);
    unlock(std::move(ulock));

    if (ulock.mutex() == &m || ulock.owns_lock())
    {
        throw std::runtime_error("");
    }

    return 0;
}
What I can't figure out is why the mutex is still held after the return from unlock(). My expectation is that the std::move() causes the lock to go out of scope (and become unlocked by the destructor) upon return from the call to unlock(). At the very least, it seems like the std::move() should have caused ulock to become "unbound" from the mutex m.
What am I missing?
void unlock(std::unique_lock<std::mutex> && ulock)
Here ulock is a reference. A special kind of reference, but still a reference. It is just an alias for another object. Its creation does not involve creation of a new object, or any kind of ownership transfer. Likewise, end of its lifetime does not lead to any destructor call, it just means that you lost the alias for referring to some other object (not that it matters, since the function is ending anyway).
If you want to transfer ownership, you need an object, so pass by value instead of by reference:
void unlock(std::unique_lock<std::mutex> ulock)
Now, you will have to move the original lock, since std::unique_lock does not support copy construction, only move construction.
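For completeness, a minimal sketch of the by-value version (the asserts are just one way to observe the effect): the parameter is now a real std::unique_lock object, so its destructor releases the mutex when unlock() returns, and the moved-from ulock in main() no longer refers to m.

#include <cassert>
#include <mutex>

void unlock(std::unique_lock<std::mutex> ulock)   // by value: takes ownership
{
}   // ulock is destroyed here, releasing the mutex

int main()
{
    std::mutex m;
    std::unique_lock<std::mutex> ulock(m);
    unlock(std::move(ulock));                     // the explicit move is still required

    assert(ulock.mutex() == nullptr);             // unbound after the move
    assert(!ulock.owns_lock());
    return 0;
}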
The straightforward way to make a class thread-safe is to add a mutex attribute and lock the mutex in the accessor methods:
class cMyClass {
    boost::mutex myMutex;
    cSomeClass A;
public:
    cSomeClass getA() {
        boost::mutex::scoped_lock lock( myMutex );
        return A;
    }
};
The problem is that this makes the class non-copyable.
I can make things work by making the mutex a static. However, this means that every instance of the class blocks when any other instance is being accessed, because they all share the same mutex.
I wonder if there is a better way?
My conclusion is that there is no better way. Making a class thread-safe with a private static mutex attribute is 'best': it is simple, it works, and it hides the awkward details.
class cMyClass {
    static boost::mutex myMutex;
    cSomeClass A;
public:
    cSomeClass getA() {
        boost::mutex::scoped_lock lock( myMutex );
        return A;
    }
};
The disadvantage is that all instances of the class share the same mutex and so block each other unnecessarily. This cannot be cured by making the mutex attribute non-static ( so giving each instance its own mutex ) because the complexities of copying and assignment are nightmarish, if done properly.
The individual mutexes, if required, must be managed by an external non-copyable singleton with links established to each instance when created.
Thanks for all the responses.
Several people have mentioned writing my own copy constructor and assignment operator. I tried this. The problem is that my real class has many attributes which are always changing during development. Maintaining both the copy constructor and the assignment operator is tedious and error-prone, with errors creating hard-to-find bugs. Letting the compiler generate these for a complex class is an enormous time saver and bug reducer.
Many responses are concerned about making the copy constructor and assignment operator thread-safe. This requirement adds even more complexity to the whole thing! Luckily for me, I do not need it since all the copying is done during set-up in a single thread.
I now think that the best approach would be to build a tiny class to hold just a mutex and the critical attributes. Then I can write a small copy constructor and assignment operator for the critical class and leave the compiler to look after all the other attributes in the main class.
class cSafe {
    boost::mutex myMutex;
    cSomeClass A;
public:
    cSomeClass getA() {
        boost::mutex::scoped_lock lock( myMutex );
        return A;
    }
    // (copy constructor)
    // (assignment op)
};

class cMyClass {
    cSafe S;
    // ... other attributes ...
public:
    cSomeClass getA() {
        return S.getA();
    }
};
You can define your own copy constructor (and copy assignment operator). The copy constructor would probably look something like this:
cMyClass(const cMyClass& x) : A(x.getA()) { }
Note that getA() would need to be const-qualified for this to work, which means the mutex would need to be mutable; you could make the parameter a non-const reference, but then you can't copy temporary objects, which usually isn't desirable.
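Putting the answer's pieces together, a hedged sketch (the defaulted default constructor is an assumption) of the class with a const-qualified getA(), a mutable mutex, and the user-defined copy constructor:

class cMyClass {
    mutable boost::mutex myMutex;   // mutable so getA() can lock it when const
    cSomeClass A;
public:
    cMyClass() = default;
    cMyClass(const cMyClass& x) : A(x.getA()) { }   // reads x through its own lock

    cSomeClass getA() const {
        boost::mutex::scoped_lock lock( myMutex );
        return A;
    }
};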
Also, consider that it isn't always a good idea to perform locking at such a low level: if you lock the mutex in the accessor and the mutator functions, you lose a lot of functionality. For example, you can't perform a compare-and-swap because you can't get and set the member variable with a single lock of the mutex, and if you have multiple data members controlled by the mutex, you can't access more than one of them with the mutex locked.
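To make the compare-and-swap point concrete, a hedged sketch (setA() and operator== on cSomeClass are assumed to exist; they are not in the original code): each accessor locks and unlocks on its own, so the check-then-set sequence as a whole is not atomic.

void compareAndSwap(cMyClass& obj,
                    const cSomeClass& expected,
                    const cSomeClass& desired)
{
    if (obj.getA() == expected) {   // lock acquired and released inside getA()
        // another thread may change A right here
        obj.setA(desired);          // lock acquired and released again inside setA()
    }
}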
As simple as the question might be, getting it right is not so simple. For starters, we can work through the easy part, the copy constructor:
// almost pseudo code, mutex/lock/data types are synthetic
class test {
    mutable mutex m;
    data d;
public:
    test( test const & rhs ) {
        lock l( rhs.m ); // Lock the rhs to avoid race conditions,
                         // no need to lock this object.
        d = rhs.d;       // perform the copy, data might be many members
    }
};
Now creating an assignment operator is more complex. The first thing that comes to mind is just doing the same, but in this case locking both the lhs and rhs:
class test { // wrong
    mutable mutex m;
    data d;
public:
    test( test const & );
    test& operator=( test const & rhs ) {
        lock l1( m );
        lock l2( rhs.m );
        d = rhs.d;
        return *this;
    }
};
Simple enough, and wrong. While we are guaranteeing single-threaded access to both objects during the operation, and thus get no race conditions, we have a potential deadlock:
test a, b;

// thr1                 // thr2
void foo() {            void bar() {
    a = b;                  b = a;
}                       }
And that is not the only potential deadlock; the code is not safe for self-assignment (most mutexes are not recursive, and trying to lock the same mutex twice from the same thread will block it, or is outright undefined behavior). The simpler issue to solve is self-assignment:
test& test::operator=( test const & rhs ) {
    if ( this == &rhs ) return *this; // nothing to do
    // same (invalid) code here
}
For the other part of the problem you need to enforce an order in which the mutexes are acquired. That could be handled in different ways (storing a unique identifier per object and comparing, for example):
test & test::operator=( test const & rhs ) {
    mutex *first, *second;
    if ( unique_id(*this) < unique_id(rhs) ) {
        first = &m;
        second = &rhs.m;
    } else {
        first = &rhs.m;
        second = &m;
    }
    lock l1( *first );
    lock l2( *second );
    d = rhs.d;
    return *this;
}
The specific order is not as important as the fact that you need to ensure the same order in all uses, or else you will potentially deadlock the threads. As this is quite common, some libraries (including the C++11 standard library) have specific support for it:
class test {
    mutable std::mutex m;
    data d;
public:
    test( const test & );
    test& operator=( test const & rhs ) {
        if ( this == &rhs ) return *this; // avoid self deadlock
        std::lock( m, rhs.m );            // acquire both mutexes or wait
        std::lock_guard<std::mutex> l1( m, std::adopt_lock );     // use RAII to release locks
        std::lock_guard<std::mutex> l2( rhs.m, std::adopt_lock );
        d = rhs.d;
        return *this;
    }
};
The std::lock function will acquire all the locks passed as arguments using a deadlock-avoidance algorithm, ensuring that if all code that needs to acquire these two mutexes does so by means of std::lock, there will be no deadlock. (You can still deadlock by manually locking them somewhere else separately.) The next two lines store the locks in objects implementing RAII so that if the assignment operation fails (an exception is thrown) the locks are released.
That can be spelled differently by using std::unique_lock instead of std::lock_guard:
std::unique_lock<std::mutex> l1( m, std::defer_lock ); // store in RAII, but do not lock
std::unique_lock<std::mutex> l2( rhs.m, std::defer_lock );
std::lock( l1, l2 ); // acquire the locks
I just thought of a different, much simpler approach that I am sketching here. The semantics are slightly different, but may be enough for many applications:
test& test::operator=( test copy ) // pass by value!
{
    lock l(m);
    swap( d, copy.d ); // swap is not thread safe
    return *this;
}
There is a semantic difference between the two approaches, as the one using the copy-and-swap idiom has a potential race condition (which might or might not affect your application, but which you should be aware of). Since both locks are never held at once, the objects may change between the time the first lock is released (the copy of the argument completes) and the second lock is acquired inside operator=.
For an example of how this might fail, consider that data is an integer and that you have two objects initialized with the same integer value. One thread acquires both locks and increments the values, while another thread copies one of the objects into the other:
test a(0), b(0); // omitted: constructor that initializes the ints to the given value

// Thr1
void loop() { // [1]
    while (true) {
        std::unique_lock<std::mutex> la( a.m, std::defer_lock );
        std::unique_lock<std::mutex> lb( b.m, std::defer_lock );
        std::lock( la, lb );
        ++a.d;
        ++b.d;
    }
}

// Thr2
void loop2() {
    while (true) {
        a = b; // [2]
    }
}

// [1] for the sake of simplicity, assume that this is a friend
//     and has access to the members
With the implementations of operator= that perform simultaneous locks on both objects, you can assert at any one given time (doing it thread safely by acquiring both locks) that a and b are the same, which seems to be expected by a cursory read of the code. That does not hold if operator= is implemented in terms of the copy-and-swap idiom. The issue is that in the line marked as [2], b is locked and copied into a temporary, then the lock is released. The first thread can then acquire both locks at once, and increment both a and b before a is locked by the second thread in [2]. Then a is overwritten with the value that b had before the increment.
The simple fact is that you cannot make a class thread-safe by spewing mutexes at the problem. The reason that you can't make this work is because it doesn't work, not because you're applying this technique wrong. This is what everyone noticed when multithreading first arrived and started slaughtering copy-on-write (COW) string implementations.
Thread design occurs at the application level, not on a per-class basis. Only specific resource-management classes should have thread-safety at this level, and for them you need to write explicit copy constructors/assignment operators anyway.