Why I can't lock two mutex with lock_guard [duplicate] - c++

C++17 introduced a new lock class called std::scoped_lock.
Judging from the documentation it looks similar to the already existing std::lock_guard class.
What's the difference and when should I use it?

Late answer, and mostly in response to:
You can consider std::lock_guard deprecated.
For the common case that one needs to lock exactly one mutex, std::lock_guard has an API that is a little safer to use than scoped_lock.
For example:
{
    std::scoped_lock lock; // protect this block
    ...
}
The above snippet is likely an accidental run-time error because it compiles and then does absolutely nothing. The coder probably meant:
{
    std::scoped_lock lock{mut}; // protect this block
    ...
}
Now it locks/unlocks mut.
If lock_guard were used in the two examples above instead, the first example would be a compile-time error rather than a run-time error, and the second example would have functionality identical to the scoped_lock version.
So my advice is to use the simplest tool for the job:
lock_guard if you need to lock exactly 1 mutex for an entire scope.
scoped_lock if you need to lock a number of mutexes that is not exactly 1.
unique_lock if you need to unlock within the scope of the block (which includes use with a condition_variable).
This advice does not imply that scoped_lock should be redesigned to not accept 0 mutexes. There exist valid use cases where it is desirable for scoped_lock to accept variadic template parameter packs which may be empty. And the empty case should not lock anything.
And that's why lock_guard isn't deprecated. scoped_lock and unique_lock may be a superset of functionality of lock_guard, but that fact is a double-edged sword. Sometimes it is just as important what a type won't do (default construct in this case).

The scoped_lock is a strictly superior version of lock_guard that locks an arbitrary number of mutexes all at once (using the same deadlock-avoidance algorithm as std::lock). In new code, you should only ever use scoped_lock.
The only reason lock_guard still exists is for compatibility. It could not just be deleted, because it is used in current code. Moreover, it proved undesirable to change its definition (from unary to variadic), because that is also an observable, and hence breaking, change (but for somewhat technical reasons).

The single important difference is that std::scoped_lock has a variadic constructor that takes more than one mutex. This allows multiple mutexes to be locked in a deadlock-avoiding way, as if std::lock were used.
{
    // safely locked as if using std::lock
    std::scoped_lock<std::mutex, std::mutex> lock(mutex1, mutex2);
}
Previously you had to do a little dance to lock multiple mutexes in a safe way using std::lock, as explained in this answer.
The addition of scoped_lock makes this easier to use and avoids the related errors. You can consider std::lock_guard deprecated. The single-argument case of std::scoped_lock can be implemented as a specialization, so you don't have to worry about possible performance issues.
GCC 7 already supports std::scoped_lock, as can be seen here.
For more information you might want to read the standard paper.

Here is a sample and quote from C++ Concurrency in Action:
friend void swap(X& lhs, X& rhs)
{
    if (&lhs == &rhs)
        return;
    std::lock(lhs.m, rhs.m);
    std::lock_guard<std::mutex> lock_a(lhs.m, std::adopt_lock);
    std::lock_guard<std::mutex> lock_b(rhs.m, std::adopt_lock);
    swap(lhs.some_detail, rhs.some_detail);
}
vs.
friend void swap(X& lhs, X& rhs)
{
    if (&lhs == &rhs)
        return;
    std::scoped_lock guard(lhs.m, rhs.m);
    swap(lhs.some_detail, rhs.some_detail);
}
The existence of std::scoped_lock means that most of the cases where you would have used std::lock prior to C++17 can now be written using std::scoped_lock, with less potential for mistakes, which can only be a good thing!

Related

Is the purpose of std::scoped_lock only to handle multiple mutexes, compared to std::lock_guard?

When reading the documentation about std::scoped_lock and std::lock_guard, it seems that the only difference is that scoped_lock can handle multiple mutexes and can avoid deadlock when locking them.
Is this the only difference? If I have only one mutex, should I therefore keep using lock_guard?
As far as I know, the only important difference is that scoped_lock has a variadic constructor taking more than one mutex, as you mentioned. In addition, a single-argument version of scoped_lock can be implemented as a template specialization.
So lock_guard is informally "deprecated", in a way.
I think lock_guard still exists for compatibility reasons.

How do unique locks work compared to normal mutex locks?

I came across this code sample in a book. By the way, the book has bad reviews, and I regret buying it:
std::mutex m_mutex;
mutable std::unique_lock<std::mutex> m_finishedQueryLock{ m_mutex, std::defer_lock };
bool m_playerQuit{ false };

void SetPlayerQuit()
{
    m_finishedQueryLock.lock();
    m_playerQuit = true;
    m_finishedQueryLock.unlock();
}
I'm not satisfied with the book's explanation of how it works and why I should use it. I already have an idea of how mutex locks work and their implementations, but I have difficulty understanding the second line of the code above. And why does it have the mutable keyword in it?
I'm completely new on C++ Programming. So a basic level of explanation would help me a lot.
That example looks totally stupid.
The second line is declaring a non-static data member, and
the member is mutable (for reasons given below);
the member is an object of type std::unique_lock<std::mutex>, which is a helper type for locking/unlocking an associated mutex object;
the member is initialized when an instance of the class is created, by calling its constructor and passing m_mutex and the special tag std::defer_lock as arguments.
But doing that is stupid, and I'm not surprised the book has bad reviews if it has examples like that.
The point of unique_lock is to lock the associated mutex, and then to automatically unlock it when it goes out of scope. Creating a unique_lock member like this is stupid, because it doesn't go out of scope at the end of the function, so the code has absolutely no advantage over:
mutable std::mutex m_mutex;
bool m_playerQuit{ false };

void SetPlayerQuit()
{
    m_mutex.lock();
    m_playerQuit = true;
    m_mutex.unlock();
}
But this manual unlocking has all the problems that unique_lock was designed to solve, so it should be using a scoped lock (either unique_lock or lock_guard) but only in the function scope, not as a member:
mutable std::mutex m_mutex;
bool m_playerQuit{ false };

void SetPlayerQuit()
{
    std::lock_guard<std::mutex> lock(m_mutex);
    m_playerQuit = true;
} // m_mutex is automatically unlocked by the ~lock_guard destructor
The mutable keyword is necessary so that you can lock the mutex in const member functions. Locking and unlocking a mutex is a non-const operation that modifies the mutex, which would not be allowed in a const member if it wasn't mutable.
The unique_lock is an RAII type. When constructed with the mutex as a parameter, it locks the mutex; when it leaves the scope it is destroyed, thus unlocking the mutex. When you need to unlock earlier, you can call the unlock() function, as in your example.
Using the unique_lock as in your example doesn't add value over using the mutex directly.
All answers explain very well why the example is not good. I will answer when you would want to use a std::unique_lock instead of a std::lock_guard.
When a thread is waiting on something, one may use a std::condition_variable. A condition variable needs to release and re-acquire the mutex while waiting, and therefore uses a unique_lock, which supports that. You don't need to request deferred locking explicitly, as the example does.
To grasp these concepts, you first need to learn what RAII is. Then have a look at this reference. An example of the condition variables can be found here.

Rationale to keep std::mutex lock/unlock public

My question is simple. In C++11 we have std::mutex and std::lock_guard and std::unique_lock.
The usual way to use these classes is to lock the std::mutex through any of the locks. This prevents leaking mutexes due to exception throwing:
{
    std::lock_guard<std::mutex> l(some_mutex);
    // Cannot leak mutex.
}
Why are std::mutex::lock and std::mutex::unlock public? That invites incorrect usage:
{
    some_mutex.lock();
    // Mutex leaked due to exception.
    some_mutex.unlock();
}
Wouldn't it be safer to make std::lock_guard and std::unique_lock friends of std::mutex and make the std::mutex lock operations private? This would prevent unsafe usage.
The only reason I can guess for this design is that if I have my own lock, it couldn't be used with std::mutex because I wouldn't be able to make my own lock friend of std::mutex. Is this the main reason behind making these member functions public?
The rationale for this design decision is documented in N2406:
Unlike boost, the mutexes have public member functions for lock(),
unlock(), etc. This is necessary to support one of the primary goals:
User defined mutexes can be used with standard defined locks. If there
were no interface for the user defined mutex to implement, there would
be no way for a standard defined lock to communicate with the user
defined mutex.
At the time this was written, boost::mutex could only be locked and unlocked with boost::scoped_lock.
So you can now write my::mutex, and as long as you supply members lock() and unlock(), your my::mutex is just as much a first class citizen as std::mutex. Your clients can use std::unique_lock<my::mutex> just as easily as they can use std::unique_lock<std::mutex>, even though std::unique_lock has no clue what my::mutex is.
A motivating real-life example of my::mutex is the proposed std::shared_mutex now in the C++1y (we hope y == 4) draft standard. std::shared_mutex has members lock() and unlock() for the exclusive-mode locking and unlocking. And it interacts with std::unique_lock when the client uses std::unique_lock<std::shared_mutex> just as expected.
And just in case you might believe shared_mutex might be the only other motivating mutex that could make use of this generic interface, here's another real-world example: :-)
template <class L0, class L1>
void
lock(L0& l0, L1& l1)
{
    while (true)
    {
        {
            unique_lock<L0> u0(l0);
            if (l1.try_lock())
            {
                u0.release();
                break;
            }
        }
        this_thread::yield();
        {
            unique_lock<L1> u1(l1);
            if (l0.try_lock())
            {
                u1.release();
                break;
            }
        }
        this_thread::yield();
    }
}
This is the basic code for how to lock (in an exception safe manner) two BasicLockables at once, without danger of deadlock. And despite many critiques to the contrary, this code is extraordinarily efficient.
Note the line in the code above:
unique_lock<L0> u0(l0);
This algorithm has no clue what type L0 is. But as long as it supports lock() and unlock() (ok and try_lock() too), everything is cool. L0 might even be another instantiation of unique_lock, perhaps unique_lock<my::mutex> or even unique_lock<std::shared_mutex>, or perhaps even std::shared_lock<std::shared_mutex>.
It all just works. And all because there is no intimate relationship between std::unique_lock and std::mutex aside from the agreed upon public interface of lock() and unlock().
As Jerry Coffin pointed out, we can only speculate without an actual committee member chiming in, but my guess would be that they designed the interfaces with flexibility and the ability to extend them in your own way in mind. Instead of making every potential class a friend of std::mutex, exposing the interface to do the needed operations allows for someone (e.g. boost contributors) to come along and find a new and better way to do something with it. If the interface was hidden, that would not be possible.

C++11: why does std::condition_variable use std::unique_lock?

I am a bit confused about the role of std::unique_lock when working with std::condition_variable. As far as I understood the documentation, std::unique_lock is basically a bloated lock guard, with the possibility to swap the state between two locks.
I've so far used pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) for this purpose (I guess that's what the STL uses on posix). It takes a mutex, not a lock.
What's the difference here? Is the fact that std::condition_variable deals with std::unique_lock an optimization? If so, how exactly is it faster?
so there is no technical reason?
I upvoted cmeerw's answer because I believe he gave a technical reason. Let's walk through it. Let's pretend that the committee had decided to have condition_variable wait on a mutex. Here is code using that design:
void foo()
{
    mut.lock();
    // mut locked by this thread here
    while (not_ready)
        cv.wait(mut);
    // mut locked by this thread here
    mut.unlock();
}
This is exactly how one shouldn't use a condition_variable. In the regions marked with:
// mut locked by this thread here
there is an exception safety problem, and it is a serious one. If an exception is thrown in these areas (or by cv.wait itself), the locked state of the mutex is leaked unless a try/catch is also put in somewhere to catch the exception and unlock it. But that's just more code you're asking the programmer to write.
Let's say that the programmer knows how to write exception safe code, and knows to use unique_lock to achieve it. Now the code looks like this:
void foo()
{
    unique_lock<mutex> lk(mut);
    // mut locked by this thread here
    while (not_ready)
        cv.wait(*lk.mutex());
    // mut locked by this thread here
}
This is much better, but it is still not a great situation. The condition_variable interface is making the programmer go out of his way to get things to work. There is a possible null pointer dereference if lk accidentally does not reference a mutex. And there is no way for condition_variable::wait to check that this thread does own the lock on mut.
Oh, just remembered, there is also the danger that the programmer may choose the wrong unique_lock member function to expose the mutex. *lk.release() would be disastrous here.
Now let's look at how the code is written with the actual condition_variable API that takes a unique_lock<mutex>:
void foo()
{
    unique_lock<mutex> lk(mut);
    // mut locked by this thread here
    while (not_ready)
        cv.wait(lk);
    // mut locked by this thread here
}
This code is as simple as it can get.
It is exception safe.
The wait function can check lk.owns_lock() and throw an exception if it is false.
These are technical reasons that drove the API design of condition_variable.
Additionally, condition_variable::wait doesn't take a lock_guard<mutex> because lock_guard<mutex> is how you say: I own the lock on this mutex until lock_guard<mutex> destructs. But when you call condition_variable::wait, you implicitly release the lock on the mutex. So that action is inconsistent with the lock_guard use case / statement.
We needed unique_lock anyway so that one could return locks from functions, put them into containers, and lock/unlock mutexes in non-scoped patterns in an exception safe way, so unique_lock was the natural choice for condition_variable::wait.
Update
bamboon suggested in the comments below that I contrast condition_variable_any, so here goes:
Question: Why isn't condition_variable::wait templated so that I can pass any Lockable type to it?
Answer:
That is really cool functionality to have. For example this paper demonstrates code that waits on a shared_lock (rwlock) in shared mode on a condition variable (something unheard of in the posix world, but very useful nonetheless). However the functionality is more expensive.
So the committee introduced a new type with this functionality:
`condition_variable_any`
With this condition_variable adaptor one can wait on any lockable type. If it has members lock() and unlock(), you are good to go. A proper implementation of condition_variable_any requires a condition_variable data member and a shared_ptr<mutex> data member.
Because this new functionality is more expensive than your basic condition_variable::wait, and because condition_variable is such a low level tool, this very useful but more expensive functionality was put into a separate class so that you only pay for it if you use it.
It's essentially an API design decision to make the API as safe as possible by default (with the additional overhead being seen as negligible). By requiring a unique_lock to be passed instead of a raw mutex, users of the API are directed towards writing correct code (in the presence of exceptions).
In recent years the focus of the C++ language has shifted towards making it safe by default (while still allowing users to shoot themselves in the foot if they want to and try hard enough).

boost::unique_lock vs boost::lock_guard

I don't quite understand the difference between these two lock classes.
The Boost documentation says that boost::unique_lock doesn't lock automatically.
Does that mean the main difference between unique_lock and lock_guard is that with unique_lock we must call the lock() function explicitly?
First, to answer your question: no, you don't need to call lock on a unique_lock. See below.
The unique_lock is only a lock class with more features. In most cases the lock_guard will do what you want and will be sufficient.
The unique_lock has more features to offer, e.g. a timed wait if you need a timeout, or deferring the lock to a point later than the construction of the object. So it highly depends on what you want to do.
BTW: The following code snippets do the same thing.
boost::mutex mutex;
boost::lock_guard<boost::mutex> lock(mutex);

boost::mutex mutex;
boost::unique_lock<boost::mutex> lock(mutex);
The first one can be used to synchronize access to data, but if you want to use condition variables you need to go for the second one.
The currently best-voted answer is good, but it did not clarify my doubt until I dug a bit deeper, so I decided to share with people who might be in the same boat.
Firstly, both lock_guard and unique_lock follow the RAII pattern: in the simplest use case, the lock is acquired during construction and released automatically during destruction. If that is your use case then you don't need the extra flexibility of unique_lock, and lock_guard will be more efficient.
The key difference is that a unique_lock instance doesn't always need to own the mutex it is associated with, whereas a lock_guard always owns its mutex. This means unique_lock needs an extra flag indicating whether it owns the lock, and an extra method owns_lock() to check it. Knowing this, we can explain all the extra benefits this flag brings, along with the overhead of that extra data being set and checked:
The lock doesn't have to be taken right at construction; you can pass the flag std::defer_lock during construction to keep the mutex unlocked.
You can unlock it before the function ends, without having to wait for the destructor to release it, which can be handy.
You can pass ownership of the lock out of a function; it is movable but not copyable.
It can be used with condition variables, since they require the mutex to be locked, the condition checked, and the mutex unlocked while waiting.
Their implementation can be found under the path .../boost/thread/locks.hpp - and they sit right next to each other :) To sum up:
lock_guard is a short simple utility class that locks mutex in constructor and unlocks in destructor, not caring about details.
unique_lock is a bit more complex one, adding pretty lot of features - but it still locks automatically in constructor. It is called unique_lock because it introduces "lock ownership" concept ( see owns_lock() method ).
If you're used to pthreads(3):
boost::mutex = pthread_mutex_*
boost::unique_lock = pthread_rwlock_* used to obtain write/exclusive locks (i.e. pthread_rwlock_wrlock)
boost::shared_lock = pthread_rwlock_* used to obtain read/shared locks (i.e. pthread_rwlock_rdlock)
Yes a boost::unique_lock and a boost::mutex function in similar ways, but a boost::mutex is generally a lighter weight mutex to acquire and release. That said, a shared_lock with the lock already acquired is faster (and allows for concurrency), but it's comparatively expensive to obtain a unique_lock.
You have to look under the covers to see the implementation details, but that's the gist of the intended differences.
Speaking of performance: here's a moderately useful comparison of latencies:
http://www.eecs.berkeley.edu/%7Ercs/research/interactive_latency.html
It would be nice if I/someone could benchmark the relative cost of the different pthread_* primitives, but last I looked, pthread_mutex_* was ~25us, whereas pthread_rwlock_* was ~20-100us depending on whether or not the read lock had been already acquired (~10us) or not (~20us) or writer (~100us). You'll have to benchmark to confirm current numbers and I'm sure it's very OS specific.
I think unique_lock may also be used when you need to emphasize the difference between unique and shared locks.