According to the Boost documentation, boost::mutex and boost::timed_mutex are supposed to be different: the first implements the Lockable concept, and the second the TimedLockable concept.
But if you take a look at the source, you can see they're basically the same thing; the only difference is the lock typedefs. You can call timed_lock on a boost::mutex, or use a boost::unique_lock with a timeout, just fine.
typedef ::boost::detail::basic_timed_mutex underlying_mutex;
class mutex:
public ::boost::detail::underlying_mutex
class timed_mutex:
public ::boost::detail::basic_timed_mutex
What's the rationale behind that? Is it some remnant of the past? Is it wrong to use boost::mutex as a TimedLockable? It is undocumented, after all.
I have not looked at the source, but I used these a few days ago, and the timed mutexes function differently: they block until the time is up, then return, whereas a unique lock blocks until it can acquire the lock.
A try lock will not block, and you can then test whether it has ownership of the lock. A timed lock will block for the specified amount of time, then behave like a try lock - that is, it stops blocking, and you can test for ownership of the lock.
I believe that internally some of the different Boost locks are typedefs for unique lock, since they all use unique locking. The distinct typedef names are there so that you can keep track of what you are using each one for, even though mixing their functionality would confuse your client code.
Edit: here is an example of a timed lock:
boost::timed_mutex timedMutexObj;
boost::unique_lock<boost::timed_mutex> scopedLockObj(timedMutexObj, boost::get_system_time() + boost::posix_time::seconds(60));
if (scopedLockObj.owns_lock()) {
    // proceed
}
For reference: http://www.boost.org/doc/libs/1_49_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_concepts.timed_lockable.timed_lock
Edit again: to provide a specific answer to your question, yes, it would be wrong to use boost::mutex as a TimedLockable, because boost::timed_mutex is provided for this purpose. If they are the same thing in the source and this is undocumented, that is unreliable behavior and you should follow the documentation. (My code example did not use timed_mutex at first, but I updated it.)
Hi Fellow Boost Enthusiasts
We have run into a problem with shared_mutex and have been digging into the Boost source. We can't tell if this is a deadlock case, or if we are just not understanding the shared mutex implementation for reader/writer locks.
Application:
We have a map<Ptr*, data> that needs to be created and queried by multiple threads. However, most of the Ptr* values are common, so there is a fast warmup followed by what we believe is a pattern of almost no insertions into the map. So we thought to use a reader/writer pattern to control access to the map, like this:
boost::shared_mutex lock_;
bool found = false;
{
    boost::shared_lock<boost::shared_mutex> slock(lock_);
    // search the map to see if you have the key
    if (keyFound) {
        found = true;
    }
}
if (!found) {
    boost::upgrade_lock<boost::shared_mutex> ulock(lock_);
    // search the map again to see if the key has been inserted meanwhile
    if (keyStillNotFound) {
        boost::upgrade_to_unique_lock<boost::shared_mutex> wlock(ulock);
        // insert into the map & do whatever
    }
}
The original shared_lock is destroyed when the block goes out of scope, making the upgrade_lock the only lock held if the initial search does not find the key.
Observations:
All our threads are stuck for days in
__lll_lock_wait or pthread_mutex_lock
Upon digging into the boost::shared_mutex implementation, we find that there is a single common "state_changed" lock inside the mutex, and in order for either the shared_lock or the unique_lock to succeed, it needs to acquire that common state_changed lock for both lock construction and destruction. It seems that the unique_lock may go into a state where it releases the scoped_lock on state_changed, but we cannot tell. Either way, we cannot tell why the threads basically lock up for long periods of time with sporadic progress - it's not quite a deadlock, but something close.
Any help appreciated.
Sam Appleton
Take a look at the Boost.Thread change log, in particular at issue #7755, "Thread: deadlock with shared_mutex on Windows", which was fixed in 1.54. It might be the issue you are encountering.
By the way, a lot of Boost.Thread bugs have been fixed since 1.50, so it's worth upgrading to the latest version.
The new std::shared_timed_mutex allows for two types of locks: shared and exclusive.
If one holds a shared lock, is there any way to atomically exchange it ("upgrade it") to an exclusive lock? In other words, given the following code, how can I avoid the non-atomic drop and re-lock?
std::shared_timed_mutex m; //Guards a std::vector.
m.lock_shared();
//Read from vector. (Shared lock is sufficient.)
// ...
//Now we want to write to the vector. We need an exclusive lock.
m.unlock_shared();
// <---- Problem here: non-atomic!
m.lock();
//Write to vector.
// ...
m.unlock();
Ideally, m.unlock_shared(); m.lock(); could be replaced with something like m.upgrade_to_exclusive() (or something like boost's upgrade_to_unique_lock).
In a similar question but for Boost's shared_mutex Dave S mentions
It is impossible to convert from a shared lock to a unique lock, or a shared lock to an upgradeable lock without releasing the shared lock first.
I'm not certain whether this applies to std::shared_mutex, though I suspect it does.
I would be happy with a reasonable work-around based on std::atomic/condition_variable or GCC's transactional memory.
Edit: Howard's answer addresses my question. His proposal N3427 contains nice descriptions of a mechanism to achieve mutex upgrading. I still welcome work-arounds based on std::atomic/condition_variable or GCC's transactional memory.
No, it cannot. That functionality was proposed to the committee under the names upgrade_mutex and upgrade_lock, but the committee chose to reject that portion of the proposal. There is currently no work under way to re-propose that functionality.
Edit
In response to the "where to go from here" edit in user3761401's question, I've created a partially crippled implementation of upgrade_mutex/upgrade_lock here:
https://github.com/HowardHinnant/upgrade_mutex
Feel free to use this. It is in the public domain. It is only lightly tested, and it does not have the full functionality described in N3427. Specifically the following functionality is missing:
One can not convert a unique_lock to a shared_timed_lock.
One can not try- or timed-convert a shared_timed_lock to a unique_lock.
One can not try- or timed-convert an upgrade_lock to a unique_lock.
That being said, I've included this functionality in upgrade_mutex and it can be accessed at this low level in a very ugly manner (such examples are in main.cpp).
The other lock conversions mentioned in N3427 are available.
try- and timed-conversions from shared_timed_lock to upgrade_lock.
conversion from upgrade_lock to shared_timed_lock.
blocking conversion from upgrade_lock to unique_lock.
conversion from unique_lock to upgrade_lock.
It has all been put in namespace acme. Put it in whatever namespace you want.
Requirements
The compiler needs to support "rvalue-this" qualifiers, and explicit conversion operators.
Disclaimers
The code has been only lightly tested. If you find bugs I would appreciate a pull request.
It is possible to optimize the upgrade_mutex through the use of std::atomic. No effort has been made on that front (it is a difficult and error-prone task, taking more time than I have at the moment).
C++11 has std::condition_variable; its wait function is
template< class Predicate >
void wait( std::unique_lock<std::mutex>& lock, Predicate pred );
It requires a mutex.
As far as I understand, its notify_one can be called without synchronization (I know the idiomatic way is to use it with a mutex).
I have an object which is already internally synchronized, so I don't need a mutex to protect it. One thread should wait for some event associated with that object, and others would notify.
How can I do such a notification without a mutex in C++11? It is easy with a condition_variable, but that needs a mutex. I thought about using a fake mutex type, but std::mutex is nailed into the wait interface.
An option is to poll a std::atomic_flag + sleep, but I don't like sleeping.
Use std::condition_variable_any; you can use it with any class that implements the BasicLockable concept.
Given a bad feeling about this, I checked the implementation of std::condition_variable_any in libc++. It turns out that it uses a plain std::condition_variable together with a std::shared_ptr to a std::mutex, so there is definitely some overhead involved, even without digging any deeper. (There is another post here on SO which covers this, though I first have to find it.)
Given that, I would probably recommend redesigning your case so that synchronization really is done only by a mutex protecting a plain condition variable.
In some threading models (although I doubt in modern ones) the mutex is needed to protect the condition variable itself (not the object you're synchronizing) from concurrent access. If the condition variable wasn't protected by a mutex you could encounter problems on the condition itself.
See Why do pthreads’ condition variable functions require a mutex?
I have some object which is already internally synchronized - I don't need a mutex to protect it. One thread should wait for some event associated with that object, and others would notify.
If you don't hold the mutex, the waiting thread is going to miss notifications, regardless of whether you use condition_variable or condition_variable_any with its internal mutex.
You need to associate at least one bit of extra information with the condition variable, and this bit should be protected by a mutex.
I am a bit confused about the role of std::unique_lock when working with std::condition_variable. As far as I understood the documentation, std::unique_lock is basically a bloated lock guard, with the possibility to swap the state between two locks.
I've so far used pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) for this purpose (I guess that's what the STL uses on posix). It takes a mutex, not a lock.
What's the difference here? Is the fact that std::condition_variable deals with std::unique_lock an optimization? If so, how exactly is it faster?
so there is no technical reason?
I upvoted cmeerw's answer because I believe he gave a technical reason. Let's walk through it. Let's pretend that the committee had decided to have condition_variable wait on a mutex. Here is code using that design:
void foo()
{
mut.lock();
// mut locked by this thread here
while (not_ready)
cv.wait(mut);
// mut locked by this thread here
mut.unlock();
}
This is exactly how one shouldn't use a condition_variable. In the regions marked with:
// mut locked by this thread here
there is an exception safety problem, and it is a serious one. If an exception is thrown in these areas (or by cv.wait itself), the locked state of the mutex is leaked unless a try/catch is also put in somewhere to catch the exception and unlock it. But that's just more code you're asking the programmer to write.
Let's say that the programmer knows how to write exception safe code, and knows to use unique_lock to achieve it. Now the code looks like this:
void foo()
{
unique_lock<mutex> lk(mut);
// mut locked by this thread here
while (not_ready)
cv.wait(*lk.mutex());
// mut locked by this thread here
}
This is much better, but it is still not a great situation. The condition_variable interface is making the programmer go out of his way to get things to work. There is a possible null pointer dereference if lk accidentally does not reference a mutex. And there is no way for condition_variable::wait to check that this thread does own the lock on mut.
Oh, just remembered, there is also the danger that the programmer may choose the wrong unique_lock member function to expose the mutex. *lk.release() would be disastrous here.
Now let's look at how the code is written with the actual condition_variable API that takes a unique_lock<mutex>:
void foo()
{
unique_lock<mutex> lk(mut);
// mut locked by this thread here
while (not_ready)
cv.wait(lk);
// mut locked by this thread here
}
This code is as simple as it can get.
It is exception safe.
The wait function can check lk.owns_lock() and throw an exception if it is false.
These are technical reasons that drove the API design of condition_variable.
Additionally, condition_variable::wait doesn't take a lock_guard<mutex> because lock_guard<mutex> is how you say: I own the lock on this mutex until lock_guard<mutex> destructs. But when you call condition_variable::wait, you implicitly release the lock on the mutex. So that action is inconsistent with the lock_guard use case / statement.
We needed unique_lock anyway so that one could return locks from functions, put them into containers, and lock/unlock mutexes in non-scoped patterns in an exception safe way, so unique_lock was the natural choice for condition_variable::wait.
Update
bamboon suggested in the comments below that I contrast condition_variable_any, so here goes:
Question: Why isn't condition_variable::wait templated so that I can pass any Lockable type to it?
Answer:
That is really cool functionality to have. For example this paper demonstrates code that waits on a shared_lock (rwlock) in shared mode on a condition variable (something unheard of in the posix world, but very useful nonetheless). However the functionality is more expensive.
So the committee introduced a new type with this functionality:
`condition_variable_any`
With this condition_variable adaptor one can wait on any lockable type. If it has members lock() and unlock(), you are good to go. A proper implementation of condition_variable_any requires a condition_variable data member and a shared_ptr<mutex> data member.
Because this new functionality is more expensive than your basic condition_variable::wait, and because condition_variable is such a low level tool, this very useful but more expensive functionality was put into a separate class so that you only pay for it if you use it.
It's essentially an API design decision to make the API as safe as possible by default (with the additional overhead being seen as negligible). By requiring to pass a unique_lock instead of a raw mutex users of the API are directed towards writing correct code (in the presence of exceptions).
In recent years the focus of the C++ language has shifted towards making it safe by default (while still allowing users to shoot themselves in the foot if they want to and try hard enough).
I don't fully understand the difference between these two lock classes.
The Boost documentation says that boost::unique_lock doesn't perform the locking automatically.
Does that mean the main difference between unique_lock and lock_guard is that with unique_lock we must call the lock() function explicitly?
First, to answer your question: no, you don't need to call lock on a unique_lock. See below.
unique_lock is just a lock class with more features. In most cases lock_guard will do what you want and will be sufficient.
unique_lock has more features to offer: e.g. a timed wait if you need a timeout, or deferring the lock to a point later than the construction of the object. So it highly depends on what you want to do.
BTW: The following code snippets do the same thing.
boost::mutex mutex;
boost::lock_guard<boost::mutex> lock(mutex);
boost::mutex mutex;
boost::unique_lock<boost::mutex> lock(mutex);
The first one can be used to synchronize access to data, but if you want to use condition variables you need to go for the second one.
The currently best-voted answer is good, but it did not clarify my doubt until I dug a bit deeper, so I decided to share with people who might be in the same boat.
Firstly, both lock_guard and unique_lock follow the RAII pattern: in the simplest use case, the lock is acquired during construction and released automatically during destruction. If that is your use case, you don't need the extra flexibility of unique_lock, and lock_guard will be more efficient.
The key difference is that a unique_lock instance doesn't always have to own the mutex it is associated with, while a lock_guard always owns it. This means a unique_lock needs an extra flag indicating whether it owns the lock, and an extra method, owns_lock(), to check it. Knowing this, we can explain all the extra benefits this flag brings, at the cost of that extra data having to be set and checked:
The lock doesn't have to be taken right at construction; you can pass the flag std::defer_lock during construction to keep the mutex unlocked.
We can unlock it before the function ends, rather than necessarily waiting for the destructor to release it, which can be handy.
You can pass ownership of the lock out of a function; it is movable, but not copyable.
It can be used with condition variables, since waiting on a condition requires the mutex to be locked, the condition checked, and the mutex unlocked while waiting.
Their implementation can be found under .../boost/thread/locks.hpp - and they sit right next to each other :) To sum things up:
lock_guard is a short, simple utility class that locks the mutex in its constructor and unlocks it in its destructor, not caring about details.
unique_lock is a bit more complex, adding quite a lot of features - but it still locks automatically in the constructor. It is called unique_lock because it introduces the "lock ownership" concept (see the owns_lock() method).
If you're used to pthreads(3):
boost::mutex = pthread_mutex_*
boost::unique_lock = pthread_rwlock_* used to obtain write/exclusive locks (i.e. pthread_rwlock_wrlock)
boost::shared_lock = pthread_rwlock_* used to obtain read/shared locks (i.e. pthread_rwlock_rdlock)
Yes a boost::unique_lock and a boost::mutex function in similar ways, but a boost::mutex is generally a lighter weight mutex to acquire and release. That said, a shared_lock with the lock already acquired is faster (and allows for concurrency), but it's comparatively expensive to obtain a unique_lock.
You have to look under the covers to see the implementation details, but that's the gist of the intended differences.
Speaking of performance: here's a moderately useful comparison of latencies:
http://www.eecs.berkeley.edu/%7Ercs/research/interactive_latency.html
It would be nice if I/someone could benchmark the relative cost of the different pthread_* primitives, but last I looked, pthread_mutex_* was ~25us, whereas pthread_rwlock_* was ~20-100us depending on whether the read lock had already been acquired (~10us) or not (~20us), or a writer was involved (~100us). You'll have to benchmark to confirm current numbers, and I'm sure it's very OS-specific.
I think unique_lock may also be used when you need to emphasize the difference between unique and shared locks.