I have the following example:
#include <chrono>
#include <mutex>
#include <set>
#include <thread>
template <typename T>
class container
{
public:
std::mutex _lock;
std::set<T> _elements;
void add(T element)
{
_elements.insert(element);
}
void remove(T element)
{
_elements.erase(element);
}
};
void exchange(container<int>& cont1, container<int>& cont2, int value)
{
cont1._lock.lock(); // each thread locks its first mutex
std::this_thread::sleep_for(std::chrono::seconds(1)); // give the other thread time to do the same
cont2._lock.lock(); // then blocks on the mutex the other thread already holds
cont1.remove(value);
cont2.add(value);
cont1._lock.unlock();
cont2._lock.unlock();
}
int main()
{
container<int> cont1, cont2;
cont1.add(1);
cont2.add(2);
std::thread t1(exchange, std::ref(cont1), std::ref(cont2), 1);
std::thread t2(exchange, std::ref(cont2), std::ref(cont1), 2);
t1.join();
t2.join();
return 0;
}
In this case I'm experiencing a deadlock. But when I use std::lock_guard instead of manually locking and unlocking the mutexes, there is no deadlock. Why?
void exchange(container<int>& cont1, container<int>& cont2, int value)
{
std::lock_guard<std::mutex>(cont1._lock);
std::this_thread::sleep_for(std::chrono::seconds(1));
std::lock_guard<std::mutex>(cont2._lock);
cont1.remove(value);
cont2.add(value);
}
Your two code snippets are not comparable. The second snippet locks and immediately unlocks each mutex as the temporary lock_guard object is destroyed at the semicolon:
std::lock_guard<std::mutex>(cont1._lock); // temporary object
The correct way to use lock guards is to make named, scoped variables of them:
{
std::lock_guard<std::mutex> lock(my_mutex);
// critical section here
} // end of critical section, "lock" is destroyed, calling mutex.unlock()
(Note that there is another common error that's similar but different:
std::mutex mu;
// ...
std::lock_guard(mu);
This declares a variable named mu (just as int(n); declares a variable named n). However, this code is ill-formed, because std::lock_guard does not have a default constructor. It would compile with, say, std::unique_lock, but then it would not end up locking anything.)
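A minimal sketch of that trap (pitfall is a hypothetical function name):
#include <mutex>
std::mutex mu;
void pitfall()
{
    // std::lock_guard<std::mutex>(mu); // ill-formed: declares a default-
    //                                  // constructed lock_guard named mu
    std::unique_lock<std::mutex>(mu);   // compiles: declares an unlocked
                                        // unique_lock named mu that shadows
                                        // the mutex and locks nothing
}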
Now to address the real problem: how do you lock multiple mutexes at once, in a consistent order? It may not be feasible to agree on a single lock order across an entire codebase, or even across a future user's codebase, or even in local cases, as your example shows. In such cases, use the std::lock algorithm:
std::mutex mu1;
std::mutex mu2;
void f()
{
std::lock(mu1, mu2);
// order below does not matter
std::lock_guard<std::mutex> lock1(mu1, std::adopt_lock);
std::lock_guard<std::mutex> lock2(mu2, std::adopt_lock);
}
In C++17 there is a new variadic lock guard template called scoped_lock:
void f_17()
{
std::scoped_lock lock(mu1, mu2);
// ...
}
The constructor of scoped_lock uses the same algorithm as std::lock, so the two can be used compatibly.
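Applied to the question's exchange(), a deadlock-free C++17 version might look like this (assuming the two containers are distinct objects):
void exchange(container<int>& cont1, container<int>& cont2, int value)
{
    // locks both mutexes deadlock-free, regardless of the argument
    // order used by the two threads (requires &cont1 != &cont2)
    std::scoped_lock lock(cont1._lock, cont2._lock);
    cont1.remove(value);
    cont2.add(value);
}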
While Kerrek SB's answer is entirely valid, I thought I'd throw an alternative hat into the ring. std::lock, or any try-and-retreat deadlock-avoidance strategy, should be seen as a last resort from a performance perspective.
How about:
#include <functional> //includes std::less<T> template.
static const std::less<void*> l;//comparison object. See note.
void exchange(container<int>& cont1, container<int>& cont2, int value)
{
if(&cont1==&cont2) {
return; //aliasing protection.
}
std::unique_lock<std::mutex> lock1(cont1._lock, std::defer_lock);
std::unique_lock<std::mutex> lock2(cont2._lock, std::defer_lock);
if(l(&cont1,&cont2)){ // in effect, a portable &cont1 < &cont2
lock1.lock();
std::this_thread::sleep_for(std::chrono::seconds(1));
lock2.lock();
}else{
lock2.lock();
std::this_thread::sleep_for(std::chrono::seconds(1));
lock1.lock();
}
cont1.remove(value);
cont2.add(value);
}
This code uses the memory addresses of the objects to determine an arbitrary but consistent lock order. The approach can (of course) be generalized, as sketched below.
Note also that in reusable code the aliasing protection is necessary, because a call in which cont1 is cont2 would be invalid: it would try to lock the same mutex twice. std::mutex cannot be assumed to be a recursive lock, and normally isn't.
NB: The use of std::less<void*> ensures standard compliance, as it guarantees a consistent total ordering of addresses. Technically, (&cont1 < &cont2) is unspecified behavior. Thanks Kerrek SB!
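A sketch of that generalization (lock_ordered is a hypothetical helper name; the caller must guarantee the two mutexes are distinct):
#include <functional>
#include <mutex>

// Lock two distinct mutexes in a consistent, address-based order.
void lock_ordered(std::mutex& a, std::mutex& b)
{
    if (std::less<void*>()(&a, &b)) {
        a.lock();
        b.lock();
    } else {
        b.lock();
        a.lock();
    }
}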
I have seen a lot of code examples where the developer uses std::unique_lock in a new scope for automatic unlocking of the mutex:
...
// do some stuff
{
std::unique_lock<std::mutex> lock(shared_resource_mutex);
// do some actions with shared resource
}
// do some stuff
...
In my opinion it would be better to implement this behaviour using the unlock method from the std::unique_lock API, in this way:
...
// do some actions
std::unique_lock<std::mutex> lock(shared_resource_mutex);
// do some actions with shared resource
lock.unlock();
// do some actions
...
Are these two fragments of code equivalent? For what purpose do developers use the first variant? Maybe to emphasize (using braces) the code that cannot be executed in parallel?
When the object is destroyed at the end of the scope, the lock is released. That's the whole point of RAII.
The good thing about using RAII is that you cannot forget to unlock and it doesn't matter how you leave the scope. If an exception is thrown for example, the lock will still be released.
If all you need is lock at construction and unlock at destruction, then std::scoped_lock is an even simpler/more appropriate class to use though.
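For example, a minimal sketch of the exception-safety point (m and may_throw are placeholder names):
#include <mutex>
std::mutex m;
void may_throw(); // hypothetical function that might throw

void f()
{
    std::scoped_lock lock(m);
    may_throw(); // even if this throws, ~scoped_lock releases m
}                // released here on the normal path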
I would say the former method is safer, more consistent and easier to read.
First consider safety:
void function()
{
std::unique_lock<std::shared_mutex> lock(mtx);
// exclusive lock stuff
lock.unlock();
// std::shared_lock<std::shared_mutex> lock(mtx); // whoops name in use
std::shared_lock<std::shared_mutex> lock2(mtx);
// read only shared lock stuff here
lock2.unlock(); // what if I forget to do this?
lock.lock(); // if I forgot to call lock2.unlock() undefined behavior
// back to the exclusive stuff
lock.unlock();
}
If you have different locks to acquire/release and you forget to call unlock() then you may end up trying to lock the same mutex twice from the same thread.
That is undefined behavior so it may go unnoticed but cause trouble.
And what if you call lock() or unlock() on the wrong lock variable (say, on lock2 rather than lock)? The possibilities are frightening.
Consistency:
Also, not all lock types have an unlock() function (std::scoped_lock, std::lock_guard), so it is good to be consistent in your coding style.
Easier to read:
It is also easier to see what code sections use locks which makes reasoning on the code simpler:
void function()
{
{
std::unique_lock<std::shared_mutex> lock(mtx);
// exclusive lock stuff
}
{
std::shared_lock<std::shared_mutex> lock(mtx);
// read only shared lock stuff here
}
{
std::unique_lock<std::shared_mutex> lock(mtx);
// back to the exclusive stuff
}
}
Both of your approaches are correct, and you might choose either of them depending on circumstance. For example, when using a condition_variable/lock combination it's often useful to be able to explicitly lock and unlock the lock.
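As a sketch of that condition_variable case (the queue, process and consume names are illustrative, not from the question):
#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;

void process(int); // hypothetical slow work that needs no lock

int consume()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !q.empty(); }); // wait() unlocks/relocks m
    int item = q.front();
    q.pop();
    lock.unlock(); // release early so process() runs without the lock
    process(item);
    return item;
}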
Here's another approach that I find to be both expressive and safe:
#include <mutex>
template<class Mutex, class Function>
decltype(auto) with_lock(Mutex& m, Function&& f)
{
std::lock_guard<Mutex> lock(m);
return f();
}
std::mutex shared_resource_mutex;
void something()
{
with_lock(shared_resource_mutex, [&]
{
// some actions
});
// some other actions
}
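One note on the design: decltype(auto) is a C++14 feature that lets with_lock forward f's return value faithfully, including reference returns, and the pattern also works when f returns void.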
I am working on a multi-threaded project in C++ and I have a question about std::mutex.
Let's assume that I have a stack.
#include <exception>
#include <memory>
#include <mutex>
#include <stack>
struct empty_stack: std::exception
{
const char* what() const throw();
};
template<typename T>
class threadsafe_stack
{
private:
std::stack<T> data;
mutable std::mutex m;
public:
threadsafe_stack(){}
threadsafe_stack(const threadsafe_stack& other)
{
std::lock_guard<std::mutex> lock(other.m);
data=other.data;
}
threadsafe_stack& operator=(const threadsafe_stack&) = delete;
void push(T new_value)
{
std::lock_guard<std::mutex> lock(m);
data.push(new_value);
}
std::shared_ptr<T> pop()
{
std::lock_guard<std::mutex> lock(m);
if(data.empty()) throw empty_stack();
std::shared_ptr<T> const res(std::make_shared<T>(data.top()));
data.pop();
return res;
}
void pop(T& value)
{
std::lock_guard<std::mutex> lock(m);
if(data.empty()) throw empty_stack();
value=data.top();
data.pop();
}
bool empty() const
{
std::lock_guard<std::mutex> lock(m);
return data.empty();
}
};
Someone said that using this stack can avoid race conditions. However, I think the problem here is that the mutex (aka mutual exclusion) only protects each individual function, not the functions together. For example, I can have threads call push and pop; those functions can still race with each other.
For example:
threadsafe_stack st; // global variable, for simplicity
void fun1(threadsafe_stack& st)
{
    std::lock_guard<std::mutex> lock(m); // m: some additional, external mutex
    T t;
    st.push(t);
    t = st.pop();
    //
}
void fun2(threadsafe_stack& st)
{
    std::lock_guard<std::mutex> lock(m);
    T t, t2;
    t = st.pop();
    // Do big things
    st.push(t2);
    //
}
If fun1 and fun2 run on different threads and use the same stack (a global variable, for simplicity), can there still be a race condition?
The only solution I can think of is some kind of atomic transaction: instead of calling push(), pop() and empty() directly, I call them via a single dispatching function, protected by only one mutex.
For example:
#define PUSH 0
#define POP 1
#define EMPTY 2
void changeStack(int kindOfFunction, T* input, bool* isEmpty)
{
    std::lock_guard<std::mutex> lock(m);
    switch(kindOfFunction){
        case PUSH:
            push(*input);
            break;
        case POP:
            *input = pop();
            break;
        case EMPTY:
            *isEmpty = empty();
            break;
    }
}
Is my solution good? Or am I just overthinking it, and the first solution my friend told me about is good enough? Is there any other solution, ideally one that avoids the "atomic transaction" style I suggest?
A given mutex is a single lock and can be held by a single thread at any one time.
If a thread (T1) is holding the lock on a given object in push() another thread (T2) cannot acquire it in pop() and will be blocked until T1 releases it. At that point of release T2 (or another thread also blocked by the same mutex) will be unblocked and allowed to proceed.
You do not need to do all the locking and unlocking in one member.
The point where you may still be introducing a race condition is constructs like this if they appear in consumer code:
if(!stack.empty()){
auto item=stack.pop();//Guaranteed?
}
If another thread T2 enters pop() after thread T1 enters empty() (above) and gets blocked waiting on the mutex then the pop() in T1 may fail because T2 'got there first'. Any number of actions might take place between the end of empty() and the start of pop() in that snippet unless other synchronization is handling it.
In this case you should imagine T1 & T2 literally racing to pop() though of course they may be racing to different members and still invalidate each other...
If you want to build code like that you usually have to then add further atomic member functions like try_pop() which returns (say) an empty std::shared_ptr<> if the stack is empty.
I hope this sentence isn't confusing:
Locking the object's mutex inside member functions avoids race conditions during calls to those member functions, but not in between calls to those member functions.
The best way to solve that is by adding "composite" functions that do the job of more than one "logical" operation, as sketched below. That tends to go against good class design, in which you design a logical set of minimal operations and let the consuming code combine them.
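For instance, the questioner's pop-then-push sequence could become one atomic composite member (replace_top is a hypothetical name, reusing data, m and empty_stack from the class above):
std::shared_ptr<T> replace_top(T new_value)
{
    std::lock_guard<std::mutex> lock(m);
    if(data.empty()) throw empty_stack();
    std::shared_ptr<T> old(std::make_shared<T>(data.top()));
    data.pop();
    data.push(new_value);
    return old; // swap happens in one critical section: no window for races
}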
The alternative is to allow the consuming code access to the mutex, for example by exposing void lock() const; and void unlock() const; members. That is usually not preferred, because (a) it becomes very easy for consumer code to create deadlocks, and (b) you either use a recursive lock (with its overhead) or double up the member functions again:
void pop(); //Self locking version...
void pop_prelocked(); //Caller must hold object mutex or program invalidated.
Whether you expose them as public or protected or not, try_pop() would look something like this:
std::shared_ptr<T> try_pop(){
std::lock_guard<std::mutex> guard(m);
if(empty_prelocked()){
return std::shared_ptr<T>();
}
return pop_prelocked();
}
Adding a mutex and acquiring it at the start of each member is only the start of the story...
Footnote: Hopefully that explains mutual exclusion (mut-ex). There's a whole other topic around memory barriers lurking below the surface here, but if you use mutexes in this way you can treat that as an implementation detail for now...
You misunderstand something. You don't need that changeStack function.
If you forget about lock_guard for a moment, here's what it looks like (with lock_guard the code does the same; lock_guard just makes it convenient by making the unlock automatic):
void push() {
    m.lock();
    // do the push
    m.unlock();
}
void pop() {
    m.lock();
    // do the pop
    m.unlock();
}
When push is called, the mutex will be locked. Now imagine that, on another thread, pop is called. pop tries to lock the mutex, but it cannot, because push has already locked it. So it has to wait for push to unlock the mutex. When push unlocks the mutex, pop can lock it.
So, in short, it is std::mutex which does the mutual exclusion, not the lock_guard.
I am using a std::condition_variable combined with a std::unique_lock like this.
std::mutex a_mutex;
std::condition_variable a_condition_variable;
std::unique_lock<std::mutex> a_lock(a_mutex);
a_condition_variable.wait(a_lock, [this] {return something;});
//Do something
a_lock.unlock();
It works fine. As I understand it, std::condition_variable accepts a std::unique_lock to wait on. But I am trying to combine it with a std::lock_guard and am not able to.
My question is: Is it possible to replace the std::unique_lock with a std::lock_guard instead? That would relieve me from manually unlocking the lock every time I use it.
No, a std::unique_lock is needed if it is used with std::condition_variable. std::lock_guard may have less overhead, but it cannot be used with std::condition_variable.
But the std::unique_lock doesn't need to be manually unlocked; it also unlocks when it goes out of scope, like std::lock_guard. So the waiting code could be written as:
std::mutex a_mutex;
std::condition_variable a_condition_variable;
{
std::unique_lock<std::mutex> a_lock(a_mutex);
a_condition_variable.wait(a_lock, [this] {return something;});
//Do something
}
See http://en.cppreference.com/w/cpp/thread/unique_lock
Any call to wait() on a condition variable will always need to lock() and unlock() the underlying mutex. Since the wrapper lock_guard<> does not provide these functions, it can never be used with wait().
Still, you could write your own simple mutex wrapper based on lock_guard<> and add the two necessary member functions. Additionally, you would have to use std::condition_variable_any, which accepts any lock/mutex with a lock()/unlock() interface:
#include <mutex>
template<typename MutexT>
class my_lock_guard
{
public:
    explicit my_lock_guard(MutexT& m) : mutex_(m)
    { mutex_.lock(); }
    my_lock_guard(MutexT& m, std::adopt_lock_t) : mutex_(m)
    { } // calling thread already owns the mutex
    ~my_lock_guard()
    { mutex_.unlock(); }
    void lock()
    { mutex_.lock(); }
    void unlock()
    { mutex_.unlock(); }
    my_lock_guard(const my_lock_guard &) = delete;
    my_lock_guard& operator=(const my_lock_guard &) = delete;
private:
    MutexT& mutex_; // plain identifiers: double-underscore names are reserved for the implementation
};
And then:
#include <condition_variable>
...
std::mutex m;
my_lock_guard<std::mutex> lg(m);
std::condition_variable_any cva;
cva.wait(lg, [] { return something;});
// do something ...
...
Impossible, but you don’t actually need that.
std::unique_lock automatically unlocks itself in its destructor, if it holds the lock.
I have a question regarding thread safety and mutexes. I have two functions that must not be executed at the same time, because that could cause problems:
std::mutex mutex;
void A() {
std::lock_guard<std::mutex> lock(mutex);
//do something (shouldn't be done while function B is executing)
}
T B() {
std::lock_guard<std::mutex> lock(mutex);
//do something (shouldn't be done while function A is executing)
return something;
}
Now the thing is that functions A and B should not be executed at the same time; that's why I use the mutex. However, it is perfectly fine if function B is called simultaneously from multiple threads, yet this is also prevented by the mutex (and I don't want that). Now, is there a way to ensure that A and B are not executed at the same time, while still letting B be executed multiple times in parallel?
If C++14 is an option, you could use a shared mutex (sometimes called "reader-writer" mutex). Basically, inside function A() you would acquire a unique (exclusive, "writer") lock, while inside function B() you would acquire a shared (non-exclusive, "reader") lock.
As long as a shared lock exists, the mutex cannot be acquired exclusively by other threads (but can still be acquired non-exclusively); as long as an exclusive lock exists, the mutex cannot be acquired by any other thread at all.
The result is that you can have several threads concurrently executing function B(), while the execution of function A() prevents concurrent executions of both A() and B() by other threads:
#include <shared_mutex>
std::shared_timed_mutex mutex;
void A() {
std::unique_lock<std::shared_timed_mutex> lock(mutex);
//do something (shouldn't be done while function B is executing)
}
T B() {
std::shared_lock<std::shared_timed_mutex> lock(mutex);
//do something (shouldn't be done while function A is executing)
return something;
}
Notice that some synchronization overhead will always be present, even for concurrent executions of B(), and whether this eventually gives you better performance than plain mutexes depends heavily on what is going on inside and outside those functions. Always measure before committing to a more complicated solution.
Boost.Thread also provides an implementation of shared_mutex.
You have an option in C++14: use std::shared_timed_mutex. A would use lock (exclusive), B would use lock_shared (shared).
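A sketch with the raw member calls (the RAII wrappers shown in the previous answer are usually preferable):
#include <shared_mutex> // C++14

std::shared_timed_mutex mtx;

void A()
{
    mtx.lock();        // exclusive: excludes all other A()s and B()s
    // ...
    mtx.unlock();
}

void B()
{
    mtx.lock_shared(); // shared: any number of B()s may run concurrently
    // ...
    mtx.unlock_shared();
}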
This is quite possibly full of bugs, but since you have no C++14, you could create a lock-counting wrapper around std::mutex and use that:
// Lock-counting class
class SharedLock
{
public:
SharedLock(std::mutex& m) : count(0), shared(m) {}
friend class Lock;
// RAII lock
class Lock
{
public:
Lock(SharedLock& l) : lock(l) { lock.lock(); }
~Lock() { lock.unlock(); }
private:
SharedLock& lock;
};
private:
void lock()
{
std::lock_guard<std::mutex> guard(internal);
if (count == 0)
{
shared.lock();
}
++count;
}
void unlock()
{
std::lock_guard<std::mutex> guard(internal);
--count;
if (count == 0)
{
shared.unlock();
}
}
int count;
std::mutex& shared;
std::mutex internal;
};
std::mutex shared_mutex;
void A()
{
std::lock_guard<std::mutex> lock(shared_mutex);
// ...
}
void B()
{
static SharedLock shared_lock(shared_mutex);
SharedLock::Lock mylock(shared_lock);
// ...
}
... unless you want to dive into Boost, of course.
I want to have a thread wait for the destruction of a specific object by another thread. I thought about implementing it somehow like this:
class Foo {
private:
pthread_mutex_t* mutex;
pthread_cond_t* condition;
public:
Foo(pthread_mutex_t* _mutex, pthread_cond_t* _condition) : mutex(_mutex), condition(_condition) {}
void waitForDestruction(void) {
pthread_mutex_lock(mutex);
pthread_cond_wait(condition,mutex);
pthread_mutex_unlock(mutex);
}
~Foo(void) {
pthread_mutex_lock(mutex);
pthread_cond_signal(condition);
pthread_mutex_unlock(mutex);
}
};
I know, however, that I must handle spurious wakeups in the waitForDestruction method, but I can't call anything on 'this', because it could already be destructed.
Another possibility that crossed my mind was to not use a condition variable, but to lock the mutex in the constructor, unlock it in the destructor, and lock/unlock it in the waitForDestruction method. This should work with a non-recursive mutex, and IIRC I can unlock a mutex from a thread which didn't lock it, right? Will the second option suffer from any spurious wakeups?
It is always a difficult matter. But how about these lines of code:
struct FooSync {
typedef boost::shared_ptr<FooSync> Ptr;
FooSync() : owner(boost::this_thread::get_id()) {
}
void Wait() {
assert(boost::this_thread::get_id() != owner);
mutex.lock();
mutex.unlock();
}
boost::mutex mutex;
boost::thread::id owner;
};
struct Foo {
Foo() { }
~Foo() {
for (size_t i = 0; i < waiters.size(); ++i) {
waiters[i]->mutex.unlock();
}
}
FooSync::Ptr GetSync() {
waiters.push_back(FooSync::Ptr(new FooSync));
waiters.back()->mutex.lock();
return waiters.back();
}
std::vector<FooSync::Ptr> waiters;
};
The solution above would allow any number of destruction-wait objects on a single Foo object, as long as the memory occupied by these objects is managed correctly. It seems that nothing prevents Foo instances from being created on the stack.
The only drawback I see is that it requires destruction-wait objects to always be created in the thread that "owns" the Foo instance; otherwise a recursive lock will probably happen. What's more, if GetSync gets called from multiple threads, a race condition may occur after push_back.
EDIT:
OK, I have reconsidered the problem and came up with a new solution. Take a look:
typedef boost::shared_ptr<boost::shared_mutex> MutexPtr;
struct FooSync {
typedef boost::shared_ptr<FooSync> Ptr;
FooSync(MutexPtr const& ptr) : mutex(ptr) {
}
void Wait() {
mutex->lock_shared();
mutex->unlock_shared();
}
MutexPtr mutex;
};
struct Foo {
Foo() : mutex(new boost::shared_mutex) {
mutex->lock();
}
~Foo() {
mutex->unlock();
}
FooSync::Ptr GetSync() {
return FooSync::Ptr(new FooSync(mutex));
}
MutexPtr mutex;
};
Now it seems reasonably cleaner, and far fewer points in the code are subject to race conditions. There is only one synchronization primitive, shared between the object itself and all the sync objects. Some effort must still be taken to handle the case where Wait is called from the thread that "owns" the object (as in my first example). If the target platform does not support shared_mutex, it is OK to go with a good old mutex instead; shared_mutex just reduces the locking burden when there are many FooSyncs waiting.
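A hypothetical usage sketch of this second design; the waiting thread blocks inside Wait() until ~Foo releases the exclusive lock taken in the constructor:
Foo* foo = new Foo;
FooSync::Ptr sync = foo->GetSync();

boost::thread waiter([sync] { sync->Wait(); }); // blocks on lock_shared()

delete foo;    // ~Foo calls mutex->unlock(); Wait() can now proceed
waiter.join();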