Situation: I have two methods: method1 loads the config values; method2 returns the values of the loaded config. Both methods may be called by different threads.
Issue: I want to use one mutex to lock/unlock both methods, so that when method1 holds the lock and method2 is called, method2 is also blocked. I get confused when I read about mutexes, where they say that a mutex locks a region of code until unlock is called.
Question: When I lock method1, am I also locking method2?
Expectation: What I want is for method2 to wait while method1 is updating the config data that method2 will return.
Sample Code:
void Class::method1() {
    pthread_mutex_lock(mutex1);
    string1 = "a value";
    pthread_mutex_unlock(mutex1);
}

void Class::method2(string& aString) {
    pthread_mutex_lock(mutex1);
    aString = string1;
    pthread_mutex_unlock(mutex1);
    return;
}
Yes, you can (and should) use the same mutex. It is preferable, though, to acquire it with RAII (you could use std::lock_guard if your mutex were a std::mutex, or a wrapper around pthread_mutex_t implementing lock() and unlock()):
void Class::method1() {
    std::lock_guard<std::mutex> lock(mutex1);
    ...
}
This way, not only can you place return statements anywhere in the code, but you also get exception safety (the mutex will be unlocked if the method throws an exception).
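A minimal sketch of such a wrapper around pthread_mutex_t (the class name PthreadMutex and the member layout are assumptions, not from the question) could look like this; std::lock_guard only needs lock() and unlock():

#include <pthread.h>
#include <mutex>   // std::lock_guard

// Illustrative wrapper: owns a pthread_mutex_t and exposes the
// BasicLockable interface (lock/unlock) that std::lock_guard requires.
class PthreadMutex {
public:
    PthreadMutex()  { pthread_mutex_init(&m_, nullptr); }
    ~PthreadMutex() { pthread_mutex_destroy(&m_); }
    PthreadMutex(const PthreadMutex&) = delete;
    PthreadMutex& operator=(const PthreadMutex&) = delete;

    void lock()   { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }

private:
    pthread_mutex_t m_;
};

// Sketch of the original methods, assuming mutex1 is a PthreadMutex member:
void Class::method1() {
    std::lock_guard<PthreadMutex> lock(mutex1);
    string1 = "a value";
}   // mutex released here, even if the assignment throws

void Class::method2(string& aString) {
    std::lock_guard<PthreadMutex> lock(mutex1);
    aString = string1;
}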
This is precisely how mutexes should be used -- to protect data from being accessed in one thread while another thread is, or might be, modifying it. Just fix the bug in method2 (preferably by acquiring the mutex using RAII and letting the mutex holder's destructor unlock the mutex after the return value is copied).
Related
I have the namespace below, in which func1 and func2 will be called from different threads.
#include <mutex>
#include <thread>

namespace test {
    std::mutex mu;

    void func1() {
        std::lock_guard<std::mutex> lock(mu);
        // the whole function needs to be protected
    }

    void func2() {
        mu.lock();
        // some code that should not be executed when func1 is executed
        mu.unlock();
        // some other code
    }
}
Is it deadlock-safe to use this mutex both ways (once with lock_guard, and once with plain lock()/unlock() outside of it) to protect these critical sections? If not, how can I achieve this logic?
Yes, you can effectively mix and match different guard instances (e.g. lock_guard, unique_lock, etc.) with std::mutex in different functions. One case I run into occasionally is when I want to use std::lock_guard for most methods, but usage of std::condition_variable expects a std::unique_lock for its wait method.
To elaborate on what Oblivion said, I typically introduce a new scope block within a function so that usage of std::lock_guard is consistent. Example:
void func2() {
    { // ENTER LOCK
        std::lock_guard<std::mutex> lck(mu);
        // some code that should not be executed when func1 is executed
    } // EXIT LOCK

    // some other (thread safe) code
}
The advantage of using the above pattern is that if anything throws an exception within the critical section that is under the lock, the destructor of lck will still be invoked and hence unlock the mutex.
All the lock_guard does is guarantee unlock on destruction. It's a convenience for getting the code right when a function can take multiple paths (think of exceptions!), not a necessity. It also builds on the "regular" lock() and unlock() functions. In summary, it is safe.
Deadlock happens when at least two mutexes are involved, or when a single mutex is never unlocked for whatever reason.
The only issue with the second function is that, in case of an exception, the lock will not be released.
You can simply use lock_guard, or anything else that gets destroyed (and unlocks the mutex in its destructor), to avoid such a scenario, as you did for the first function.
I was using the following wait/signal approach to let threads inform each other.
std::condition_variable condBiz;
std::mutex mutexBar;
..

void Foo::wait()
{
    std::unique_lock<std::mutex> waitPoint(mutexBar);
    if (waitPoint.owns_lock())
    {
        condBiz.wait(waitPoint);
    }
}

void Foo::signal()
{
    std::unique_lock<std::mutex> waitPoint(mutexBar);
    condBiz.notify_all();
}

void Foo::safeSection(std::function<void(void)> & f)
{
    std::unique_lock<std::mutex> waitPoint(mutexBar);
    f();
}
Then I converted the locking from unique_lock to lock_guard, because I'm not returning the unique_lock to use somewhere else (other than wait/signal) and lock_guard is said to have less overhead:
void Foo::safeSection(std::function<void(void)> & f)
{
    std::lock_guard<std::mutex> waitPoint(mutexBar); // same mutex object
    f();
}
and it works.
Does this work on all platforms, or does it just happen to work on my current platform? Can unique_lock and lock_guard work with each other using the same mutex object?
Both std::unique_lock and std::lock_guard lock the associated mutex in the constructor and unlock it in the destructor.
std::unique_lock:
Member functions
(constructor) constructs a unique_lock, optionally locking the supplied mutex
(destructor) unlocks the associated mutex, if owned
and the same for std::lock_guard:
Member functions
(constructor) constructs a lock_guard, optionally locking the given mutex
(destructor) destructs the lock_guard object, unlocks the underlying mutex
Since both behave the same when used as a RAII-style wrapper, I see no obstacle to using them together, even with the same mutex.
It has been pointed out in the comments to your post that checking if the unique_lock is owned in Foo::wait() is pointless, because the associated mutex must be owned by the lock at that point in order for the thread to be proceeding.
Instead, your condition variable should be checking some meaningful condition, and it should do so in a while loop or by using the overload of condition_variable::wait which takes a predicate as its second argument, which the C++ standard requires to behave as:
while (!pred()) wait(lock);
The reason for checking the predicate in a while loop is that, apart from the fact that the condition may already be satisfied so no wait is necessary, the condition variable may spuriously wake up even when not signalled to do so.
Apart from that there is no reason why the signalling thread should not use a lock_guard with respect to the associated mutex. But I am not clear what you are trying to do.
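For illustration, a sketch of the wait/signal pair rewritten around an explicit flag (the bool ready member is an assumption, not part of the original Foo):

#include <condition_variable>
#include <mutex>

class Foo {
    std::condition_variable condBiz;
    std::mutex mutexBar;
    bool ready = false;   // hypothetical condition the threads agree on

public:
    void wait()
    {
        std::unique_lock<std::mutex> waitPoint(mutexBar);
        // The predicate overload loops internally, so spurious wakeups are handled.
        condBiz.wait(waitPoint, [this] { return ready; });
        ready = false;    // consume the signal, if that is the desired semantics
    }

    void signal()
    {
        {
            std::lock_guard<std::mutex> guard(mutexBar);   // lock_guard is fine here
            ready = true;                                  // change the condition under the mutex
        }
        condBiz.notify_all();
    }
};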
My mutex class is defined as follows:
class Mutex {
    static pthread_mutex_t mutex;

public:
    Mutex() {
        pthread_mutex_init(&mutex, NULL);
        while (pthread_mutex_trylock(&mutex)) {
            sleep(2000);
        }
    }

    virtual ~Mutex() {
        pthread_mutex_unlock(&mutex);
        pthread_mutex_destroy(&mutex);
    }
};
The functions to which I am trying to apply mutual exclusion use this class like this:
void doSomething() {
    Mutex mutex;
    // do something
}
This way, when the constructor is called, the mutex is initialized and the constructor tries to obtain the lock on it. When the object goes out of scope at the end of the function, it is automatically destroyed.
But if one thread has a lock on the mutex and another thread runs pthread_mutex_init on it, what exactly happens? Will the lock held by the first thread be overridden?
Pretty easy, from POSIX.1-2013:
Attempting to initialize an already initialized mutex results in undefined behavior.
That's why you have an alternative way of initializing mutexes:
// in your .cpp somewhere
pthread_mutex_t Mutex::mutex = PTHREAD_MUTEX_INITIALIZER;
Apart from this, logically speaking, your class seems very questionable. Do you really want one global lock for all users of Mutex, no matter what they're doing? You should employ fine-grained locks, or you'll artificially limit your own scalability via software lockout.
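For illustration only (the names MutexGuard and configMutex are made up), a finer-grained approach keeps one statically initialized pthread_mutex_t per protected resource and uses a guard that only locks and unlocks it, never initializing or destroying it:

#include <pthread.h>

// Guard that locks an existing mutex in its constructor and unlocks it in its
// destructor; it never calls init/destroy, so concurrent use is well defined.
class MutexGuard {
public:
    explicit MutexGuard(pthread_mutex_t& m) : m_(m) { pthread_mutex_lock(&m_); }
    ~MutexGuard() { pthread_mutex_unlock(&m_); }
    MutexGuard(const MutexGuard&) = delete;
    MutexGuard& operator=(const MutexGuard&) = delete;

private:
    pthread_mutex_t& m_;
};

// One mutex per protected resource, initialized exactly once.
static pthread_mutex_t configMutex = PTHREAD_MUTEX_INITIALIZER;

void doSomething() {
    MutexGuard guard(configMutex);   // locked here
    // ... work on the data configMutex protects ...
}                                    // unlocked when guard goes out of scope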
I have an object and all its functions should be executed in sequential order.
I know it is possible to do that with a mutex, like this:
#include <mutex>

class myClass {
private:
    std::mutex mtx;

public:
    void exampleMethod();
};

void myClass::exampleMethod() {
    std::unique_lock<std::mutex> lck(mtx); // lock mutex until end of scope
    // do some stuff
}
But with this technique a deadlock occurs when another mutex-locked method is called within exampleMethod.
So I'm searching for a better solution.
The default std::atomic access is sequentially consistent, so it's not possible to read and write this object at the same time. But when I access my object and call a method, is the whole function call also atomic, or is it more like this:
object* obj = atomicObj.load(); // read atomic
obj->doSomething();             // call method on the non-atomic object
If so, is there a better way than locking most of the functions with a mutex?
Stop and think about when you actually need to lock a mutex. If you have some helper function that is called within many other functions, it probably shouldn't try to lock the mutex, because the caller already will have.
If in some contexts it is not called by another member function, and so does need to take a lock, provide a wrapper function that actually does that. It is not uncommon to have 2 versions of member functions, a public foo() and a private fooNoLock(), where:
public:
    void foo() {
        std::lock_guard<std::mutex> l(mtx);
        fooNoLock();
    }

private:
    void fooNoLock() {
        // do stuff that operates on some shared resource...
    }
In my experience, recursive mutexes are a code smell that indicate the author hasn't really got their head around the way the functions are used - not always wrong, but when I see one I get suspicious.
As for atomic operations, they can really only be applied to small arithmetic operations, say incrementing an integer or swapping two pointers. These operations are not automatically atomic, but when you use atomic operations, these are the sorts of things they can be used for. You certainly can't have any reasonable expectations about two separate operations on a single atomic object: anything could happen in between them.
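For example (a standalone sketch, not the questioner's class; Widget is a placeholder type), each of these operations is individually atomic, but a load followed by a separate member call is not:

#include <atomic>

struct Widget { void doSomething() {} };     // placeholder type for the sketch

std::atomic<int> counter{0};
std::atomic<Widget*> current{nullptr};

void examples(Widget* replacement) {
    counter.fetch_add(1);                         // atomic increment
    Widget* old = current.exchange(replacement);  // atomic pointer swap
    (void)old;

    // The two lines below are NOT one atomic step: another thread may
    // swap in a different Widget between the load and the method call.
    if (Widget* w = current.load())
        w->doSomething();
}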
You could use a std::recursive_mutex instead. This will allow a thread that already owns the mutex to reacquire it without blocking. However, if another thread tries to acquire the lock, it will block.
As #BoBTFish correctly indicated, it is better to separate your class's public interface, whose member functions acquire a non-recursive lock and then call private methods that don't. Your code must then assume a lock is always held when a private method is run.
To be safe about this, you may add a reference to std::unique_lock<std::mutex> to each of the methods that require the lock to be held.
Thus, even if you happen to call one private method from another, you would need to make sure the mutex is locked before execution:
class myClass
{
    std::mutex mtx;

    void i_exampleMethod(std::unique_lock<std::mutex> &)
    {
        // execute method
    }

public:
    void exampleMethod()
    {
        std::unique_lock<std::mutex> lock(mtx);
        i_exampleMethod(lock);
    }
};
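For completeness, a minimal sketch of the std::recursive_mutex alternative mentioned above (helperMethod is an illustrative name):

#include <mutex>

class myClass {
    std::recursive_mutex mtx;

public:
    void exampleMethod() {
        std::lock_guard<std::recursive_mutex> lck(mtx);
        helperMethod();   // re-locking from the same thread does not block
    }

    void helperMethod() {
        std::lock_guard<std::recursive_mutex> lck(mtx);   // same owner: no deadlock
        // do some stuff
    }
};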
I am very confused about the difference between a lock and a mutex. The Boost docs say:
Lock Types
Class template lock_guard
Class template unique_lock
Class template shared_lock
Class template upgrade_lock
Class template upgrade_to_unique_lock
Mutex-specific class scoped_try_lock
Mutex Types
Class mutex
Typedef try_mutex
Class timed_mutex
Class recursive_mutex
Typedef recursive_try_mutex
Class recursive_timed_mutex
Class shared_mutex
In another article, I see functions like this,
boost::shared_mutex _access;

void reader()
{
    boost::shared_lock< boost::shared_mutex > lock(_access);
    // do work here, without anyone having exclusive access
}

void conditional_writer()
{
    boost::upgrade_lock< boost::shared_mutex > lock(_access);
    // do work here, without anyone having exclusive access

    if (something) {
        boost::upgrade_to_unique_lock< boost::shared_mutex > uniqueLock(lock);
        // do work here, but now you have exclusive access
    }

    // do more work here, without anyone having exclusive access
}
Updated questions
Can anyone offer some clarification on the difference between a "mutex" and a "lock"?
Is it necessary to create a shared_lock for a shared_mutex?
What happens if I create a unique_lock for a shared_mutex?
Or if I create a shared_lock for a mutex, does it mean the mutex cannot be shared among multiple threads?
A mutex is a synchronization object. You acquire a lock on a mutex at the beginning of a section of code, and release it at the end, in order to ensure that no other thread is accessing the same data at the same time. A mutex typically has a lifetime equal to that of the data it is protecting, and that one mutex is accessed by multiple threads.
A lock object is an object that encapsulates that lock. When the object is constructed it acquires the lock on the mutex. When it is destructed the lock is released. You typically create a new lock object for every access to the shared data.
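To make that concrete for the shared_mutex questions, here is a sketch using the std equivalents of the Boost types (configMutex and config are illustrative names): readers take a shared_lock, a writer takes a unique_lock on the same shared_mutex, and a shared_lock over a plain std::mutex would not even compile, because std::mutex has no lock_shared().

#include <mutex>
#include <shared_mutex>
#include <string>

std::shared_mutex configMutex;
std::string config;

std::string readConfig() {
    std::shared_lock<std::shared_mutex> lock(configMutex);  // many readers at once
    return config;
}

void writeConfig(const std::string& value) {
    std::unique_lock<std::shared_mutex> lock(configMutex);  // exclusive: blocks readers and writers
    config = value;
}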
A mutex is an object which can be locked. A lock is the object which maintains the lock. To create a lock, you need to pass it a mutex.
Locks can provide mutual exclusion but not condition synchronization. Unlike a semaphore, a lock has an owner, and ownership plays an important role in the behavior of a lock.
Example:
class lockableObject {
    public void F() {
        mutex.lock(); ...; mutex.unlock();
    }

    public void G() {
        mutex.lock(); ...; F(); ...; mutex.unlock();
    }

    private mutexLock mutex;
}

// method G() calls method F()
Lock mutex in class lockableObject is used to turn methods F() and G() into critical sections. Thus, only one thread at a time can execute inside a method of a lockableObject. When a thread calls method G(), the mutex is locked. When method G() calls method F(), mutex.lock() is executed in F(), but the calling thread is not blocked since it already owns mutex. If mutex were a binary semaphore instead of a lock, the call from G() to F() would block the calling thread when mutex.P() was executed in F(). (Recall that completions of P() and V() operations on a binary semaphore must alternate.) This would create a deadlock, since no other threads would be able to execute inside F() or G().
These are the differences between locks and binary semaphores:
1. For a binary semaphore, if two calls are made to P() without any intervening call to V(), the second call will block. But a thread that owns a lock and requests ownership again is not blocked. (Beware of the fact that locks are not always recursive, so check the documentation before using a lock.)
2. The owner of successive calls to lock() and unlock() must be the same thread. But successive calls to P() and V() can be made by different threads.
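As a rough modern-C++ illustration of these two differences (using std::recursive_mutex as an owner-aware recursive lock and C++20 std::binary_semaphore as the binary semaphore; this example is not from the quoted text):

#include <mutex>
#include <semaphore>   // C++20
#include <thread>

std::recursive_mutex ownerLock;
std::binary_semaphore flag{1};

void lockExample() {
    ownerLock.lock();
    ownerLock.lock();     // difference 1: same owner, so this does not block
    ownerLock.unlock();
    ownerLock.unlock();   // difference 2: must be unlocked by the owning thread
}

void semaphoreExample() {
    flag.acquire();                              // P(): a second acquire() here would block
    std::thread other([] { flag.release(); });   // V() from a different thread is allowed
    other.join();
}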