How to best write multiple functions that use the same mutexes - C++

Suppose I have some code that looks like this:
std::mutex g_mutex;
void foo()
{
g_mutex.lock();
...
g_mutex.unlock();
}
void foobar()
{
g_mutex.lock();
...
g_mutex.unlock();
foo();
g_mutex.lock();
...
g_mutex.unlock();
}
Is there a pattern I can use such that in foobar() I can just lock the mutex once?

I can think of two solutions:
1. Use std::recursive_mutex
This way there's no problem if the same thread locks the mutex more than once, so you don't have to unlock it before calling the function.
Use lock_guard or unique_lock though; don't litter your code with lock/unlock pairs.
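A minimal sketch of this option, assuming g_mutex from the question is changed to a std::recursive_mutex:
#include <mutex>

std::recursive_mutex g_mutex;

void foo()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex);
    // do foo stuff
}

void foobar()
{
    std::lock_guard<std::recursive_mutex> guard(g_mutex);
    // ...
    foo(); // fine: the same thread may lock a recursive_mutex again
    // ...
}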
2. Make foo() take a guard as argument
Rewrite foo() like this:
void foo(std::lock_guard<std::mutex>&)
{
// do foo stuff
}
This way it's impossible to call foo() without some mutex being locked. The lock_guard argument is a token saying that foo() may only be called under synchronization. Of course, it's still possible to mess this up by locking an unrelated mutex (though that is unlikely if you are implementing the methods of a class, where there's only one mutex in scope to lock).
You can see more details of this approach in Andrei Alexandrescu's pre-C++11 article on the subject.
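For illustration, a caller might then look like this (a sketch, reusing g_mutex from the question):
void foobar()
{
    std::lock_guard<std::mutex> guard(g_mutex);
    // ...
    foo(guard); // the guard is the token proving the mutex is held
    // ...
}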

You can use std::lock_guard<std::mutex>:
void foobar()
{
std::lock_guard<std::mutex> guard(g_mutex);
// ...
} // releases g_mutex automatically

Usually, you rely on the mutex being reentrant - that is, it can be locked many times by the same thread:
void foo() {
g_mutex.lock();
// do foo stuff
g_mutex.unlock();
}
void foobar() {
g_mutex.lock();
foo();
g_mutex.unlock();
}
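Note that locking a plain std::mutex twice from the same thread is undefined behaviour, so for the nested locking above g_mutex would have to be declared as a recursive mutex, e.g.:
#include <mutex>

// allows the same thread to lock it again inside foo() while foobar() holds it
std::recursive_mutex g_mutex;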
If you don't want that for some reason, there is a messier approach, but it's not recommended. This would typically be done only in a class, where you can restrict access to the private functions.
void foo_private()
{
// do foo stuff with the assumption that the lock is acquired.
}
void foo() {
g_mutex.lock();
foo_private();
g_mutex.unlock();
}
void foobar() {
g_mutex.lock();
foo_private();
g_mutex.unlock();
}
Also, as stated in the other answer to your question, you should use std::lock_guard to acquire the lock, as it will correctly unlock the mutex in the event of an exception (or if you simply forget to).
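With std::lock_guard, the wrappers above might become (a sketch, keeping the same names):
void foo() {
    std::lock_guard<std::mutex> guard(g_mutex);
    foo_private();
}
void foobar() {
    std::lock_guard<std::mutex> guard(g_mutex);
    foo_private();
}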

Related

Is unique_lock unlocked when a function is called?

Let's say I have a situation like this:
void consumer(){
unique_lock<mutex> lock(mtx);
foo();
}
void foo(){
/* does the thread still own the mutex here? */
}
I expect it does but I'm not 100% sure.
The destructor of unique_lock calls mtx.unlock() (provided the lock still owns the mutex). The destructor is called at the end of the lifetime of the lock, which in this case is:
void consumer(){
unique_lock<mutex> lock(mtx);
foo();
} // <- here.
So yes, it'll still be locked.
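If you did want the mutex released before foo() runs, unique_lock (unlike lock_guard) lets you unlock it explicitly; a minimal sketch:
void consumer(){
    unique_lock<mutex> lock(mtx);
    // ... work that needs the mutex ...
    lock.unlock(); // release early
    foo();         // foo() now runs without the mutex held
}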

Ensure that a thread doesn't lock a mutex twice?

Say I have a thread running a member method like runController in the example below:
class SomeClass {
public:
SomeClass() {
// Start controller thread
mControllerThread = std::thread(&SomeClass::runController, this);
}
~SomeClass() {
// Stop controller thread
mIsControllerInterrupted = true;
// wait for thread to die.
std::unique_lock<std::mutex> lk(mControllerThreadAlive);
}
// Both controller and external client threads might call this
void modifyObject() {
std::unique_lock<std::mutex> lock(mObjectMutex);
mObject.doSomeModification();
}
//...
private:
std::mutex mObjectMutex;
Object mObject;
std::thread mControllerThread;
std::atomic<bool> mIsControllerInterrupted;
std::mutex mControllerThreadAlive;
void runController() {
std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
while(!mIsControllerInterrupted) {
// Say I need to synchronize on mObject for all of these calls
std::unique_lock<std::mutex> lock(mObjectMutex);
someMethodA();
modifyObject(); // but calling modifyObject will then lock mutex twice
someMethodC();
}
}
//...
};
And some (or all) of the subroutines in runController need to modify data that is shared between threads and guarded by a mutex. Some (or all) of them might also be called by other threads that need to modify this shared data.
With all the glory of C++11 at my disposal, how can I ensure that no thread ever locks a mutex twice?
Right now, I'm passing unique_lock references into the methods as parameters as below. But this seems clunky, difficult to maintain, potentially disastrous, etc...
void modifyObject(std::unique_lock<std::mutex>& objectLock) {
// We don't even know if this lock manages the right mutex...
// so let's waste some time checking that.
if(objectLock.mutex() != &mObjectMutex)
throw std::logic_error("objectLock does not manage mObjectMutex");
// Lock mutex if not locked by this thread
bool wasObjectLockOwned = objectLock.owns_lock();
if(!wasObjectLockOwned)
objectLock.lock();
mObject.doSomeModification();
// restore previous lock state
if(!wasObjectLockOwned)
objectLock.unlock();
}
Thanks!
There are several ways to avoid this kind of programming error. I recommend doing it on a class design level:
separate public from private member functions,
only public member functions lock the mutex,
and public member functions are never called by other member functions.
If a function is needed both internally and externally, create two variants of the function, and delegate from one to the other:
public:
// intended to be used from the outside
int foobar(int x, int y)
{
std::unique_lock<std::mutex> lock(mObjectMutex); // lock the mutex that guards the shared data
return _foobar(x, y);
}
private:
// intended to be used from other (public or private) member functions
int _foobar(int x, int y)
{
// ... code that requires locking
}
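Applied to the class from the question, a sketch might look like this (modifyObjectLocked is a hypothetical private helper; the member names are taken from the question):
public:
// External threads call this; it takes the lock itself.
void modifyObject() {
    std::unique_lock<std::mutex> lock(mObjectMutex);
    modifyObjectLocked();
}
private:
// Assumes mObjectMutex is already held by the caller.
void modifyObjectLocked() {
    mObject.doSomeModification();
}
void runController() {
    std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
    while (!mIsControllerInterrupted) {
        std::unique_lock<std::mutex> lock(mObjectMutex); // locked once per iteration
        someMethodA();
        modifyObjectLocked(); // no second lock attempt
        someMethodC();
    }
}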

How to use recursive QMutex

I'm trying to use a recursive QMutex. I read the QMutex Class Reference but I don't understand how to do it; can someone give me an example?
I need some way to lock a QMutex that can be locked again before it has been unlocked.
If a recursive mutex is not the way, is there any other way?
To create a recursive QMutex you simply pass QMutex::Recursive at construction time, for instance:
QMutex mutex(QMutex::Recursive);
int number = 6;
void method1()
{
mutex.lock();
number *= 5;
mutex.unlock();
}
void method2()
{
mutex.lock();
number *= 3;
mutex.unlock();
}
Recursive means that you can lock the mutex several times from the same thread without having to unlock it first. If I understood your question correctly, that's what you want.
Be careful: if you lock recursively, you must call unlock the same number of times. A better way to lock/unlock a mutex is to use a QMutexLocker:
#include <QMutexLocker>
QMutex mutex(QMutex::Recursive);
int number = 6;
void method1()
{
QMutexLocker locker(&mutex); // Here mutex is locked
number *= 5;
// Here locker goes out of scope.
// When locker is destroyed automatically unlocks mutex
}
void method2()
{
QMutexLocker locker(&mutex);
number *= 3;
}
A recursive mutex can be locked multiple times from a single thread without needing to be unlocked, as long as the same number of unlock calls are made from the same thread. This mechanism comes in handy when a shared resource is used by more than one function, and one of those functions calls another function in which the resource is used.
Consider the following class:
class Foo {
public:
Foo();
void bar(); // Does something to the resource
void thud(); // Calls bar() then does something else to the resource
private:
Resource mRes;
QMutex mLock;
};
An initial implementation may look something like the following:
Foo::Foo() {}
void Foo::bar() {
QMutexLocker locker(&mLock);
mRes.doSomething();
}
void Foo::thud() {
QMutexLocker locker(&mLock);
bar();
mRes.doSomethingElse();
}
The above code will DEADLOCK on calls to thud(): mLock will be acquired in the first line of thud() and once again by the first line of bar(), which will block waiting for thud() to release the lock.
A simple solution would be to make the lock recursive in the ctor.
Foo::Foo() : mLock(QMutex::Recursive) {}
This is an OK fix and will be suitable for many situations; however, one should be aware that there may be a performance penalty to using this solution, since each recursive mutex call may require a system call to identify the current thread id.
In addition to the thread id check, all calls to thud() still execute QMutex::lock() twice!
Designs which require a recursive mutex can often be refactored to eliminate that need. In general, the need for a recursive mutex is a "code smell" and indicates a failure to adhere to the principle of separation of concerns.
For the class Foo, one could imagine creating a private function call which performs the shared computation and keeping the resource locking at the public interface level.
class Foo {
public:
Foo();
void bar(); // Does something to the resource
void thud(); // Does something then does something else to the resource
private:
void doSomething();
private:
Resource mRes;
QMutex mLock;
};
Foo::Foo() {}
// public
void Foo::bar() {
QMutexLocker locker(&mLock);
doSomething();
}
void Foo::thud() {
QMutexLocker locker(&mLock);
doSomething();
mRes.doSomethingElse();
}
// private
void Foo::doSomething() {
mRes.doSomething(); // Notice - no mutex in private function
}
Recursive mode just means that if a thread owns a mutex, and the same thread tries to lock the mutex again, that will succeed. The requirement is that calls to lock/unlock are balanced.
In non-recursive mode, this will result in a deadlock.
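For instance, a minimal sketch with a recursive QMutex:
QMutex m(QMutex::Recursive);

void nested()
{
    m.lock();   // first lock: acquires the mutex
    m.lock();   // second lock from the same thread: succeeds
    // ...
    m.unlock(); // still held by this thread
    m.unlock(); // now released for other threads
}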

waiting on a condition variable in a helper function that's called from the function that acquires the lock

I'm new to the boost threads library.
I have a situation where I acquire a scoped_lock in one function and need to wait on it in a callee.
The code is on the lines of:
class HavingMutex
{
public:
...
private:
static boost::mutex m;
static boost::condition_variable *c;
static void a();
static void b();
static void d();
};
void HavingMutex::a()
{
boost::mutex::scoped_lock lock(m);
...
b() //Need to pass lock here. Dunno how !
}
void HavingMutex::b(lock)
{
if (some condition)
d(lock) // Need to pass lock here. How ?
}
void HavingMutex::d(//Need to get lock here)
{
c->wait(lock); //Need to pass lock here (doesn't allow direct passing of mutex m)
}
Basically, in function d(), I need to access the scoped lock I acquired in a() so that I can wait on it. (Some other thread will notify).
Any help appreciated. Thanks!
Have you tried a simple reference? According to the Boost 1.41 documentation at http://www.boost.org/doc/libs/1_41_0/doc/html/thread/synchronization.html, that should be all that is required.
...
void HavingMutex::a()
{
boost::mutex::scoped_lock lock(m);
...
b(lock);
}
void HavingMutex::b(boost::mutex::scoped_lock &lock)
{
if (some condition) // consider while (!some_condition) here
d(lock);
}
void HavingMutex::d(boost::mutex::scoped_lock &lock)
{
c->wait(lock);
}
void HavingMutex::notify()
{
// set_condition;
c->notify_one();
}
Also, note that the Boost examples have a while loop around the wait: the wait can wake up spuriously, without the condition actually having been signalled, so you should re-check the condition before proceeding.
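A minimal sketch of that pattern, assuming some_condition is the shared state that notify() sets:
void HavingMutex::d(boost::mutex::scoped_lock &lock)
{
    while (!some_condition) // re-check: wait() can return spuriously
        c->wait(lock);
}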
I also suggest you reconsider making all the methods and members static. Instead, make them normal members and create one global object.

how to let a thread wait for destruction of an object

I want to have a thread wait for the destruction of a specific object by another thread. I thought about implementing it somehow like this:
class Foo {
private:
pthread_mutex_t* mutex;
pthread_cond_t* condition;
public:
Foo(pthread_mutex_t* _mutex, pthread_cond_t* _condition) : mutex(_mutex), condition(_condition) {}
void waitForDestruction(void) {
pthread_mutex_lock(mutex);
pthread_cond_wait(condition,mutex);
pthread_mutex_unlock(mutex);
}
~Foo(void) {
pthread_mutex_lock(mutex);
pthread_cond_signal(condition);
pthread_mutex_unlock(mutex);
}
};
I know, however, that I must handle spurious wakeups in the waitForDestruction method, but I can't call anything on 'this', because it could already be destructed.
Another possibility that crossed my mind was to not use a condition variable, but to lock the mutex in the constructor, unlock it in the destructor, and lock/unlock it in the waitForDestruction method. This should work with a non-recursive mutex, and IIRC I can unlock a mutex from a thread which didn't lock it, right? Will the second option suffer from any spurious wakeups?
It is always a difficult matter. But how about these lines of code:
#include <cassert>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

struct FooSync {
typedef boost::shared_ptr<FooSync> Ptr;
FooSync() : owner(boost::this_thread::get_id()) {
}
void Wait() {
assert(boost::this_thread::get_id() != owner);
mutex.lock();
mutex.unlock();
}
boost::mutex mutex;
boost::thread::id owner;
};
struct Foo {
Foo() { }
~Foo() {
for (size_t i = 0; i < waiters.size(); ++i) {
waiters[i]->mutex.unlock();
}
}
FooSync::Ptr GetSync() {
waiters.push_back(FooSync::Ptr(new FooSync));
waiters.back()->mutex.lock();
return waiters.back();
}
std::vector<FooSync::Ptr> waiters;
};
The solution above allows any number of destruction-wait objects on a single Foo object, as long as the memory occupied by those objects is managed correctly. Nothing prevents Foo instances from being created on the stack.
The only drawback I see is that it requires the destruction-wait objects to always be created in the thread that "owns" the Foo instance, otherwise a recursive lock will probably happen. Furthermore, if GetSync() is called from multiple threads, a race condition can occur on the push_back.
EDIT:
OK, I have reconsidered the problem and come up with a new solution. Take a look:
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>

typedef boost::shared_ptr<boost::shared_mutex> MutexPtr;
struct FooSync {
typedef boost::shared_ptr<FooSync> Ptr;
FooSync(MutexPtr const& ptr) : mutex(ptr) {
}
void Wait() {
mutex->lock_shared();
mutex->unlock_shared();
}
MutexPtr mutex;
};
struct Foo {
Foo() : mutex(new boost::shared_mutex) {
mutex->lock();
}
~Foo() {
mutex->unlock();
}
FooSync::Ptr GetSync() {
return FooSync::Ptr(new FooSync(mutex));
}
MutexPtr mutex;
};
Now it is reasonably cleaner, and far fewer points in the code are subject to race conditions. There is only one synchronization primitive shared between the object itself and all the sync objects. Some effort must still be made to handle the case where Wait() is called from the thread that owns the object (as in my first example). If the target platform does not support shared_mutex, it is OK to go along with a good old mutex; shared_mutex just reduces the locking burden when there are many FooSyncs waiting.
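For illustration, usage of the edited version might look like this (a sketch; waiter and main are hypothetical):
// The waiting thread obtains a sync handle while the Foo object is still
// alive, then blocks in Wait() until ~Foo() releases the shared mutex.
void waiter(FooSync::Ptr sync) {
    sync->Wait(); // returns only after the Foo instance has been destroyed
}

int main() {
    Foo* foo = new Foo;                      // ctor locks the shared mutex
    boost::thread t(&waiter, foo->GetSync());
    delete foo;                              // dtor unlocks; waiter wakes up
    t.join();
    return 0;
}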