how to let a thread wait for destruction of an object - c++

I want to have a thread wait for the destruction of a specific object by another thread. I thought about implementing it somehow like this:
class Foo {
private:
    pthread_mutex_t* mutex;
    pthread_cond_t* condition;
public:
    Foo(pthread_mutex_t* _mutex, pthread_cond_t* _condition)
        : mutex(_mutex), condition(_condition) {}

    void waitForDestruction(void) {
        pthread_mutex_lock(mutex);
        pthread_cond_wait(condition, mutex);
        pthread_mutex_unlock(mutex);
    }

    ~Foo(void) {
        pthread_mutex_lock(mutex);
        pthread_cond_signal(condition);
        pthread_mutex_unlock(mutex);
    }
};
I know, however, that I must handle spurious wakeups in the waitForDestruction method, but I can't check anything on 'this', because it may already have been destroyed.
Another possibility that crossed my mind was to not use a condition variable, but to lock the mutex in the constructor, unlock it in the destructor, and lock/unlock it in the waitForDestruction method. This should work with a non-recursive mutex, and IIRC I can unlock a mutex from a thread that didn't lock it, right? Will the second option suffer from any spurious wakeups?

It is always a difficult matter. But how about these lines of code:
struct FooSync {
    typedef boost::shared_ptr<FooSync> Ptr;

    FooSync() : owner(boost::this_thread::get_id()) {}

    void Wait() {
        assert(boost::this_thread::get_id() != owner);
        mutex.lock();
        mutex.unlock();
    }

    boost::mutex mutex;
    boost::thread::id owner;
};

struct Foo {
    Foo() {}

    ~Foo() {
        for (size_t i = 0; i < waiters.size(); ++i) {
            waiters[i]->mutex.unlock();
        }
    }

    FooSync::Ptr GetSync() {
        waiters.push_back(FooSync::Ptr(new FooSync));
        waiters.back()->mutex.lock();
        return waiters.back();
    }

    std::vector<FooSync::Ptr> waiters;
};
The solution above allows any number of destruction-wait objects on a single Foo object, as long as the memory occupied by these objects is managed correctly. Nothing prevents Foo instances from being created on the stack.
One drawback is that the destruction-wait objects must always be created in the thread that "owns" the Foo instance; otherwise a recursive lock will probably occur. Furthermore, if GetSync is called from multiple threads, a race condition may occur after push_back.
EDIT:
Ok, I have reconsidered the problem and came up with a new solution. Take a look:
typedef boost::shared_ptr<boost::shared_mutex> MutexPtr;

struct FooSync {
    typedef boost::shared_ptr<FooSync> Ptr;

    FooSync(MutexPtr const& ptr) : mutex(ptr) {}

    void Wait() {
        mutex->lock_shared();
        mutex->unlock_shared();
    }

    MutexPtr mutex;
};

struct Foo {
    Foo() : mutex(new boost::shared_mutex) {
        mutex->lock();
    }

    ~Foo() {
        mutex->unlock();
    }

    FooSync::Ptr GetSync() {
        return FooSync::Ptr(new FooSync(mutex));
    }

    MutexPtr mutex;
};
Now it is reasonably cleaner, and far fewer points in the code are subject to race conditions. There is only one synchronization primitive, shared between the object itself and all the sync objects. Some care must still be taken for the case where Wait is called in the thread that owns the object itself (as in my first example). If the target platform does not support shared_mutex, it is fine to use a plain mutex instead; shared_mutex simply reduces lock contention when many FooSyncs are waiting.
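For reference, here is a sketch of the same idea translated to the standard library (C++17 std::shared_mutex in place of boost::shared_mutex; the demo function and its names are mine, not from the answer):

```cpp
#include <cassert>
#include <memory>
#include <shared_mutex>
#include <thread>

// The object holds an exclusive lock on a shared_ptr'd shared_mutex for its
// whole lifetime; waiters take (and immediately release) a shared lock,
// which blocks until the destructor releases the exclusive lock.
struct Foo {
    std::shared_ptr<std::shared_mutex> mutex;

    Foo() : mutex(std::make_shared<std::shared_mutex>()) {
        mutex->lock();              // held exclusively until destruction
    }
    ~Foo() {
        mutex->unlock();            // released only when the object dies
    }
    std::shared_ptr<std::shared_mutex> GetSync() const { return mutex; }
};

bool run_demo() {
    bool observed = false;
    auto foo = std::make_unique<Foo>();
    auto sync = foo->GetSync();     // keeps the mutex alive past ~Foo()
    std::thread waiter([&] {
        sync->lock_shared();        // blocks until ~Foo() unlocks
        sync->unlock_shared();
        observed = true;            // runs only after Foo is destroyed
    });
    foo.reset();                    // destroy Foo; the waiter may now proceed
    waiter.join();
    return observed;
}
```

Because the shared_ptr keeps the mutex alive after ~Foo() runs, the waiter never touches freed memory.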

Related

unique_ptr with multithreading

I have some fears about using unique_ptr across threads without a mutex. I wrote the simplified code below; please take a look. If I check that the unique_ptr != nullptr, is it thread safe?
class BigClassCreatedOnce
{
public:
    std::atomic<bool> var;
    // A lot of other stuff
};
The BigClassCreatedOnce instance will be created only once, but I'm not sure whether it is safe to use it between threads.
class MainClass
{
public:
    // m_bigClass used all around the class from the Main Thread
    MainClass()
        : m_bigClass()
        , m_thread()
    {
        m_thread = std::thread([this]() {
            while (1)
            {
                methodToBeCalledFromThread();
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
            }
        });
        // other stuff here
        m_bigClass.reset(new BigClassCreatedOnce()); // created only once
    }

    void methodToBeCalledFromThread()
    {
        if (!m_bigClass) // As I understand this is not safe
        {
            return;
        }
        if (m_bigClass->var.load()) // As I understand this is safe
        {
            // does something
        }
    }

    std::unique_ptr<BigClassCreatedOnce> m_bigClass;
    std::thread m_thread;
};
I just put it into an infinite loop to simplify the sample.
int main()
{
    MainClass ms;
    while (1)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
If I check unique_ptr != nullptr, is it thread safe
No, it is not thread safe. If you have more than one thread and at least one of them writes to the shared data, then you need synchronization. If you do not have it, you have a data race, which is undefined behavior.
m_bigClass.reset(new BigClassCreatedOnce()); // created only once
and
if (!m_bigClass)
can both happen at the same time, so it is a data race.
I would also like to point out that
if (m_bigClass->var.load())
is also not thread safe. var.load() itself is, but the access of m_bigClass is not, so you could have a data race there as well.
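One minimal way to remove this race is to guard both the one-time reset() and the null check with the same mutex (the names below are illustrative, and an int stands in for BigClassCreatedOnce):

```cpp
#include <memory>
#include <mutex>

// The mutex protects the pointer itself, so publish() and is_ready()
// can never touch m_bigClass (here: g_ptr) concurrently.
std::mutex g_ptrMutex;
std::unique_ptr<int> g_ptr;         // stand-in for m_bigClass

void publish() {
    std::lock_guard<std::mutex> lk(g_ptrMutex);
    g_ptr.reset(new int(42));       // the one-time creation
}

bool is_ready() {
    std::lock_guard<std::mutex> lk(g_ptrMutex); // serializes with publish()
    return static_cast<bool>(g_ptr);
}
```

An alternative that needs no lock on the hot path is to finish the reset() before starting the thread, so the thread can never observe a half-published pointer.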

c++: Function that locks mutex for other function but can itself be executed in parallel

I have a question regarding thread safety and mutexes. I have two functions that may not be executed at the same time because this could cause problems:
std::mutex mutex;

void A() {
    std::lock_guard<std::mutex> lock(mutex);
    // do something (shouldn't be done while function B is executing)
}

T B() {
    std::lock_guard<std::mutex> lock(mutex);
    // do something (shouldn't be done while function A is executing)
    return something;
}
Now the thing is that functions A and B should not be executed at the same time; that's why I use the mutex. However, it is perfectly fine if B is called simultaneously from multiple threads, yet this is also prevented by the mutex (and I don't want that). Is there a way to ensure that A and B are not executed at the same time, while still letting B be executed multiple times in parallel?
If C++14 is an option, you could use a shared mutex (sometimes called "reader-writer" mutex). Basically, inside function A() you would acquire a unique (exclusive, "writer") lock, while inside function B() you would acquire a shared (non-exclusive, "reader") lock.
As long as a shared lock exists, the mutex cannot be acquired exclusively by other threads (but can be acquired non-exclusively); as long as an exclusive lock exists, the mutex cannot be acquired by any other thread at all.
The result is that you can have several threads concurrently executing function B(), while the execution of function A() prevents concurrent executions of both A() and B() by other threads:
#include <shared_mutex>

std::shared_timed_mutex mutex;

void A() {
    std::unique_lock<std::shared_timed_mutex> lock(mutex);
    // do something (shouldn't be done while function B is executing)
}

T B() {
    std::shared_lock<std::shared_timed_mutex> lock(mutex);
    // do something (shouldn't be done while function A is executing)
    return something;
}
Notice that some synchronization overhead will always be present, even for concurrent executions of B(), and whether this eventually gives you better performance than plain mutexes depends heavily on what is going on inside and outside those functions - always measure before committing to a more complicated solution.
Boost.Thread also provides an implementation of shared_mutex.
You have an option in C++14.
Use std::shared_timed_mutex.
A would use lock(), B would use lock_shared().
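A minimal sketch of that suggestion (the shared counter and function bodies are placeholders of mine):

```cpp
#include <mutex>
#include <shared_mutex>

std::shared_timed_mutex mtx;    // C++14
int counter = 0;                // placeholder shared state

void A() {
    // exclusive ("writer") lock: uses lock() underneath
    std::unique_lock<std::shared_timed_mutex> lock(mtx);
    ++counter;
}

int B() {
    // shared ("reader") lock: uses lock_shared() underneath,
    // so many threads may run B() at the same time
    std::shared_lock<std::shared_timed_mutex> lock(mtx);
    return counter;
}
```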
This is quite possibly full of bugs, but since you have no C++14 you could create a lock-counting wrapper around std::mutex and use that:
// Lock-counting class
class SharedLock
{
public:
    SharedLock(std::mutex& m) : count(0), shared(m) {}

    friend class Lock;

    // RAII lock
    class Lock
    {
    public:
        Lock(SharedLock& l) : lock(l) { lock.lock(); }
        ~Lock() { lock.unlock(); }
    private:
        SharedLock& lock;
    };

private:
    void lock()
    {
        std::lock_guard<std::mutex> guard(internal);
        if (count == 0)
        {
            shared.lock();
        }
        ++count;
    }

    void unlock()
    {
        std::lock_guard<std::mutex> guard(internal);
        --count;
        if (count == 0)
        {
            shared.unlock();
        }
    }

    int count;
    std::mutex& shared;
    std::mutex internal;
};

std::mutex shared_mutex;

void A()
{
    std::lock_guard<std::mutex> lock(shared_mutex);
    // ...
}

void B()
{
    static SharedLock shared_lock(shared_mutex);
    SharedLock::Lock mylock(shared_lock);
    // ...
}
... unless you want to dive into Boost, of course.

Ensure that a thread doesn't lock a mutex twice?

Say I have a thread running a member method like runController in the example below:
class SomeClass {
public:
    SomeClass() {
        // Start controller thread
        mControllerThread = std::thread(&SomeClass::runController, this);
    }

    ~SomeClass() {
        // Stop controller thread
        mIsControllerInterrupted = true;
        // wait for thread to die.
        std::unique_lock<std::mutex> lk(mControllerThreadAlive);
    }

    // Both controller and external client threads might call this
    void modifyObject() {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        mObject.doSomeModification();
    }
    //...
private:
    std::mutex mObjectMutex;
    Object mObject;
    std::thread mControllerThread;
    std::atomic<bool> mIsControllerInterrupted;
    std::mutex mControllerThreadAlive;

    void runController() {
        std::unique_lock<std::mutex> aliveLock(mControllerThreadAlive);
        while (!mIsControllerInterrupted) {
            // Say I need to synchronize on mObject for all of these calls
            std::unique_lock<std::mutex> lock(mObjectMutex);
            someMethodA();
            modifyObject(); // but calling modifyObject will then lock the mutex twice
            someMethodC();
        }
    }
    //...
};
And some (or all) of the subroutines in runController need to modify data that is shared between threads and guarded by a mutex. Some (or all) of them, might also be called by other threads that need to modify this shared data.
With all the glory of C++11 at my disposal, how can I ensure that no thread ever locks a mutex twice?
Right now, I'm passing unique_lock references into the methods as parameters as below. But this seems clunky, difficult to maintain, potentially disastrous, etc...
void modifyObject(std::unique_lock<std::mutex>& objectLock) {
    // We don't even know if this lock manages the right mutex...
    // so let's waste some time checking that.
    if (objectLock.mutex() != &mObjectMutex)
        throw std::logic_error("wrong mutex");

    // Lock mutex if not locked by this thread
    bool wasObjectLockOwned = objectLock.owns_lock();
    if (!wasObjectLockOwned)
        objectLock.lock();

    mObject.doSomeModification();

    // restore previous lock state
    if (!wasObjectLockOwned)
        objectLock.unlock();
}
Thanks!
There are several ways to avoid this kind of programming error. I recommend handling it at the class-design level:
separate public from private member functions,
let only public member functions lock the mutex,
and never call public member functions from other member functions.
If a function is needed both internally and externally, create two variants of the function, and delegate from one to the other:
public:
    // intended to be used from the outside
    int foobar(int x, int y)
    {
        std::unique_lock<std::mutex> lock(mObjectMutex);
        return _foobar(x, y);
    }

private:
    // intended to be used from other (public or private) member functions
    int _foobar(int x, int y)
    {
        // ... code that requires locking
    }
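A self-contained sketch of this pattern (the class and its names are mine, for illustration only):

```cpp
#include <mutex>

// Public member functions take the lock; private ones assume it is already
// held. Internal calls therefore never lock the mutex twice.
class Counter {
public:
    int add(int x) {
        std::lock_guard<std::mutex> lock(mMutex);
        return addLocked(x);
    }
    int addTwice(int x) {           // locks once, then reuses the private helper
        std::lock_guard<std::mutex> lock(mMutex);
        addLocked(x);
        return addLocked(x);
    }
private:
    int addLocked(int x) {          // precondition: caller holds mMutex
        mValue += x;
        return mValue;
    }
    std::mutex mMutex;
    int mValue = 0;
};
```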

How to use recursive QMutex

I'm trying to use a recursive QMutex. I read the QMutex class reference, but I don't understand how to do it; can someone give me an example?
I need some way of locking a QMutex that can be unlocked before or after the lock method is called.
If a recursive mutex is not the way, is there another way?
To create a recursive QMutex you simply pass QMutex::Recursive at construction time, for instance:
QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    mutex.lock();
    number *= 5;
    mutex.unlock();
}

void method2()
{
    mutex.lock();
    number *= 3;
    mutex.unlock();
}
Recursive means that you can lock the mutex several times from the same thread without having to unlock it first. If I understood your question correctly, that's what you want.
Be careful: if you lock recursively, you must call unlock the same number of times. A better way to lock/unlock a mutex is to use a QMutexLocker:
#include <QMutexLocker>

QMutex mutex(QMutex::Recursive);
int number = 6;

void method1()
{
    QMutexLocker locker(&mutex); // Here mutex is locked
    number *= 5;
    // Here locker goes out of scope.
    // When locker is destroyed it automatically unlocks mutex.
}

void method2()
{
    QMutexLocker locker(&mutex);
    number *= 3;
}
A recursive mutex can be locked multiple times from a single thread without first being unlocked, as long as the same number of unlock calls is eventually made from that thread. This mechanism comes in handy when a shared resource is used by more than one function, and one of those functions calls another function that also uses the resource.
Consider the following class:
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Calls bar() then does something else to the resource
private:
    Resource mRes;
    QMutex mLock;
};
An initial implementation may look something like the following:
Foo::Foo() {}

void Foo::bar() {
    QMutexLocker locker(&mLock);
    mRes.doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    bar();
    mRes.doSomethingElse();
}
The above code will DEADLOCK on calls to thud(): mLock is acquired in the first line of thud() and once again by the first line of bar(), which will block waiting for thud() to release the lock.
A simple solution would be to make the lock recursive in the ctor.
Foo::Foo() : mLock(QMutex::Recursive) {}
This is an OK fix and will be suitable for many situations. However, one should be aware that there may be a performance penalty: each recursive mutex call may require a system call to identify the current thread id.
In addition to the thread-id check, every call to thud() still executes QMutex::lock() twice!
Designs which require a recursive mutex can often be refactored to eliminate the need for it. In general, the need for a recursive mutex is a "code smell" and indicates a need to adhere to the principle of separation of concerns.
For the class Foo, one could imagine creating a private function call which performs the shared computation and keeping the resource locking at the public interface level.
class Foo {
public:
    Foo();
    void bar();  // Does something to the resource
    void thud(); // Does something then does something else to the resource
private:
    void doSomething();
private:
    Resource mRes;
    QMutex mLock;
};

Foo::Foo() {}

// public
void Foo::bar() {
    QMutexLocker locker(&mLock);
    doSomething();
}

void Foo::thud() {
    QMutexLocker locker(&mLock);
    doSomething();
    mRes.doSomethingElse();
}

// private
void Foo::doSomething() {
    mRes.doSomething(); // Notice - no mutex in private function
}
Recursive mode just means that if a thread owns a mutex and the same thread tries to lock it again, that will succeed. The requirement is that calls to lock/unlock are balanced.
In non-recursive mode, this results in a deadlock.
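The same behavior exists in the standard library; here is a small sketch of mine using std::recursive_mutex, with the balanced lock/unlock pairs marked:

```cpp
#include <mutex>

std::recursive_mutex rmtx;
int depth = 0;

void inner() {
    // Same thread already owns rmtx: this lock succeeds anyway.
    std::lock_guard<std::recursive_mutex> lk(rmtx);
    depth = 2;
}                                   // unlock #2

int outer() {
    std::lock_guard<std::recursive_mutex> lk(rmtx); // lock #1
    depth = 1;
    inner();                        // would deadlock with a plain std::mutex
    return depth;
}                                   // unlock #1: calls are balanced
```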

waiting on a condition variable in a helper function that's called from the function that acquires the lock

I'm new to the boost threads library.
I have a situation where I acquire a scoped_lock in one function and need to wait on it in a callee.
The code is on the lines of:
class HavingMutex
{
public:
    ...
private:
    static boost::mutex m;
    static boost::condition_variable *c;

    static void a();
    static void b();
    static void d();
};

void HavingMutex::a()
{
    boost::mutex::scoped_lock lock(m);
    ...
    b(); // Need to pass lock here. Dunno how!
}

void HavingMutex::b(lock)
{
    if (some condition)
        d(lock); // Need to pass lock here. How?
}

void HavingMutex::d(// Need to get lock here)
{
    c->wait(lock); // Need to pass lock here (doesn't allow direct passing of mutex m)
}
Basically, in function d() I need to access the scoped lock I acquired in a() so that I can wait on it (some other thread will notify).
Any help appreciated. Thanks!
Have you tried a simple reference? According to the Boost 1.41 documentation at http://www.boost.org/doc/libs/1_41_0/doc/html/thread/synchronization.html it should be all that is required.
...
void HavingMutex::a()
{
    boost::mutex::scoped_lock lock(m);
    ...
    b(lock);
}

void HavingMutex::b(boost::mutex::scoped_lock &lock)
{
    if (some condition) // consider while (!some_condition) here
        d(lock);
}

void HavingMutex::d(boost::mutex::scoped_lock &lock)
{
    c->wait(lock);
}

void HavingMutex::notify()
{
    // set_condition;
    c->notify_one();
}
Also, the Boost examples put a while loop around the wait: a wait can sometimes be woken by the system (a spurious wakeup) without the condition actually being satisfied, so you should re-check the condition after every wakeup.
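That predicate loop looks like this with the standard library (a sketch of mine; Boost's condition_variable API is analogous):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;                 // the real condition, re-checked on wakeup

void wait_for_ready() {
    std::unique_lock<std::mutex> lock(m);
    while (!ready)                  // guards against spurious wakeups
        cv.wait(lock);              // atomically releases m while waiting
}

void set_ready() {
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;               // change the condition under the mutex
    }
    cv.notify_one();
}
```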
I also suggest you reconsider making all the methods and members static. Instead, make them normal members and create one global object.