Avoid deadlock in a single thread - C++

I have the following problem: I have a class that needs to be protected from simultaneous access from different threads. The class has two methods, lock() and unlock(), implemented with g_mutex_lock / g_mutex_unlock on a per-object GMutex. A locking method then looks like this:
void Object::method()
{
    lock();
    // do stuff modifying the object
    unlock();
}
Now let's assume that I have two methods of this type, method1() and method2(), which I call one after another:
object.method1();
// but what if some other thread modifies object in between
object.method2();
I tried locking the object before this block and unlocking it afterwards, but in this case
there is a deadlock even with a single thread, because the GMutex doesn't know that it has already been locked by the same thread. A solution would be to modify the methods to accept an additional bool that says whether the object is already locked. But is there a more elegant concept? Or is this a shortcoming of the overall design?
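For illustration, this is the pattern that deadlocks (lock()/unlock() wrap g_mutex_lock/g_mutex_unlock on the same non-recursive GMutex):
object.lock();      // acquires the per-object GMutex
object.method1();   // method1() calls lock() again on the same non-recursive
                    // GMutex -> the single thread blocks on itself
object.method2();
object.unlock();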

The recursive mutex solution mentioned in other responses and comments will work just fine, but in my experience it leads to code that is harder to maintain, because once you switch to a recursive mutex it is all too easy to abuse it and lock all over the place.
Instead, I prefer to reorganize the code so that locking once is sufficient. In your example I would define the class as follows:
class Object {
public:
    void method1() {
        GMutexLock scopeLock(&lock);
        method1_locked();
    }
    void method2() {
        GMutexLock scopeLock(&lock);
        method2_locked();
    }
    void method1_and_2() {
        GMutexLock scopeLock(&lock);
        method1_locked();
        method2_locked();
    }
private:
    void method1_locked();
    void method2_locked();
    GMutex lock;
};
The "locked" versions of your methods are private, so they are only accessible from inside the class. The class takes responsibility in never calling these without the lock taken.
From the outside you have three choices of methods to call, depending on which of the methods you want to run.
Note that another improvement I've made is to not use explicit locking primitives but instead use the scope indirectly to lock and unlock. This is what GMutexLock does. An example implementation for this class is below:
class GMutexLock {
private:
    GMutex* m;
    GMutexLock(const GMutexLock& mlock);       // not allowed
    GMutexLock& operator=(const GMutexLock&);  // not allowed
public:
    GMutexLock(GMutex* mutex) {
        m = mutex;
        g_mutex_lock(m);
    }
    ~GMutexLock() {
        g_mutex_unlock(m);
    }
};

Look up "recursive mutex" or "reentrant mutex" (versus the non-recursive mutex you're using now). These enable what you want. Some folks are not fans of recursive mutexes and feel they enable messy design.
Note that a recursive mutex cannot be locked on one thread and unlocked on another.
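A minimal sketch of that approach with GLib's recursive mutex type, GRecMutex (this assumes GLib 2.32 or newer, since the question uses GMutex; callBoth is an illustrative name):
// Sketch: the class keeps a GRecMutex member instead of a GMutex,
// initialised once with g_rec_mutex_init(&lock) and released with g_rec_mutex_clear(&lock).

void Object::method1()
{
    g_rec_mutex_lock(&lock);
    // ... modify the object ...
    g_rec_mutex_unlock(&lock);
}

void Object::callBoth()
{
    g_rec_mutex_lock(&lock);   // outer lock held across both calls
    method1();                 // re-locking on the same thread succeeds
    method2();
    g_rec_mutex_unlock(&lock);
}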

I personally would never use recursive mutexes (especially as such).
I would write private functions that don't lock the mutex, and take the lock around them in the public functions' implementations.

Related

Calling a static method from an object inside a thread

I have a static function inside a class:
class ABC {
public:
    static void calculate()
    {
        //...
    }
};
I have another class:
class DEF
{
public:
    void doCalculation()
    {
        ABC::calculate();
    }
};
There is one DEF object per thread, that is, n threads can call calculate() at the same time.
In this case, should I lock the call to ABC::calculate()?
Please think carefully: locks should protect data, not code. If you teach yourself to put locks around a function call, you'll be debugging forever.
Solutions:
if calculate() only accesses static (unchanging) data, there is no need for synchronization at all.
if calculate() contains a simple increment of a shared variable (int/float), use std::atomic<> (see the sketch after this list). Note that atomics are far faster than any contended locking.
if calculate() uses multiple variables and/or larger structures, consider using libguarded.
if calculate() really is the only method accessing the data, use a std::lock_guard<>. At least you get exception safety for free.
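A minimal sketch of the std::atomic<> case, assuming calculate() only needs to bump one shared counter (the counter name is illustrative):
#include <atomic>

class ABC {
public:
    static void calculate()
    {
        ++counter;   // atomic increment: safe from any number of threads, no lock needed
    }
private:
    static std::atomic<int> counter;
};

std::atomic<int> ABC::counter{0};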

How to use unique_lock in a class that calls other functions that use unique_lock?

I have a class that I need to make thread-safe. I'm trying to do this by taking a unique_lock at the top of every function in the class. The problem is that as soon as one function calls another function (in this class), the lock in the inner function blocks on the mutex already held by the outer one, even though they are in different functions. How can I stop this from happening?
An example is a class with get() and set() functions that both take a unique_lock at the start. But in set() you want to call get() at some point, without set()'s lock deadlocking against get()'s lock on the same mutex. However, the lock in get() should still work if get() is called directly.
Making a class "thread safe" by adding a mutex to all operations is a code smell. Doing so with a recursive mutex is worse, because it implies a lack of control over and understanding of what is locked and which operations lock.
It often permits some limited multithreaded access, but it very often leads to deadlocks, contention and performance problems down the road.
Lock-based concurrency does not compose safely except in limited cases. You can take two correct lock-based data structures/algorithms, connect them, and end up with incorrect/unsafe code.
Consider leaving your type single-threaded, implementing const methods that can be called concurrently without synchronization, and then using mixtures of immutable instances and externally synchronized ones.
#include <mutex>
#include <type_traits>
#include <utility>

template<class T>
struct mutex_guarded {
    template<class F>
    auto read(F&& f) const {
        return access(std::forward<F>(f), *this);
    }
    template<class F>
    auto write(F&& f) {
        return access(std::forward<F>(f), *this);
    }
    mutex_guarded() = default;
    template<class T0, class... Ts,
        std::enable_if_t<!std::is_same<mutex_guarded, std::decay_t<T0>>::value, bool> = true
    >
    mutex_guarded(T0&& t0, Ts&&... ts) :
        t(std::forward<T0>(t0), std::forward<Ts>(ts)...)
    {}
private:
    template<class F, class Self>
    friend auto access(F&& f, Self& self) {
        auto l = self.lock();
        return std::forward<F>(f)(self.t);
    }
    mutable std::mutex m;
    T t;
    auto lock() const { return std::unique_lock<std::mutex>(m); }
};
and similar for a shared mutex (it has two lock overloads). access can be made public and variadic with a bit of work (to handle things like assignment).
Now calling your own methods is no problem. External use looks like:
mutex_guarded<std::ostream&> safe_cout(std::cout);
safe_cout.write([&](auto& cout){ cout<<"hello "<<"world\n"; });
you can also write async wrappers (that do tasks in a thread pool and return futures) and the like.
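For example, a minimal sketch of such an async wrapper as a free function over the mutex_guarded above (write_async is an illustrative name, and the guarded object must outlive the returned future):
#include <future>

template<class T, class F>
auto write_async(mutex_guarded<T>& guarded, F f)
{
    // Run the functor under the lock on another thread; the result (if any)
    // becomes available through the returned std::future.
    return std::async(std::launch::async,
                      [&guarded, f]() mutable { return guarded.write(f); });
}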
std::recursive_mutex is what you want. It can be locked more than once by the same thread.
It would be easier to be specific if you provided the code.
The problem arises from the fact that all the locks share the same mutex object. A recursive lock could, to some extent, solve the problem, but bear in mind that getters do not necessarily have to be locked.
I confronted the same problem years ago: there were a couple of threads working together on a proxy object, and the final solution was that I had to define more than one mutex. If it is possible, make use of Qt signals or Boost signals, each of which helps you come up with a better solution for passing data back and forth.
While using std::recursive_mutex will work, it might incur some overhead that can be avoided.
Instead, implement all logic in private methods that do not take the lock but assume the lock is held by the current thread of execution. Then provide the necessary public methods that take the lock and forward the call to the respective private methods. In the implementation of the private methods, you are free to call other private methods without worrying about locking the mutex multiple times.
#include <mutex>

struct Widget
{
    void foo()
    {
        std::unique_lock<std::mutex> lock{ m };
        foo_impl();
    }
    void bar()
    {
        std::unique_lock<std::mutex> lock{ m };
        bar_impl();
    }
private:
    std::mutex m;
    void foo_impl()
    {
        bar_impl(); // Call other private methods without re-locking
    }
    void bar_impl()
    {
        /* ... */
    }
};
Note that this is just an alternative approach to (potentially) tackle your problem.

Avoiding deadlock in case of nested calls when designing thread safe class

I understand I can use a mutex member in a class and lock it inside each method to prevent data races in a multithreading environment. However, such methods might deadlock if there are nested calls between the class's methods, like add_one() and add_two() in the class below. Using a different mutex for each method is a workaround. But is there a more principled and elegant way to prevent deadlock in the case of nested calls?
class AddClass {
public:
    AddClass& operator=(AddClass const&) = delete; // disable copy assignment
    AddClass(int val) : base(val) {}
    int add_one() { return ++base; }
    int add_two() {
        add_one();
        add_one();
        return base;
    }
private:
    int base;
};
There is std::recursive_mutex exactly for this purpose.
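A minimal sketch of that approach applied to the class above (only the mutex type changes compared to a non-recursive version):
#include <mutex>

class AddClass {
public:
    AddClass(int val) : base(val) {}
    int add_one() {
        std::lock_guard<std::recursive_mutex> guard{mutex};
        return ++base;
    }
    int add_two() {
        std::lock_guard<std::recursive_mutex> guard{mutex};
        add_one();   // re-locking the recursive mutex on the same thread is fine
        add_one();
        return base;
    }
private:
    int base;
    std::recursive_mutex mutex;
};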
Another approach, which avoids the overhead incurred by a recursive mutex, is to separate the public synchronized interface from the private non-synchronized implementation:
#include <mutex>

class AddClass {
public:
    AddClass& operator=(AddClass const&) = delete; // disable copy assignment
    AddClass(int val) : base(val) {}
    int add_one() {
        std::lock_guard<std::mutex> guard{mutex};
        return add_one_impl();
    }
    int add_two() {
        std::lock_guard<std::mutex> guard{mutex};
        return add_two_impl();
    }
private:
    int base;
    std::mutex mutex;
    int add_one_impl() {
        return ++base;
    }
    int add_two_impl() {
        add_one_impl();
        add_one_impl();
        return base;
    }
};
Note, however, that this is not always possible. For example if you have a method that accepts a callback and calls it while holding the lock, the callback might try to call some other public method of your class, and you are again faced with the double locking attempt.
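A minimal sketch of that callback pitfall (the class and method names are illustrative):
#include <functional>
#include <mutex>

class Worker {
public:
    void step() {
        std::lock_guard<std::mutex> guard{mutex};   // second lock attempt...
        ++count;
    }
    void run(std::function<void()> callback) {
        std::lock_guard<std::mutex> guard{mutex};   // ...while this lock is already held
        callback();                                 // deadlocks (formally undefined
    }                                               // behaviour) if the callback calls step()
private:
    int count = 0;
    std::mutex mutex;
};

// Worker w;
// w.run([&] { w.step(); });   // the nested lock on the same std::mutex never succeeds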
A recursive mutex is a lockable object, just like a plain mutex, but it allows the same thread to acquire multiple levels of ownership over the mutex object.
When and how to Use Recursive Mutex-- Link Below
Recursive Mutex
Note: Recursive and non-recursive mutexes have different use cases.
Hope it helps.
The general solution for this is called a reentrant mutex:
While any attempt to perform the "lock" operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.
There's one in the C++11 standard library: http://en.cppreference.com/w/cpp/thread/recursive_mutex
Implement the public interface with functions that lock your locks and call private member functions that do the work. The private functions can call each other without the overhead of re-locking mutexes and without resorting to recursive mutexes, which are regarded by many as a sign of design failure.

mutable thread vs non-const method

I have a class where I want to call a method in a thread.
My methods don't modify the class attributes, so I expect them to be const, but they instantiate a thread, so they can't be.
What is the best choice: making the std::thread member mutable, removing the const because of the thread, or (edit) using detached threads?
class ValidationSound
{
public:
    [... CTOR DTOR ...]
    void emitSuccessSoundAsync() const
    {
        if (m_thread.joinable())
        {
            m_thread.join();
        }
        m_thread = std::thread(&ValidationSound::emitSound, this, std::cref(m_bipsParameters.first));
    }
    void emitFailureSoundAsync() const
    {
        if (m_thread.joinable())
        {
            m_thread.join();
        }
        m_thread = std::thread(&ValidationSound::emitSound, this, std::cref(m_bipsParameters.second));
    }
    void emitSound(const BipParameters& bipParam) const
    {
        // BIP BIP THE BUZZER
    }
private:
    std::pair<BipParameters, BipParameters> m_bipsParameters;
    mutable std::thread m_thread;
};
My methods don't modify the class attributes, so I expect them to be const, but they instantiate a thread, so they can't be.
But your methods do modify class attributes. Your std::thread is a class attribute and once any of your methods are called, that attribute will change (begin running) and continue to change state even after the methods have exited.
What is the best choice: making the std::thread member mutable, removing the const because of the thread, or (edit) using detached threads?
In this case, I'd recommend removing the const from method signatures. Const just confuses the interface and could fool users into thinking it's thread-safe. If the methods were mutex protected and blocked for the duration of the thread execution time, you could make a stronger argument for mutable and const, but given your current implementation I wouldn't.
Edit: Said another way, imagine you've gone ahead and created a single const instance of your ValidationSound class. It would be very easy for a user of this class to call your instance in a way that creates many threads all playing different sounds, interleaved at different times. Is that how you'd envision a const instance of this class behaving? It's certainly not how I'd envision it, looking purely at the interface.

Atomic class object methods usage

I want to call methods of some class atomically from two threads.
I have a non-thread-safe class from a third-party library, but I need to use it like this:
Main thread:
Foo foo;
foo.method1(); // while Foo::method1 is running, object foo is locked for other threads
Second thread:
foo.method2(); // waits while any other method of foo is being called elsewhere
How can I use std::atomic in this situation? Or maybe there is another solution (other than using a mutex and locking before / unlocking after each call to foo's methods)?
You cannot use std::atomic with user-defined types that are not trivially copyable, and the Standard only provides a limited set of specializations for certain fundamental types. Here you can find the list of all the standard specializations of std::atomic.
One approach you may want to consider is to write a general-purpose wrapper that lets you provide callable objects to be executed in a thread-safe manner on the wrapped object. Something along these lines was once presented by Herb Sutter in one of his talks:
#include <mutex>
#include <utility>

template<typename T>
class synchronized
{
public:
    template<typename... Args>
    synchronized(Args&&... args) : _obj{std::forward<Args>(args)...} { }

    template<typename F>
    void thread_safe_invoke(F&& f)
    {
        std::lock_guard<std::mutex> lock{_m};
        (std::forward<F>(f))(_obj);
    }
    // ...
private:
    T _obj;
    std::mutex _m;
};
This incurs some syntactic overhead in case you only want to call a single function in a thread-safe manner, but it also allows realizing transactions that must be performed atomically and may consist of more than one function call on the synchronized object.
This is how you could use it:
#include <iostream>
#include <string>

int main()
{
    synchronized<std::string> s{"Hello"};
    s.thread_safe_invoke([&](auto& s)
    {
        std::cout << s.size() << " " << (s + s);
    });
}
For a deeper analysis and implementation guidance, you may refer to this article on the subject as well as this one.
Share a std::mutex between the different threads. Wherever you use foo, wrap the calls in a std::unique_lock.
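A minimal sketch of that approach (Foo and its methods are from the question; the mutex name is illustrative):
#include <mutex>

Foo foo;
std::mutex foo_mutex;   // shared by every thread that touches foo

// Main thread:
{
    std::unique_lock<std::mutex> lock(foo_mutex);
    foo.method1();
}

// Second thread:
{
    std::unique_lock<std::mutex> lock(foo_mutex);   // blocks until the other thread releases foo_mutex
    foo.method2();
}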