I want to call methods of a class atomically from two threads.
I have a non-thread-safe class from a third-party library, but I need to use it like this:
Main thread:
Foo foo;
foo.method1(); // while Foo::method1 is executing, object foo is locked against other threads
Second thread:
foo.method2(); // waits while another method of foo is being called elsewhere
How can I use std::atomic in this situation? Or is there another solution (other than using a mutex and locking before and unlocking after every call to a method of foo)?
You cannot use std::atomic with user-defined types that are not trivially copyable, and the Standard only provides a limited set of specializations for certain fundamental types. Here you can find the list of all the standard specializations of std::atomic.
One approach you may want to consider is to write a general-purpose wrapper that lets you provide callable objects to be executed in a thread-safe manner on the wrapped object. Something along these lines was once presented by Herb Sutter in one of his talks:
#include <mutex>
#include <utility>

template<typename T>
class synchronized
{
public:
    template<typename... Args>
    synchronized(Args&&... args) : _obj{std::forward<Args>(args)...} { }

    template<typename F>
    void thread_safe_invoke(F&& f)
    {
        std::lock_guard<std::mutex> lock{_m};
        (std::forward<F>(f))(_obj);
    }

    // ...

private:
    T _obj;
    std::mutex _m;
};
This incurs some syntactic overhead in case you only want to call a single function in a thread-safe manner, but it also allows realizing transactions that must be performed atomically and may consist of more than one function call on the synchronized object.
This is how you could use it:
#include <iostream>
#include <string>

int main()
{
    synchronized<std::string> s{"Hello"};
    s.thread_safe_invoke([&] (auto& s)
    {
        std::cout << s.size() << " " << (s + s);
    });
}
For a deeper analysis and implementation guidance, you may refer to this article on the subject as well as this one.
Share a std::mutex between the different threads. Wherever you use foo, wrap the calls with a std::unique_lock.
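A minimal sketch of that idea, assuming the Foo class from the question and one mutex shared by every thread that touches foo:

#include <mutex>

Foo foo;              // the non-thread-safe third-party object from the question
std::mutex foo_mutex; // shared by every thread that uses foo

void main_thread_work() {
    std::unique_lock<std::mutex> lock(foo_mutex);
    foo.method1();    // other threads block on foo_mutex until this scope ends
}

void second_thread_work() {
    std::unique_lock<std::mutex> lock(foo_mutex);
    foo.method2();    // waits while another thread is inside a locked section
}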
I have a class that I need to make thread-safe. I'm trying to do this by taking a unique_lock at the top of every function in the class. The problem is that as soon as one function calls another function in this class, the locks seem to block each other, despite being in different functions. How can I stop this from happening?
An example is a class with get() and set() functions that both use a unique_lock at the start of each function. But in set() you want to call get() at some point, but without set()'s mutex locking get()'s mutex. However the mutex in get() should still work if called directly.
Making a class "thread safe" by adding a mutex to all operations is a code smell. Doing so with a recursive mutex is worse, because it implies a lack of control over and understanding of what is locked and which operations lock.
While it often permits some limited multithreaded access, it very often leads to deadlocks, contention, and performance hell down the line.
Lock-based concurrency does not compose safely except in limited cases. You can take two correct lock-based data structures/algorithms, connect them, and end up with incorrect/unsafe code.
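As a hedged illustration of that non-composability, consider two hypothetical Account objects that are each correctly locked on their own, but that deadlock when one thread transfers a→b while another transfers b→a:

#include <mutex>

struct Account {              // hypothetical: each instance is correctly locked in isolation
    std::mutex m;
    int balance = 0;
};

// Each lock_guard is correct on its own, but two threads running
// transfer(a, b) and transfer(b, a) concurrently can each grab one
// mutex and then wait forever for the other (the classic ABBA deadlock).
void transfer(Account& from, Account& to, int amount) {
    std::lock_guard<std::mutex> l1(from.m);
    std::lock_guard<std::mutex> l2(to.m);
    from.balance -= amount;
    to.balance += amount;
}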
Consider leaving your type single-threaded, implementing const methods that can safely be called concurrently without synchronization, and then using mixtures of immutable instances and externally synchronized ones.
#include <mutex>
#include <type_traits>
#include <utility>

template<class T>
struct mutex_guarded {
    template<class F>
    auto read( F&& f ) const {
        return access( std::forward<F>(f), *this );
    }
    template<class F>
    auto write( F&& f ) {
        return access( std::forward<F>(f), *this );
    }
    mutex_guarded() = default;
    template<class T0, class...Ts,
        std::enable_if_t<!std::is_same<mutex_guarded, std::decay_t<T0>>::value, bool> = true
    >
    mutex_guarded(T0&& t0, Ts&&... ts):
        t(std::forward<T0>(t0), std::forward<Ts>(ts)...)
    {}
private:
    template<class F, class Self>
    friend auto access( F&& f, Self& self ) {
        auto l = self.lock();
        return std::forward<F>(f)( self.t );
    }
    mutable std::mutex m;
    T t;
    auto lock() const { return std::unique_lock<std::mutex>(m); }
};
A similar wrapper can be written for a shared mutex (it has two lock overloads). access can be made public and variadic with a bit of work (to handle things like assignment).
Now calling your own methods is no problem. External use looks like:
mutex_guarded<std::ostream&> safe_cout(std::cout);
safe_cout.write([&](auto& cout){ cout<<"hello "<<"world\n"; });
You can also write async wrappers (that run tasks in a thread pool and return futures) and the like.
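For instance, here is a minimal sketch of such an async wrapper (the async_guarded and write_async names are made up), reusing the mutex_guarded type above and std::async in place of a real thread pool:

#include <future>

template<class T>
struct async_guarded {
    // Run f against the guarded object on another thread; the internal
    // mutex_guarded still serializes access, and the caller gets a future.
    // The async_guarded object must outlive the returned future.
    template<class F>
    auto write_async( F f ) {
        return std::async(std::launch::async,
                          [this, f]{ return guarded.write(f); });
    }
    mutex_guarded<T> guarded;
};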
std::recursive_mutex is what you want. It can be locked more than once by the same thread.
It would be clearer if you provided the code.
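In the get()/set() scenario from the question it would look roughly like this (a sketch with a made-up Counter class):

#include <mutex>

class Counter {
    mutable std::recursive_mutex m;
    int value = 0;
public:
    int get() const {
        std::lock_guard<std::recursive_mutex> lock(m); // works when called directly
        return value;
    }
    void set(int v) {
        std::lock_guard<std::recursive_mutex> lock(m);
        if (get() != v)   // locking again on the same thread is allowed
            value = v;
    }
};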
The problem arises from the fact that all the locks share the same mutex object. A recursive lock could, to some extent, solve the problem,
but bear in mind that getters do not necessarily have to be locked.
I confronted the same problem years ago, when a couple of threads were working together on a proxy object. The final solution was that I had to define more than one mutex. If possible, make use of Qt signals or Boost signals, either of which helps you come up with a better solution for passing data back and forth.
While using std::recursive_mutex will work, it might incur some overhead that can be avoided.
Instead, implement all logic in private methods that do not take the lock but assume the lock is held by the current thread of execution. Then provide the necessary public methods that take the lock and forward the call to the respective private methods. In the implementation of the private methods, you are free to call other private methods without worrying about locking the mutex multiple times.
#include <mutex>

struct Widget
{
    void foo()
    {
        std::unique_lock<std::mutex> lock{ m };
        foo_impl();
    }

    void bar()
    {
        std::unique_lock<std::mutex> lock{ m };
        bar_impl();
    }

private:
    std::mutex m;

    void foo_impl()
    {
        bar_impl(); // call other private methods without taking the lock again
    }

    void bar_impl()
    {
        /* ... */
    }
};
Note that this is just an alternative approach to (potentially) tackle your problem.
O wise interwebs
We have an impasse between two colleagues that we could use your help resolving in the proper C++ way. Basically we have a set of utility classes, two of which are a Mutex and SpinLock class which both have the following abridged interface:
class Mutex {
public:
    Mutex();
    ~Mutex();
    void Lock();
    void Unlock();
    // ...
};
Obviously this is similar to, but differently cased from, the BasicLockable concept used by std::lock_guard, so we want something similar (assume that the Mutex class cannot be changed in this example; we cannot add the BasicLockable interface to it). Also, not all of our compilers for supported platforms are fully C++11-featured, so we cannot just rely on vanilla C++11 support.
One school of thought is the following: implement a generic guard class, then inherit from it to create a lock-guard class:
template<class T, void (T::*EnterFn)(), void (T::*ExitFn)()>
class Guard
{
public: // copy-constructor deletion omitted for brevity
    Guard( T *lock ) : m_lock(lock) { (m_lock->*EnterFn)(); }
    ~Guard() { (m_lock->*ExitFn)(); }
private:
    T *m_lock;
};

template<class T>
class LockGuard : public Guard<T, &T::Lock, &T::Unlock>
{
public:
    LockGuard(T* lock) : Guard<T, &T::Lock, &T::Unlock>(lock) {}
};
The other school of thought is to just implement a simple lockguard:
template<class T>
class LockGuard {
    T* m_lockable;
public:
    LockGuard(T* lockable) : m_lockable(lockable) { m_lockable->Lock(); }
    ~LockGuard() { m_lockable->Unlock(); }
};
Which implementation would you choose and why? What is the most proper C++(03, 11, 14, 17) way of implementing it? Is there any inherent value to having a generic Guard class as described above?
I would not want to use method pointers.
Personally, I'd want to move towards the C++11 standard tools as much as possible. So I'd write an adapter.
#include <memory>
#include <mutex>
#include <utility>

template<class T>
struct lock_adapter {
    T* t = nullptr;
    void lock() { t->Lock(); }
    void unlock() { t->Unlock(); }
    lock_adapter( T& tin ) : t(std::addressof(tin)) {}
    // default some stuff if you like
};

template<class T>
struct adapted_unique_lock :
    private lock_adapter<T>,
    std::unique_lock< lock_adapter<T> >
{
    template<class...Args>
    adapted_unique_lock(T& t, Args&&... args):
        lock_adapter<T>(t),
        std::unique_lock< lock_adapter<T> >( *this, std::forward<Args>(args)... )
    {}
    adapted_unique_lock(adapted_unique_lock&&) = delete; // sadly
    friend void swap( adapted_unique_lock&, adapted_unique_lock& ) = delete; // ditto
};
Now adapted_unique_lock exposes a restricted subset of the functionality of std::unique_lock.
It cannot be moved, as the unique_lock holds a pointer to this inside its implementation and does not reseat it.
Note that the richness of the entire unique_lock constructor set is available.
Functions that return adapted unique locks must store their return values in something like auto&& references until end of scope, and you cannot return them through chains until C++17.
But any code using adapted_unique_lock can be swapped to use unique_lock once T has been changed to support .lock() and .unlock(). This moves your code base towards being more standard C++11, rather than bespoke.
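Usage would then look something like this (a sketch, assuming the Mutex class from the question):

Mutex m;

void critical_section() {
    adapted_unique_lock<Mutex> lock(m); // calls m.Lock() through the adapter
    // ... work that must hold the lock ...
}                                       // m.Unlock() runs in the destructor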
I'll answer the question in your title since nobody else answered it.
According to the docs, lock_guard works on any type that "meets the BasicLockable requirements". BasicLockable only requires two methods, lock() and unlock().
In order to make lock_guard work with a custom library, you need to either add lock() and unlock() methods to the library's mutex class, or wrap it in another class that has lock() and unlock() methods.
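A minimal sketch of the wrapper option, assuming the Mutex class shown in the question:

#include <mutex>

class MutexWrapper {            // adapts Mutex to the BasicLockable requirements
    Mutex& m;
public:
    explicit MutexWrapper(Mutex& mutex) : m(mutex) {}
    void lock()   { m.Lock(); }
    void unlock() { m.Unlock(); }
};

// Usage:
// Mutex raw;
// MutexWrapper wrapped(raw);
// std::lock_guard<MutexWrapper> guard(wrapped);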
You should use these: std::mutex, std::shared_lock, std::unique_lock,
std::timed_mutex, std::shared_mutex, std::recursive_mutex, std::shared_timed_mutex, std::recursive_timed_mutex, std::lock_guard, if you actually have a C++11 compiler. Otherwise it is more complicated, and the c++11 tag should be removed from the question.
You can't implement a spinlock or mutex by pure C/C++ means; you would need to add assembler or intrinsic code, making it non-portable unless you implement it for each platform, and with many x86-64 C++11-and-later compilers you can't use inline assembler.
The main problem you would run into if you use inherent locking logic is that objects behind mutexes or spinlocks are uncopyable. As soon as you copy such an object, or copy the guarded variable, it stops being locked. Objects that implement mutex mechanics are uncopyable too.
I understand I can use a mutex member in a class and lock it inside each method to prevent data races in a multithreading environment. However, such a method might cause a deadlock if there are nested calls in the class's methods, like add_one() and add_two() in the class below. Using a different mutex for each method is a workaround, but are there more principled and elegant ways to prevent deadlock in the case of nested calls?
class AddClass {
public:
    AddClass& operator=(AddClass const&) = delete; // disable copy assignment
    AddClass(int val) : base(val) {}
    int add_one() { return ++base; }
    int add_two() {
        add_one();
        add_one();
        return base;
    }
private:
    int base;
};
There is std::recursive_mutex exactly for this purpose.
Another approach that avoids overhead incurred by recursive mutex is to separate public synchronized interface from private non-synchronized implementation:
#include <mutex>

class AddClass {
public:
    AddClass& operator=(AddClass const&) = delete; // disable copy assignment
    AddClass(int val) : base(val) {}
    int add_one() {
        std::lock_guard<std::mutex> guard{mutex};
        return add_one_impl();
    }
    int add_two() {
        std::lock_guard<std::mutex> guard{mutex};
        return add_two_impl();
    }
private:
    int base;
    std::mutex mutex;
    int add_one_impl() {
        return ++base;
    }
    int add_two_impl() {
        add_one_impl();
        add_one_impl();
        return base;
    }
};
Note, however, that this is not always possible. For example if you have a method that accepts a callback and calls it while holding the lock, the callback might try to call some other public method of your class, and you are again faced with the double locking attempt.
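As a sketch of that failure mode, imagine adding a hypothetical method like the following to the AddClass above; the lock is held while the callback runs, so a callback that calls add_one() would lock the same std::mutex twice and deadlock:

#include <functional>

// Hypothetical addition to the AddClass above:
void visit(const std::function<void(int)>& callback) {
    std::lock_guard<std::mutex> guard{mutex};
    callback(base); // if the callback calls add_one(), the same (non-recursive)
                    // mutex is locked again on this thread -> deadlock
}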
A recursive mutex is a lockable object, just like mutex, but allows the same thread to acquire multiple levels of ownership over the mutex object.
When and how to use a recursive mutex: see the link below.
Recursive Mutex
Note: Recursive and non-recursive mutexes have different use cases.
Hope it helps.
The general solution for this is called a reentrant mutex:
While any attempt to perform the "lock" operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.
There's one in the C++11 standard library: http://en.cppreference.com/w/cpp/thread/recursive_mutex
Implement the public interface with functions that lock your locks and call private member functions that do the work. The private functions can call each other without the overhead of re-locking mutexes and without resorting to recursive mutexes, which are regarded by many as a sign of design failure.
I have a class that I want to use in different threads and I think I may be able to use std::atomic this way:
class A
{
    int x;
public:
    A()
    {
        x = 0;
    }
    void Add()
    {
        x++;
    }
    void Sub()
    {
        x--;
    }
};
and in my code:
std::atomic<A> a;
and in a different thread:
a.Add();
and
a.Sub();
but I am getting an error that a.Add() is not known. How can I solve this?
Is there any better way to do this?
Please note that this is just an example; what I want is to make sure that access to class A is thread-safe, so I cannot use
std::atomic<int> x;
How can I make a class thread-safe using std::atomic ?
You need to make the x attribute atomic, not your whole class, as follows:
#include <atomic>

class A
{
    std::atomic<int> x;
public:
    A() {
        x = 0;
    }
    void Add() {
        x++;
    }
    void Sub() {
        x--;
    }
};
The error you get in your original code is completely normal: there is no std::atomic<A>::Add method (see here) unless you provide a specialization of std::atomic for A.
Regarding your edit: you cannot magically make your class A thread-safe by using it as a template argument of std::atomic. To make it thread-safe, you can make its attributes atomic (as suggested above, and provided the standard library gives a specialization for them), or use mutexes to lock your resources yourself. See the mutex header. For example:
#include <atomic>
#include <mutex>
#include <vector>

class A
{
    std::atomic<int> x;
    std::vector<int> v;
    std::mutex mtx;
public:
    void Add() {
        x++;
    }
    void Sub() {
        x--;
    }
    /* Example method to protect a vector */
    void complexMethod() {
        mtx.lock();
        // Do whatever complex operation you need here
        // - access elements
        // - erase elements
        // - etc ...
        mtx.unlock();
    }
    /*
    ** Another example using std::lock_guard, as suggested in comments,
    ** if you don't need to manipulate the mutex manually
    */
    void complexMethod2() {
        std::lock_guard<std::mutex> guard(mtx);
        // access, erase, add elements ...
    }
};
Declare the class member x as atomic, then you don't have to declare the object as atomic:
class A
{
std::atomic<int> x;
};
The . operator can be used on an object to call its class's member function, not some other class's member function (unless you explicitly write the code that way).
std::atomic<A> a;
a.Add(); // Here, a does not know what Add() is (a member function of the type parameter).
         // It tries to call the Add() method of its own class, i.e. std::atomic,
         // but std::atomic has no method named Add or Sub.
As the answer by #ivanw mentions, make std::atomic<int> a member of your class instead and then use it.
Here is another example:
template <typename T> class A
{};
class B { public: void hello() { std::cout << "HELLO!!!"; } };
A<B> a;
a.hello(); // This statement tries to call a's hello member function,
           // but the type of a, which is A<B>, does not have such a function.
           // Hence it is an error.
I think the problem with the answers above is that they don't explain what I think is, at a minimum, an ambiguity in the question, and most likely, a common threaded development fallacy.
You can't make an object "atomic" because the interval between two operations (first "read x" and then later "write x") will cause a race with other uses. If you think you need an "atomic" object, then you need to carefully design the API and member functions to expose how to begin and commit updates to the object.
If all you mean by "atomic" is "the object doesn't corrupt its internal state," then you can achieve this through std::atomic<> for single plain-old-data types that have no invariant between them (a doesn't depend on b) but you need a lock of some sort for any dependent rules you need to enforce.
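For instance (a sketch with made-up types): two std::atomic members each avoid corruption on their own, but an invariant that relates them, such as "b is always twice a", still needs a mutex, because another thread can observe the object between the two stores:

#include <atomic>
#include <mutex>
#include <utility>

struct AtomicPair {
    std::atomic<int> a{0};
    std::atomic<int> b{0};           // intended invariant: b == 2 * a

    // Not enough: a reader between the two stores sees the invariant broken.
    void set_unsafe(int v) { a = v; b = 2 * v; }
};

struct LockedPair {
    std::mutex m;
    int a = 0, b = 0;                // intended invariant: b == 2 * a

    // The mutex makes the whole update (and the paired read) atomic.
    void set(int v) { std::lock_guard<std::mutex> lock(m); a = v; b = 2 * v; }
    std::pair<int, int> get() { std::lock_guard<std::mutex> lock(m); return {a, b}; }
};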
I have the following problem: I have a class that needs to be protected from simultaneous access from different threads. The class has two methods, lock() and unlock(), implemented using g_mutex_lock / g_mutex_unlock with a per-object GMutex. A locking method then looks like this:
void Object::method()
{
    lock();
    // do stuff modifying the object
    unlock();
}
Now let's assume that I have two methods of this type, method1() and method2(), which I call one after another:
object.method1();
// but what if some other thread modifies object in between
object.method2();
I tried locking the object before this block and unlocking it again afterwards, but in this case
there is a deadlock even with a single thread, because the GMutex doesn't know that it has already been locked by the same thread. A solution would be to modify the methods to accept an additional bool indicating whether the object is already locked. But is there a more elegant concept? Or is this a shortcoming of the overall design?
The recursive mutex solution mentioned in other responses and comments will work just fine, but in my experience it leads to code that is harder to maintain, because once you switch to a recursive mutex it is all too easy to abuse it and lock all over the place.
Instead, I prefer to reorganize the code so that locking once is sufficient. In your example I would define the class as follows:
class Object {
public:
    void method1() {
        GMutexLock scopeLock(&lock);
        method1_locked();
    }

    void method2() {
        GMutexLock scopeLock(&lock);
        method2_locked();
    }

    void method1_and_2() {
        GMutexLock scopeLock(&lock);
        method1_locked();
        method2_locked();
    }

private:
    void method1_locked();
    void method2_locked();

    GMutex lock;
};
The "locked" versions of your methods are private, so they are only accessible from inside the class. The class takes responsibility in never calling these without the lock taken.
From the outside you have three choices of methods to call, depending on which of the methods you want to run.
Note that another improvement I've made is not to use explicit locking primitives, but instead to let a scope guard lock and unlock. This is what GMutexLock does. An example implementation of this class is below:
class GMutexLock {
private:
    GMutex* m;
    GMutexLock(const GMutexLock &mlock); // not allowed
    GMutexLock &operator=(const GMutexLock &); // not allowed
public:
    GMutexLock(GMutex* mutex) {
        m = mutex;
        g_mutex_lock(m);
    }
    ~GMutexLock() {
        g_mutex_unlock(m);
    }
};
Look up "recursive mutex" or "reentrant mutex" (versus the non-recursive mutex you're using now). These enable what you want. Some folks are not fans of recursive mutexes and feel they enable messy design.
Note that a recursive mutex cannot be locked on one thread and unlocked on another.
I personally would never use recursive mutexes (especially as such).
I would write private functions that don't lock the mutex and take the lock around them in the public functions' implementations.