Most of the time I see some variant of this kind of implementation for a thread-safe getter method:
class A
{
public:
inline Resource getResource() const
{
Lock lock(m_mutex);
return m_resource;
}
private:
Resource m_resource;
Mutex m_mutex;
};
Assuming that the class Resource can't be copied, or that the copy is too expensive, is there a way in C++ to avoid returning a copy while still using a RAII-style locking mechanism?
I haven't tried it, but something like this should work:
#include <cstdio>
#include <mutex>
using namespace std;
typedef std::mutex Mutex;
typedef std::unique_lock<Mutex> Lock;
struct Resource {
void doSomething() {printf("Resource::doSomething()\n"); }
};
template<typename MutexType, typename ResourceType>
class LockedResource
{
public:
LockedResource(MutexType& mutex, ResourceType& resource) : m_mutexLocker(mutex), m_pResource(&resource) {}
LockedResource(MutexType& mutex, ResourceType* resource) : m_mutexLocker(mutex), m_pResource(resource) {}
LockedResource(LockedResource&&) = default;
LockedResource(const LockedResource&) = delete;
LockedResource& operator=(const LockedResource&) = delete;
ResourceType* operator->()
{
return m_pResource;
}
private:
Lock m_mutexLocker;
ResourceType* m_pResource;
};
class A
{
public:
inline LockedResource<Mutex, Resource> getResource()
{
return LockedResource<Mutex, Resource>(m_mutex, &m_resource);
}
private:
Resource m_resource;
Mutex m_mutex;
};
int main()
{
A a;
{ //Lock scope for multiple calls
auto r = a.getResource();
r->doSomething();
r->doSomething();
// The next line will block forever as the lock is still in use
//auto dead = a.getResource();
} // r will be destroyed here and unlock
a.getResource()->doSomething();
return 0;
}
But be careful: the lifetime of the accessed Resource depends on the lifetime of its owner (A).
Example on Godbolt: Link
P1144 reduces the generated assembly quite nicely so that you can see where the lock is locked and unlocked.
How about returning an accessor object that provides a thread-safe interface to the Resource class and/or keeps some lock?
class ResourceGuard {
private:
Resource *resource;
public:
explicit ResourceGuard(Resource *r) : resource(r) {}
void thread_safe_method() {
resource->lock_and_do_stuff();
}
};
This will be cleaned up in a RAII fashion, releasing any locks if needed. If you need locking, it should be done in the Resource class.
Of course you have to take care of the lifespan of Resource. A very simple way would be to use a std::shared_ptr. A weak_ptr might fit as well.
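A minimal sketch of that idea, assuming Resource does its own locking internally (the member names here are illustrative, not taken from the question):
#include <memory>
#include <mutex>

struct Resource {
    std::mutex m;
    void lock_and_do_stuff() { std::lock_guard<std::mutex> lock(m); /* ... */ }
};

class ResourceGuard {
public:
    explicit ResourceGuard(std::shared_ptr<Resource> r) : resource(std::move(r)) {}
    void thread_safe_method() { resource->lock_and_do_stuff(); }
private:
    std::shared_ptr<Resource> resource; // shared ownership keeps Resource alive while the guard exists
};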
Another way to achieve the same thing. This is the implementation of the mutable version; the const accessor is just as trivial (a sketch of it follows the example below).
#include <iostream>
#include <mutex>
struct Resource
{
};
struct locked_resource_view
{
locked_resource_view(std::unique_lock<std::mutex> lck, Resource& r)
: _lock(std::move(lck))
, _resource(r)
{}
void unlock() {
_lock.unlock();
}
Resource& get() {
return _resource;
}
private:
std::unique_lock<std::mutex> _lock;
Resource& _resource;
};
class A
{
public:
inline locked_resource_view getResource()
{
return {
std::unique_lock<std::mutex>(m_mutex),
m_resource
};
}
private:
Resource m_resource;
mutable std::mutex m_mutex;
};
using namespace std;
auto main() -> int
{
A a;
auto r = a.getResource();
// do something with r.get()
return 0;
}
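For completeness, the const accessor mentioned above could look like this, a sketch along the same lines (it relies on m_mutex being mutable, which it already is in A):
struct locked_const_resource_view
{
    locked_const_resource_view(std::unique_lock<std::mutex> lck, const Resource& r)
        : _lock(std::move(lck))
        , _resource(r)
    {}
    const Resource& get() const {
        return _resource;
    }
private:
    std::unique_lock<std::mutex> _lock;
    const Resource& _resource;
};

// in class A:
// inline locked_const_resource_view getResource() const
// {
//     return { std::unique_lock<std::mutex>(m_mutex), m_resource };
// }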
We have a lot of legacy C++98 code that we are slowly upgrading to C++11, and we have a RAII implementation for a custom Mutex class:
class RaiiMutex
{
public:
RaiiMutex() = delete;
RaiiMutex(const RaiiMutex&) = delete;
RaiiMutex& operator= (const RaiiMutex&) = delete;
RaiiMutex(Mutex& mutex) : mMutex(mutex)
{
mMutex.Lock();
}
~RaiiMutex()
{
mMutex.Unlock();
}
private:
Mutex& mMutex;
};
Is it OK to make a std::unique_ptr of this object? We would still benefit from automatically calling the destructor when the object dies (thus unlocking), and would also gain the ability to unlock before non-critical operations.
Example legacy code:
RaiiMutex raiiMutex(mutex);
if (!condition)
{
loggingfunction();
return false;
}
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
if (!condition)
{
raiiMutex = nullptr;
loggingfunction(); // log without locking the mutex
return false;
}
It would also remove the otherwise unnecessary braces:
Example legacy code:
Data data;
{
RaiiMutex raiiMutex(mutex);
data = mQueue.front();
mQueue.pop_front();
}
data.foo();
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
Data data = mQueue.front();
mQueue.pop_front();
raiiMutex = nullptr;
data.foo();
Does it make sense?
Edit:
Cannot use unique_lock due to custom Mutex class:
class Mutex
{
public:
Mutex();
virtual ~Mutex();
void Unlock(bool yield = false);
void Lock();
bool TryLock();
bool TimedLock(uint64 pWaitIntervalUs);
private:
sem_t mMutex;
};
Add lock(), unlock() and try_lock() methods to Mutex; they just forward to the existing Lock(), Unlock() and TryLock() methods.
Then use std::unique_lock<Mutex>.
If you cannot modify Mutex, wrap it.
struct SaneMutex: Mutex {
void lock() { Lock(); }
void unlock() { Unlock(); }
bool try_lock() { return TryLock(); }
using Mutex::Mutex;
};
A SaneMutex replaces a Mutex everywhere you can.
Where you can't, include an adapter:
struct MutexRef {
void lock() { m.Lock(); }
void unlock() { m.Unlock(); }
bool try_lock() { return m.TryLock(); }
MutexRef( Mutex& m_in ):m(m_in) {}
private:
Mutex& m;
};
Both of these satisfy the C++ standard Lockable requirements. If you want TimedLockable as well, you have to write a bit of glue code (a sketch appears below).
auto l = std::unique_lock<MutexRef>( mref );
or
auto l = std::unique_lock<SaneMutex>( m );
You now have std::lock, std::unique_lock and std::scoped_lock support.
And your code is one step closer to using std::mutex.
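For the timed-lockable glue mentioned above, a minimal sketch might look like this (assuming TimedLock takes the wait interval in microseconds, as its parameter name suggests, and that uint64 is the project's existing typedef):
#include <chrono>

struct TimedMutexRef {
    TimedMutexRef(Mutex& m_in) : m(m_in) {}
    void lock() { m.Lock(); }
    void unlock() { m.Unlock(); }
    bool try_lock() { return m.TryLock(); }
    template<class Rep, class Period>
    bool try_lock_for(const std::chrono::duration<Rep, Period>& d) {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(d);
        return m.TimedLock(static_cast<uint64>(us.count()));
    }
private:
    Mutex& m;
};

// std::unique_lock<TimedMutexRef> can then use try_lock_for(); try_lock_until() could be forwarded similarly.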
As for your unique_ptr solution, I wouldn't add the overhead of a heap allocation every time you casually lock a mutex.
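For comparison, the second example from the question then needs no heap allocation at all; a sketch, assuming mutex is now a SaneMutex:
std::unique_lock<SaneMutex> lock(mutex);
Data data = mQueue.front();
mQueue.pop_front();
lock.unlock(); // unlock before the non-critical work
data.foo();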
I am trying to write a thread-safe datastore class.
An object of this class is shared between many threads in Generator and Consumer, where the class members can be set or read.
By calling setDataStore() the object is made available to the different threads.
Below is my code:
#ifndef IF_DATA_STORE_H
#define IF_DATA_STORE_H
#include <mutex>
#include <shared_mutex>
#include <memory>
class DataType1{public:int value;};
class DataType2{public:int value;};
class DataStore
{
public:
DataStore(): _member1(), _member2(){}
~DataStore(){}
// for member1
void setMember1(const DataType1& val)
{
std::unique_lock lock(_mtx1); // no one can read/write!
_member1 = val;
}
const DataType1& getMember1() const
{
std::shared_lock lock(_mtx1); // multiple threads can read!
return _member1;
}
// for member2
void setMember2(const DataType2& val)
{
std::unique_lock lock(_mtx2); // no one can read/write!
_member2 = val;
}
const DataType2& getMember2() const
{
std::shared_lock lock(_mtx2); // multiple threads can read!
return _member2;
}
private:
mutable std::shared_mutex _mtx1;
mutable std::shared_mutex _mtx2;
DataType1 _member1;
DataType2 _member2;
// different other member!
};
// now see where data is generated/consumed!
class Generator
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() //this is called from different thread and updating values
{
// some code...
{
_data.value = 10; // keep a local updated copy of data!
_store->setMember1(_data);
}
}
private:
std::shared_ptr<DataStore> _store;
DataType1 _data;
};
class Consumer
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() // running a check on datastore every 1sec
{
// some code...
auto val = _store->getMember1();
// do something..
}
private:
std::shared_ptr<DataStore> _store;
};
// finally start all!
int main()
{
// somewhere in main thread
auto store = std::make_shared<DataStore>();
Consumer c; Generator g;
c.setDataStore(store); c.start();
g.setDataStore(store); g.start();
}
#endif
Questions:
Is there any other way than creating multiple shared mutex for each member?
In Generator::threadRoutine(), if I keep a local copy of DataType1, does this cause high memory usage when the block is called frequently? I see high CPU and memory, but I don't know if this is the root cause of it.
Is there any other, better way?
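For what it's worth, one way to cut down the per-member boilerplate (a sketch only, with a made-up Guarded name, not a definitive answer to the question) is to bundle each value with its own lock in a small template and return copies from the getter:
#include <shared_mutex>

template <typename T>
class Guarded
{
public:
    T load() const
    {
        std::shared_lock lock(_mtx); // multiple readers
        return _value;               // returned by value, so no reference outlives the lock
    }
    void store(const T& v)
    {
        std::unique_lock lock(_mtx); // exclusive writer
        _value = v;
    }
private:
    mutable std::shared_mutex _mtx;
    T _value{};
};

// DataStore could then simply hold Guarded<DataType1> _member1; and Guarded<DataType2> _member2;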
I'd like to wrap all usages of a class instance with a mutex. Today I have
std::map<int, std::shared_ptr<MyClass>> classes;
and functions to find and return instances, like:
std::shared_ptr<MyClass> GetClass(int i);
I'd like to ensure that GetClass() can only retrieve an instance if someone else hasn't already retrieved it, with some RAII mechanism. Usage would be like:
void CallingFunction()
{
auto c = GetClass(i); // mutex for class id 'i' is acquired here
// some calls to class
c.SomeFunction();
} // mutex is released here when 'c' goes out of scope
With the mutex acquired by CallingFunction() other threads that wanted to access the same class instance would block on their calls to GetClass().
I've been looking at a few ways of doing it, such as with a wrapper class like:
class ClassContainer
{
public:
std::shared_ptr<Class> c;
std::mutex m;
};
Where I'd modify GetClass() to be:
ClassContainer GetClass(int i);
But I'm having trouble figuring out where the std::mutex should be kept. I initially tried storing it in the map, like:
std::map<int, std::pair<std::mutex, std::shared_ptr<MyClass>>> classes;
but that wasn't working well. Now, with ClassContainer, how do I have it lock the std::mutex, std::lock_guard<>-style, when the caller acquires one via a call to GetClass()?
I've been looking at a few ways of doing it, such as with a wrapper class like:
Yes, this is a proper way to do it and you are close, but you cannot keep the mutex itself in this class, only a locker, and std::unique_lock is the proper type for that as it has the necessary move constructor etc. I would make the fields private, though, and create the necessary accessors:
class ClassContainer
{
std::shared_ptr<Class> c;
std::unique_lock<std::mutex> lock;
public:
ClassContainer( std::pair<std::mutex,std::shared_ptr<Class>> &p ) :
c( p.second ),
lock( p.first )
{
}
Class * operator->()const { return c.get(); }
Class & operator*() const { return *c; }
};
then usage is simple:
void CallingFunction()
{
auto c = GetClass(i); // mutex for class id 'i' is acquired here
// some calls to class
c->SomeFunction();
// or even
GetClass(i)->SomeFunction();
}
It is Class which should hold the mutex, something like:
class Class
{
public:
// Your methods...
std::mutex& GetMutex() { return m; }
private:
std::mutex m;
};
class ClassContainer
{
public:
ClassContainer(std::shared_ptr<Class> c) :
c(std::move(c)),
l(this->c->GetMutex())
{}
ClassContainer(const ClassContainer&) = delete;
ClassContainer(ClassContainer&&) = delete;
ClassContainer& operator =(const ClassContainer&) = default;
ClassContainer& operator =(ClassContainer&&) = default;
// For transparent pointer like access to Class.
decltype(auto) operator -> () const { return c; }
decltype(auto) operator -> () { return c; }
const Class& operator*() const { return *c; }
Class& operator*() { return *c; }
private:
std::shared_ptr<Class> c;
std::lock_guard<std::mutex> l;
};
ClassContainer GetClass(int i)
{
auto c = std::make_shared<Class>();
return {c}; // syntax which avoids the copy/move constructor.
}
and finally usage:
auto&& cc = GetClass(42); // `auto&&` or `const&` pre-C++17, simple auto possible in C++17
cc->ClassMethod();
Simplified demo.
Coincidentally, I did something extremely similar recently (only I returned references to objects instead of shared_ptr). The code worked like the following:
struct locked_queue {
locked_queue(locked_queue&& ) = default;
mutable std::unique_lock<decltype(queue::mutex)> lock;
const queue::q_impl_t& queue; // std::deque
};
And here is how it would be used:
locked_queue ClassX::get_queue(...) {
return {std::unique_lock<decltype(mutex)>{mutex}, queue_impl};
}
With multiple threads (std::async) sharing an instance of the following class through a shared_ptr, is it possible to get a segmentation fault in this part of the code? If my understanding of std::mutex is correct, mutex.lock() causes all other threads trying to call mutex.lock() to block until mutex.unlock() is called, thus access to the vector should happen purely sequentially. Am I missing something here? If not, is there a better way of designing such a class (maybe with a std::atomic_flag)?
#include <mutex>
#include <vector>
class Foo
{
private:
std::mutex mutex;
std::vector<int> values;
public:
Foo();
void add(const int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(const int value)
{
mutex.lock();
values.push_back(value);
mutex.unlock();
}
int Foo::get()
{
mutex.lock();
int value;
if ( values.size() > 0 )
{
value = values.back();
values.pop_back();
}
else
{
value = 0;
}
mutex.unlock();
return value;
}
Disclaimer: The default value of 0 in get() is intended as it has a special meaning in the rest of the code.
Update: The above code is exactly as I use it, except for the typo push_Back of course.
Other than not using RAII to acquire the lock and using size() > 0 instead of !empty(), the code looks fine. This is exactly how a mutex is meant to be used and this is the quintessential example of how and where you need a mutex.
As Andy Prowl pointed out, instances can't be copy constructed or copy assigned.
Here is the "improved" version:
#include <mutex>
#include <vector>
class Foo {
private:
std::mutex mutex;
typedef std::lock_guard<std::mutex> lock;
std::vector<int> values;
public:
Foo();
void add(int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(int value) {
lock _(mutex);
values.push_back(value);
}
int Foo::get() {
lock _(mutex);
int value = 0;
if ( !values.empty() )
{
value = values.back();
values.pop_back();
}
return value;
}
with RAII for acquiring the mutex etc.
I understand the concept of thread safety. I am looking for advice to simplify thread safety when trying to protect a single variable.
Say I have a variable:
double aPass;
and I want to protect this variable, so I create a mutex:
pthread_mutex_t aPass_lock;
Now there are two good ways I can think of doing this, but they both have annoying disadvantages. The first is to make a thread-safe class to hold the variable:
class aPass {
public:
aPass() {
pthread_mutex_init(&aPass_lock, NULL);
aPass_ = 0;
}
void get(double & setMe) {
pthread_mutex_lock(&aPass_lock);
setMe = aPass_;
pthread_mutex_unlock(&aPass_lock);
}
void set(const double setThis) {
pthread_mutex_lock(&aPass_lock);
aPass_ = setThis;
pthread_mutex_unlock(&aPass_lock);
}
private:
double aPass_;
pthread_mutex_t aPass_lock;
};
Now this will keep aPass totally safe; nothing can mistakenly touch it, yay! However, look at all that mess, and imagine the confusion when accessing it. Gross.
The other way is to have them both accessible and to make sure you lock the mutex before you use aPass.
pthread_mutex_lock(&aPass_lock);
// do something with aPass
pthread_mutex_unlock(&aPass_lock);
But what if someone new comes onto the project, or you forget to lock it just once? I don't like debugging threading problems; they are hard.
Is there a good way (using pthreads, because I have to use QNX, which has little Boost support) to lock single variables that doesn't need a big class and is safer than just creating a mutex to go with each one?
std::atomic<double> aPass;
provided you have C++11.
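A minimal usage sketch, assuming a C++11 compiler:
#include <atomic>

std::atomic<double> aPass{0.0};

// writer thread
aPass.store(3.14);

// reader thread
double value = aPass.load();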
To elaborate on my solution, it would be something like this.
template <typename ThreadSafeDataType>
class ThreadSafeData{
//....
private:
ThreadSafeDataType data;
mutex mut;
};
class apass : public ThreadSafeData<int> {};
Additionally, to make it unique, it might be best to make all operators and members static. For this to work you need to use CRTP, i.e.
template <typename ThreadSafeDataType,class DerivedDataClass>
class ThreadSafeData{
//....
};
class apass : public ThreadSafeData<int, apass> {};
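Filling in the elided parts of the first (non-CRTP) version might look roughly like this; the get/set members and the use of std::mutex are my own assumptions about what was intended:
#include <mutex>

template <typename ThreadSafeDataType>
class ThreadSafeData {
public:
    ThreadSafeDataType get() const {
        std::lock_guard<std::mutex> lock(mut);
        return data;                 // copy returned while the lock is held
    }
    void set(const ThreadSafeDataType& value) {
        std::lock_guard<std::mutex> lock(mut);
        data = value;
    }
private:
    ThreadSafeDataType data{};
    mutable std::mutex mut;          // mutable so get() can be const
};

class apass : public ThreadSafeData<double> {};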
You can easily make your own class that locks the mutex on construction and unlocks it on destruction. This way, no matter what happens, the mutex will be released on return, even if an exception is thrown or any path is taken.
class MutexGuard
{
MutexType & m_Mutex;
public:
inline MutexGuard(MutexType & mutex)
: m_Mutex(mutex)
{
m_Mutex.lock();
};
inline ~MutexGuard()
{
m_Mutex.unlock();
};
};
class TestClass
{
MutexType m_Mutex;
double m_SharedVar;
public:
TestClass()
: m_SharedVar(4.0)
{ }
void Function1()
{
MutexGuard scopedLock(m_Mutex); //lock the mutex
m_SharedVar+= 2345;
//mutex automatically unlocked
}
void Function2()
{
MutexGuard scopedLock(m_Mutex); //lock the mutex
m_SharedVar*= 234;
throw std::runtime_error("Mutex automatically unlocked");
}
};
Access to m_SharedVar is mutually exclusive between Function1() and Function2(), and the mutex will always be unlocked on return.
Boost has built-in types to accomplish this: boost::mutex::scoped_lock and boost::lock_guard.
You can create a class which acts as a generic wrapper around your variable, synchronising access to it.
Add an overloaded assignment operator and you are done.
Consider using the RAII idiom; the code below is just the idea, it's not tested:
template<typename T, typename U>
struct APassHelper : boost::noncopyable
{
APassHelper(T& v, U& mutex) : v_(v), mutex_(mutex) {
pthread_mutex_lock(&mutex_);
}
~APassHelper() {
pthread_mutex_unlock(&mutex_);
}
void UpdateAPass(T t) {
v_ = t;
}
private:
T& v_;
U& mutex_;
};
double aPass;
pthread_mutex_t aPass_lock;
APassHelper<double, pthread_mutex_t> temp(aPass, aPass_lock);
temp.UpdateAPass(10);
You can modify your aPass class by using operators instead of get/set:
class aPass {
public:
aPass() {
pthread_mutex_init(&aPass_lock, NULL);
aPass_ = 0;
}
operator double () const {
double setMe;
pthread_mutex_lock(&aPass_lock);
setMe = aPass_;
pthread_mutex_unlock(&aPass_lock);
return setMe;
}
aPass& operator = (double setThis) {
pthread_mutex_lock(&aPass_lock);
aPass_ = setThis;
pthread_mutex_unlock(&aPass_lock);
return *this;
}
private:
double aPass_;
mutable pthread_mutex_t aPass_lock; // mutable so the const conversion operator can lock it
};
Usage:
aPass a;
a = 0.5;
double b = a;
This could of course be templated to support other types. Note, however, that a mutex is overkill in this case: atomic operations are generally enough when protecting loads and stores of small data types. If possible you should use C++11 std::atomic<double>.
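A sketch of what that templated version might look like, sticking with pthreads since QNX is the target (the name Locked is illustrative):
#include <pthread.h>

template <typename T>
class Locked {
public:
    Locked() : value_() {
        pthread_mutex_init(&lock_, NULL);
    }
    ~Locked() {
        pthread_mutex_destroy(&lock_);
    }
    operator T () const {
        pthread_mutex_lock(&lock_);
        T copy = value_;
        pthread_mutex_unlock(&lock_);
        return copy;
    }
    Locked& operator = (const T& v) {
        pthread_mutex_lock(&lock_);
        value_ = v;
        pthread_mutex_unlock(&lock_);
        return *this;
    }
private:
    T value_;
    mutable pthread_mutex_t lock_; // mutable so the const conversion operator can lock it
};

// Usage, mirroring the example above:
// Locked<double> aPass;
// aPass = 0.5;
// double b = aPass;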