Select mutex or dummy mutex at runtime - C++
I have a class that is shared between several projects, some uses of it are single-threaded and some are multi-threaded. The single-threaded users don't want the overhead of mutex locking, and the multi-threaded users don't want to do their own locking and want to be able to optionally run in "single-threaded mode." So I would like to be able to select between real and "dummy" mutexes at runtime.
Ideally, I would have a shared_ptr<something> and assign either a real or fake mutex object. I would then "lock" this without regard to what's in it.
unique_lock<something> guard(*mutex);
... critical section ...
Now there is a signals2::dummy_mutex but it does not share a common base class with boost::mutex.
So, what's an elegant way to select between a real mutex and a dummy mutex (either the one in signals2 or something else) without making the lock/guard code more complicated than the example above?
And, before you point out the alternatives:
I could select an implementation at compile time, but preprocessor macros are ugly and maintaining project configurations is painful for us.
Users of the class in a multi-threaded environment do not want to take on the responsibility of locking their use of the class; they want the class to do its own locking internally.
There are too many APIs and existing usages involved for a "thread-safe wrapper" to be a practical solution.
How about something like this?
It's untested, but it should be close to correct.
You might consider making the template class hold the mutex by value rather than by pointer, if your mutex types support the right kinds of construction. Otherwise you could specialise MyMutex to get value behaviour.
Also, it's not being careful about copying or destruction... I leave that as an exercise for the reader ;) (a shared_ptr, or storing a value rather than a pointer, should fix this).
Oh, and the code would be nicer using RAII rather than explicit lock/unlock... but that's a different question. I assume that's what the unique_lock in your code does?
struct IMutex
{
    virtual ~IMutex() {}
    virtual void lock() = 0;
    virtual bool try_lock() = 0;
    virtual void unlock() = 0;
};

template<typename T>
class MyMutex : public IMutex
{
public:
    MyMutex(T* t) : t_(t) {}
    void lock() { t_->lock(); }
    bool try_lock() { return t_->try_lock(); }
    void unlock() { t_->unlock(); }
protected:
    T* t_;
};

IMutex* createMutex()
{
    if (isMultithreaded())
    {
        return new MyMutex<boost::mutex>(new boost::mutex);
    }
    else
    {
        return new MyMutex<boost::signals2::dummy_mutex>(new boost::signals2::dummy_mutex);
    }
}

int main()
{
    IMutex* mutex = createMutex();
    // ...
    {
        boost::unique_lock<IMutex> guard(*mutex);
        // ...
    }
}
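As the caveat above notes, the pointer-holding proxy never deletes the mutex it wraps. A minimal sketch of the "store a value" fix, using C++11's std::mutex and a hand-rolled no-op mutex purely to keep the example self-contained (NullMutex and OwningMutex are illustrative names, not from the original answer):

```cpp
#include <mutex>

struct IMutex {
    virtual ~IMutex() {}
    virtual void lock() = 0;
    virtual bool try_lock() = 0;
    virtual void unlock() = 0;
};

// Owns the wrapped mutex by value, so no manual delete is needed
// and the copying/destruction concerns go away.
template <typename T>
class OwningMutex : public IMutex {
public:
    void lock() override { t_.lock(); }
    bool try_lock() override { return t_.try_lock(); }
    void unlock() override { t_.unlock(); }
private:
    T t_;
};

// A no-op mutex for the single-threaded case.
struct NullMutex {
    void lock() {}
    bool try_lock() { return true; }
    void unlock() {}
};
```

Instantiating OwningMutex<std::mutex> or OwningMutex<NullMutex> then replaces the two new-expressions in createMutex above.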
Since the two mutex classes, signals2::dummy_mutex and boost::mutex, don't share a common base class, you could use something like "external polymorphism" to allow them to be treated polymorphically. You'd then use them as locking strategies for a common mutex/lock interface. This allows you to avoid "if" statements in the lock implementation.
NOTE: This is basically what Michael's proposed solution implements. I'd suggest going with his answer.
Have you ever heard of Policy-Based Design?
You can define a lock policy interface, and the user may choose whichever policy she wishes. For ease of use, the "default" policy is specified through a compile-time macro:
#ifndef PROJECT_DEFAULT_LOCK_POLICY
#define PROJECT_DEFAULT_LOCK_POLICY TrueLock
#endif
template <class LP = PROJECT_DEFAULT_LOCK_POLICY>
class MyClass {};
This way, your users can choose their policies with a simple compile-time switch, and may override it one instance at a time ;)
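To make the switch concrete, here is a minimal self-contained sketch under assumed policy names (TrueLock and NoLock are illustrative; only the PROJECT_DEFAULT_LOCK_POLICY macro comes from the answer above, and std::mutex stands in for whatever real lock the project uses):

```cpp
#include <mutex>

// Hypothetical policies: a real lock and a no-op lock.
struct TrueLock {
    std::mutex m;
    void lock()   { m.lock(); }
    void unlock() { m.unlock(); }
};

struct NoLock {
    void lock()   {}
    void unlock() {}
};

// Project-wide default, overridable from the build system.
#ifndef PROJECT_DEFAULT_LOCK_POLICY
#define PROJECT_DEFAULT_LOCK_POLICY TrueLock
#endif

template <class LP = PROJECT_DEFAULT_LOCK_POLICY>
class MyClass {
    LP lock_;
public:
    int increment(int& counter) {
        lock_.lock();
        int result = ++counter;  // protected work
        lock_.unlock();
        return result;
    }
};
```

A multi-threaded project keeps writing MyClass<>, while a single-threaded caller writes MyClass<NoLock> to opt out, one instance at a time.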
This is my solution:
std::unique_lock<std::mutex> lock = dummy ?
std::unique_lock<std::mutex>(mutex, std::defer_lock) :
std::unique_lock<std::mutex>(mutex);
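Wrapped in a helper, the idiom looks like this (maybe_lock is an illustrative name; the key point is that std::defer_lock constructs the guard without acquiring the mutex, and unique_lock's destructor only unlocks when the lock is actually held):

```cpp
#include <mutex>

// Acquires `m` only when `use_lock` is true; otherwise returns a
// deferred (non-owning) lock that does nothing on destruction.
std::unique_lock<std::mutex> maybe_lock(std::mutex& m, bool use_lock) {
    return use_lock ? std::unique_lock<std::mutex>(m)
                    : std::unique_lock<std::mutex>(m, std::defer_lock);
}
```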
Is this not sufficient?
class SomeClass
{
public:
    SomeClass();
    ~SomeClass();
    void Work(bool isMultiThreaded = false)
    {
        if (isMultiThreaded)
        {
            std::lock_guard<std::mutex> lock(mutex_); // mutex lock ...
            DoSomething();
        }
        else
        {
            DoSomething();
        }
    }
private:
    void DoSomething();
    std::mutex mutex_;
};
In general, a mutex (in the Windows sense, a kernel object) is only needed if the resource is shared between multiple processes. If an instance of the object is unique to a single (possibly multi-threaded) process, then a Critical Section is often more appropriate.
In Windows, entering an uncontended Critical Section is nearly free, so in the single-threaded case it behaves almost like a dummy lock. Not sure what platform you are using.
Just FYI, here's the implementation I ended up with.
I did away with the abstract base class, merging it with the no-op "dummy" implementation. Also note the shared_ptr-derived class with an implicit conversion operator. A little too tricky, I think, but it lets me use shared_ptr<IMutex> objects where I previously used boost::mutex objects with zero changes.
header file:
class Foo {
    // ...
private:
    struct IMutex {
        virtual ~IMutex() { }
        virtual void lock() { }
        virtual bool try_lock() { return true; }
        virtual void unlock() { }
    };
    template <typename T> struct MutexProxy;
    struct MutexPtr : public boost::shared_ptr<IMutex> {
        operator IMutex&() { return **this; }
    };
    typedef boost::unique_lock<IMutex> MutexGuard;
    mutable MutexPtr mutex;
};
implementation file:
template <typename T>
struct Foo::MutexProxy : public IMutex {
    virtual void lock() { mutex.lock(); }
    virtual bool try_lock() { return mutex.try_lock(); }
    virtual void unlock() { mutex.unlock(); }
private:
    T mutex;
};

Foo::Foo(...) {
    mutex.reset(single_thread ? new IMutex : new MutexProxy<boost::mutex>);
}

void Foo::Method() {
    MutexGuard guard(mutex);
}
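For comparison, here is a self-contained C++11 rendering of the same trick with the standard library in place of boost (std::shared_ptr, std::mutex; the counter and method names are illustrative). The base class again doubles as the no-op "dummy" implementation, and std::unique_lock works with it because it only requires lock()/unlock():

```cpp
#include <memory>
#include <mutex>

class Foo {
    // Base class doubles as the no-op "dummy" mutex.
    struct IMutex {
        virtual ~IMutex() {}
        virtual void lock() {}
        virtual bool try_lock() { return true; }
        virtual void unlock() {}
    };

    // Forwards to a real mutex held by value.
    template <typename T>
    struct MutexProxy : IMutex {
        void lock() override { m.lock(); }
        bool try_lock() override { return m.try_lock(); }
        void unlock() override { m.unlock(); }
        T m;
    };

    std::shared_ptr<IMutex> mutex_;
    int counter_ = 0;  // illustrative protected state

public:
    explicit Foo(bool single_thread)
        : mutex_(single_thread
                     ? std::shared_ptr<IMutex>(new IMutex)
                     : std::shared_ptr<IMutex>(new MutexProxy<std::mutex>)) {}

    int method() {
        std::unique_lock<IMutex> guard(*mutex_);
        return ++counter_;  // critical section
    }
};
```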
Policy-based option:

#include <mutex>

class SingleThreadedPolicy {
public:
    class Mutex {
    public:
        void Lock() {}
        void Unlock() {}
        bool TryLock() { return true; }
    };

    class ScopedGuard {
    public:
        ScopedGuard(Mutex& mutex) {}
    };
};

class MultithreadingPolicy {
public:
    class ScopedGuard;

    class Mutex {
        friend class ScopedGuard;
    private:
        std::mutex mutex_;
    public:
        void Lock() {
            mutex_.lock();
        }
        void Unlock() {
            mutex_.unlock();
        }
        bool TryLock() {
            return mutex_.try_lock();
        }
    };

    class ScopedGuard {
    private:
        std::lock_guard<std::mutex> lock_;
    public:
        ScopedGuard(Mutex& mutex) : lock_(mutex.mutex_) {}
    };
};
Then it can be used as follows:
template<class ThreadingPolicy = SingleThreadedPolicy>
class MyClass {
private:
    typedef typename ThreadingPolicy::Mutex Mutex;
    typedef typename ThreadingPolicy::ScopedGuard ScopedGuard;
    Mutex mutex_;
public:
    void DoSomething() {
        ScopedGuard guard(mutex_);
        std::cout << "Hello World" << std::endl;
    }
};
Related
Thread safety of nested calls
I have two libs. One is thread-safe: class A. The other, class B, uses class A to implement its functions.

class A {
public:
    void Get() {
        std::lock_guard<std::mutex> lock(mutex_);
        // do_something
    }
    void Put() {
        std::lock_guard<std::mutex> lock(mutex_);
        // do_something
    }
private:
    std::mutex mutex_;
};

class B {
public:
    void Get() { a.Get(); }
    void Put() { a.Put(); }
private:
    A a;
};

So is class B thread-safe? I know that judging whether code is thread-safe depends on whether its operations are atomic: if the Put operation is not atomic, then it's not thread-safe. By that reasoning I think class B's operations are not atomic, so it is not thread-safe? When an operation is not atomic, it may not be thread-safe. For example, if we add some operations as below, is that right?

class B {
public:
    void Get() {
        // But Get is not atomic!!!
        do_some_thing(); // atomic
        a.Get();         // atomic
        do_some_thing(); // atomic
    }
    void Put() {
        do_some_thing();
        a.Put();
        do_some_thing();
    }
private:
    A a;
};
Thread safety is about race conditions and data races. Since the methods of class B don't touch any data directly, but only delegate to methods of class A that, as you said, are thread-safe, the methods of B are thread-safe.
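The question's second half can be made concrete: even when every call into A is individually atomic, a method of B that performs a check-then-act sequence of such calls is not atomic as a whole, because another thread can interleave between the calls. A hypothetical sketch (class names follow the question; add_if_below is illustrative):

```cpp
#include <mutex>

// A thread-safe counter: each method is atomic on its own.
class A {
public:
    void add(int n) { std::lock_guard<std::mutex> l(m_); v_ += n; }
    int  get() const { std::lock_guard<std::mutex> l(m_); return v_; }
private:
    mutable std::mutex m_;
    int v_ = 0;
};

// B's method is a *sequence* of atomic calls, so another thread
// can modify the state between the get() and the add().
class B {
public:
    void add_if_below(int limit, int n) {
        if (a_.get() < limit)  // atomic read ...
            a_.add(n);         // ... but the pair is not atomic
    }
    int get() { return a_.get(); }
private:
    A a_;
};
```

Single-threaded this behaves as expected; under concurrency, two threads can both pass the get() check before either add() runs, overshooting the limit.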
Is there a better way to make a class support thread-safe version and unsafe version?
I have lots of structures like the one below, to support both a thread-safe version and an unsafe version:

class NullMutex {
public:
    void lock() {}
    void unlock() noexcept {}
    bool try_lock() { return true; }
};

class IFoo {
public:
    virtual void DoSomething() = 0;
};

template<typename Mutex>
class Foo : public IFoo {
    Mutex m_Mutex;
public:
    void DoSomething() override {
        std::lock_guard<Mutex> guard{ m_Mutex };
        // ... some operations
    }
};

using FooSt = Foo<NullMutex>;   // Single thread
using FooMt = Foo<std::mutex>;  // Multi-thread

int main() {
    std::shared_ptr<IFoo> foo{ new FooMt };
}

But as you can see, I must write all the implementation in a header file, which also increases my compilation time. The worst thing is that any class that needs to create one must itself take a template argument to choose the Mutex type (so that class also becomes a template, and the situation gets worse). One solution I've thought of is to define a using Mutex = std::mutex; in a header file, but that header would then need to be included in every file in my project. Is there a better way to solve this problem? Any suggestion will be really appreciated.
Singleton with multithreads
This question was asked in an interview. The first part was to write the singleton class:

class Singleton {
    static Singleton *singletonInstance;
    Singleton() {}
public:
    static Singleton* getSingletonInstance() {
        if (singletonInstance == NULL) {
            singletonInstance = new Singleton();
        }
        return singletonInstance;
    }
};

Then I was asked how to handle getSingletonInstance() in a multithreaded situation. I wasn't really sure, but I modified it as:

class Singleton {
    static Singleton *singletonInstance;
    Singleton() {}
    static mutex m_;
public:
    static Singleton* getSingletonInstance() {
        m_pend();
        if (singletonInstance == NULL) {
            singletonInstance = new Singleton();
        }
        return singletonInstance;
    }
    static void releaseSingleton() {
        m_post();
    }
};

Then I was told that although a mutex is required, pending and posting a mutex is inefficient because it takes time, and that there is a better way to handle this situation. Does anybody know a better, more efficient way to handle the singleton class in a multithreaded situation?
In C++11, the following is guaranteed to perform thread-safe initialisation:

static Singleton* getSingletonInstance() {
    static Singleton instance;
    return &instance;
}

In C++03, a common approach was to use double-checked locking: checking a flag (or the pointer itself) to see if the object might be uninitialised, and only locking the mutex if it might be. This requires some kind of non-standard way of atomically reading the pointer (or an associated boolean flag); many implementations incorrectly use a plain pointer or bool, with no guarantee that changes made on one processor are visible on others. The code might look something like this, although I've almost certainly got something wrong:

static Singleton* getSingletonInstance() {
    if (!atomic_read(singletonInstance)) {
        mutex_lock lock(mutex);
        if (!atomic_read(singletonInstance)) {
            atomic_write(singletonInstance, new Singleton);
        }
    }
    return singletonInstance;
}

This is quite tricky to get right, so I suggest you don't bother. In C++11, you could use standard atomic and mutex types if, for some reason, you want to keep the dynamic allocation of your example.

Note that I'm only talking about synchronised initialisation, not synchronised access to the object (which your version provides by locking the mutex in the accessor, and releasing it later via a separate function). If you need the lock to safely access the object itself, then you obviously can't avoid locking on every access.
As @piokuc suggested, you can also use a once function here. If you have C++11:

#include <mutex>

static void init_singleton() {
    singletonInstance = new Singleton;
}

static std::once_flag singleton_flag;

Singleton* getSingletonInstance() {
    std::call_once(singleton_flag, init_singleton);
    return singletonInstance;
}

And, yes, this will work sensibly if new Singleton throws an exception.
If you have C++11 you can make singletonInstance an atomic variable, then use a double-checked lock:

if (singletonInstance == NULL) {
    // lock the mutex
    if (singletonInstance == NULL) {
        singletonInstance = new Singleton;
    }
    // unlock the mutex
}
return singletonInstance;
You should actually lock the singleton, and not the instance. If the instance requires locking, that should be handled by the caller (or perhaps by the instance itself, depending on what kind of interface it exposes).

Update: sample code:

#include <mutex>

class Singleton {
    static Singleton *singletonInstance;
    Singleton() {}
    static std::mutex m_;
public:
    static Singleton* getSingletonInstance() {
        std::lock_guard<std::mutex> lock(m_);
        if (singletonInstance == nullptr) {
            singletonInstance = new Singleton();
        }
        return singletonInstance;
    }
};
If you use POSIX threads you can use the pthread_once_t and pthread_key_t facilities; this way you can avoid using mutexes altogether. For example:

template<class T>
class ThreadSingleton : private NonCopyable {
public:
    ThreadSingleton();
    ~ThreadSingleton();
    static T& instance();
private:
    ThreadSingleton(const ThreadSingleton&);
    const ThreadSingleton& operator=(const ThreadSingleton&);

    static pthread_once_t once_;
    static pthread_key_t key_;
    static void init(void);
    static void cleanUp(void*);
};

And the implementation:

template<class T> pthread_once_t ThreadSingleton<T>::once_ = PTHREAD_ONCE_INIT;
template<class T> pthread_key_t ThreadSingleton<T>::key_;

template<class T>
T& ThreadSingleton<T>::instance() {
    pthread_once(&once_, init);
    T* value = (T*)pthread_getspecific(key_);
    if (!value) {
        value = new T();
        pthread_setspecific(key_, value);
    }
    return *value;
}

template<class T>
void ThreadSingleton<T>::cleanUp(void* data) {
    delete (T*)data;
    pthread_setspecific(key_, 0);
}

template<class T>
void ThreadSingleton<T>::init() {
    pthread_key_create(&key_, cleanUp);
}
custom RAII C++ implementation for scoped mutex locks
I cannot use boost or the latest std::thread library, so the way to go is to create a custom implementation of a scoped mutex. In a few words: when a class instance is created, a mutex locks; upon class destruction, the mutex is unlocked. Is any implementation available? I don't want to re-invent the wheel. I need to use pthreads. (Resource acquisition is initialization == "RAII".)
Note: this is an old answer. C++11 contains better helpers that are more platform-independent:

std::lock_guard
std::mutex, std::timed_mutex, std::recursive_mutex, std::recursive_timed_mutex
And other options like std::unique_lock, boost::unique_lock

Any RAII tutorial will do. Here's the gist (also on http://ideone.com/kkB86):

// stub mutex_t: implement this for your operating system
struct mutex_t {
    void Acquire() {}
    void Release() {}
};

struct LockGuard {
    LockGuard(mutex_t& mutex) : _ref(mutex) {
        _ref.Acquire(); // TODO operating system specific
    }
    ~LockGuard() {
        _ref.Release(); // TODO operating system specific
    }
private:
    LockGuard(const LockGuard&); // or use c++0x ` = delete`
    mutex_t& _ref;
};

int main() {
    mutex_t mtx;
    {
        LockGuard lock(mtx);
        // LockGuard copy(lock);  // ERROR: constructor private
        // lock = LockGuard(mtx); // ERROR: no default assignment operator
    }
}

Of course you can make it generic over mutex_t, and you could prevent subclassing. Copying/assignment is already prohibited because of the reference member.

EDIT: For pthreads:

struct mutex_t {
public:
    mutex_t(pthread_mutex_t& lock) : m_mutex(lock) {}
    void Acquire() { pthread_mutex_lock(&m_mutex); }
    void Release() { pthread_mutex_unlock(&m_mutex); }
private:
    pthread_mutex_t& m_mutex;
};
concurrent reference counter class and scoped retain: is this ok?
This is a question regarding coding design, so please forgive the long code listings: I could not summarize these ideas and the potential pitfalls without showing the actual code.

I am writing a ConcurrentReferenceCounted class and would appreciate some feedback on my implementation. Sub-classes of this class will receive "release" instead of a direct delete. Here is the class:

class ConcurrentReferenceCounted : private NonCopyable {
public:
    ConcurrentReferenceCounted() : ref_count_(1) {}
    virtual ~ConcurrentReferenceCounted() {}

    void retain() {
        ScopedLock lock(mutex_);
        ++ref_count_;
    }

    void release() {
        bool should_die = false;
        {
            ScopedLock lock(mutex_);
            should_die = --ref_count_ == 0;
        }
        if (should_die) delete this;
    }

private:
    size_t ref_count_;
    Mutex mutex_;
};

And here is a scoped retain:

class ScopedRetain {
public:
    ScopedRetain(ConcurrentReferenceCounted* object) : object_(object) {
        retain();
    }
    ScopedRetain() : object_(NULL) {}
    ~ScopedRetain() {
        release();
    }
    void hold(ConcurrentReferenceCounted* object) {
        assert(!object_); // cannot hold more than 1 object
        object_ = object;
        retain();
    }
private:
    ConcurrentReferenceCounted* object_;
    void release() {
        if (object_) object_->release();
    }
    void retain() {
        object_->retain();
    }
};

And finally this is a use case:

Object* target;
ScopedRetain sr;

if (objects_.get(key, &target))
    sr.hold(target);
else
    return;

// use target
// no need to 'release'
Your ConcurrentReferenceCounted uses a full mutex, which is not necessary and not very fast. Reference counting can be implemented atomically using architecture-dependent interlocked instructions. Under Windows, the InterlockedXXX family of functions simply wraps these instructions.
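A sketch of that suggestion in portable C++11, using std::atomic in place of the Windows-specific Interlocked functions (the memory-order choices follow the common reference-counting idiom; this is an illustration, not the poster's code):

```cpp
#include <atomic>
#include <cstddef>

class ConcurrentReferenceCounted {
public:
    ConcurrentReferenceCounted() : ref_count_(1) {}
    virtual ~ConcurrentReferenceCounted() {}

    void retain() {
        // Plain atomic increment; no ordering needed beyond atomicity.
        ref_count_.fetch_add(1, std::memory_order_relaxed);
    }

    void release() {
        // acq_rel so every write made before the final release is
        // visible to whichever thread ends up running the destructor.
        if (ref_count_.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }

private:
    std::atomic<std::size_t> ref_count_;
};
```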