Using weak_ptr to implement the Observer pattern - C++

What I have so far is:
Observer.h
class Observer
{
public:
virtual ~Observer();
virtual void Notify() = 0;
protected:
Observer();
};
class Observable
{
public:
virtual ~Observable();
void Subscribe( std::shared_ptr<Observer> observer );
void Unsubscribe( std::shared_ptr<Observer> observer );
void Notify();
protected:
Observable();
private:
std::vector<std::weak_ptr<Observer>> observers;
};
Observer.cpp
void Observable::Subscribe( std::shared_ptr<Observer> observer )
{
observers.push_back( observer );
}
void Observable::Unsubscribe( std::shared_ptr<Observer> observer )
{
???
}
void Observable::Notify()
{
for ( const auto& wptr : observers )
{
if ( auto observer = wptr.lock() )
{
observer->Notify();
}
}
}
(de/constructors are implemented here but empty, so I've left them out)
What I'm stuck on is how to implement the Unsubscribe procedure. I came across the erase-remove idiom, but I understand that it will not work "out of the box" with how I have set up my Observable. How do I inspect the weak_ptr elements in the observers vector so that I can remove the desired Observer?
I'm also looking for some advice on what the parameter type should be for my Un/Subscribe procedures. Would it be better to use std::shared_ptr<Observer>& or const std::shared_ptr<Observer>&, since we will not be modifying it?
I really do not want to have Observables owning their Observers, as it seems to betray the intentions of the pattern, and is certainly not how I want to structure the rest of the project that will ultimately be making use of the pattern. That said, an added layer of security / automation that I am considering is to have Observers store a mirror vector of weak_ptr. An Observer on its way out could then unsubscribe from all Observables it had subscribed to, and an Observable on its way out could erase the back-reference to itself from each of the Observers observing it. Evidently the two classes would be friends in such a scenario.

You can use std::remove_if together with vector::erase (the erase-remove idiom) like this:
void Observable::Unsubscribe( std::shared_ptr<Observer> observer )
{
this->observers.erase(
std::remove_if(
this->observers.begin(),
this->observers.end(),
[&](const std::weak_ptr<Observer>& wptr)
{
return wptr.expired() || wptr.lock() == observer;
}
),
this->observers.end()
);
}
You should indeed pass observer as const std::shared_ptr<Observer>&.

What I'm stuck on is how to implement the Unsubscribe procedure.
I suggest storing the observers in a std::list, because its iterators are not invalidated by container modification. Then, on subscribe, you store the resulting iterator in the observer, and on unsubscribe you use that iterator to remove the element.
But of course you can use std::vector and std::remove_if as suggested in another answer.
Now about all that *_ptr stuff. In C++, RAII is your friend, so use it. Get rid of the public unsubscribe method; instead, the observer must unsubscribe itself in its destructor. This simplifies things very much: no more locking of weak pointers, because if an observer has been deleted, it is no longer on the list. Just don't forget to protect the observer list with a mutex if you have a multithreaded application. With this design, the Observable needs only plain pointers to the Observers, and there are no requirements on how the Observers must be stored.
class Observer {
public:
void subscribe(std::function<void()> unsubscribe) {
unsubscribe_ = std::move(unsubscribe);
}
virtual ~Observer() {
if (unsubscribe_) unsubscribe_();
}
private:
std::function<void()> unsubscribe_;
};
class Observable {
public:
void subscribe(Observer* observer) {
std::lock_guard<std::mutex> lock(observablesMutex_);
auto itr = observers_.insert(observers_.end(), observer);
observer->subscribe([this, itr]{
std::lock_guard<std::mutex> lock(observablesMutex_);
observers_.erase(itr);
});
}
private:
std::list<Observer*> observers_;
std::mutex observablesMutex_;
};
Note: for this code Observers must always be destroyed before the Observable.
Update: if you get more used to C++ lambdas, you may find that having a std::function as the observer is more convenient in many cases than having a special class hierarchy. In that case your API can look like this:
class Handle {
public:
explicit Handle(std::function<void()> onDestroy)
: onDestroy_(std::move(onDestroy)) {}
Handle(const Handle&) = delete;
Handle(Handle&&) = default;
~Handle() {
if (onDestroy_) onDestroy_();
}
private:
std::function<void()> onDestroy_;
};
class Observable {
public:
Handle subscribe(std::function<void()> observer) {
std::lock_guard<std::mutex> lock(observablesMutex_);
auto itr = observers_.insert(observers_.end(), std::move(observer));
return Handle{[this, itr]{
std::lock_guard<std::mutex> lock(observablesMutex_);
observers_.erase(itr);
}};
}
private:
std::list<std::function<void()>> observers_;
std::mutex observablesMutex_;
};

Related

How to check if caller still exist in task callback

A very common scenario for a thread's callback is to inform the caller that it has finished its job. Here's a minimal example:
class task
{
public:
void operator()(std::function<void()>&& callback)
{
std::thread t
{
[c = std::move(callback)]{
std::this_thread::sleep_for(std::chrono::milliseconds{100});
c();
}
};
t.detach();
}
};
class processor
{
public:
void new_task()
{
auto& t = tasks.emplace_back();
t([this]{ if (true/*this object is still alive*/) finish_callback(); });
}
private:
void finish_callback()
{
// ...
}
private:
std::vector<task> tasks;
};
In such a scenario, we have to support the case where the child task outlives the parent/caller. Is there any common design pattern that allows us to do this?
Theoretically, we may use shared_ptr + enable_shared_from_this + weak_ptr trio in such case, but this forces us to always store the parent object on the heap under shared_ptr. I would rather like to not have such a limitation.

Using weak_ptr.lock() to remove observers when dead

I am currently trying to implement the Observer pattern in a little game, using this as a reference. I am trying to implement a solution to one of the problems exposed in the chapter: how do you prevent the subject from sending a notification to a removed observer?
Instead of having plain pointers or owning the observers in the subject class (with a vector of shared_ptrs, plain pointers, or anything else that implies owning the object), I thought that it could be good to have weak_ptrs (knowing that the observers are already instantiated using make_shared and shared_ptr). The primary use of this to me would be the ability to check whether the observer is still alive with weak_ptr.lock() and to remove it if it isn't, but is this good practice? Does it imply other problems?
Here's my (stripped down) code :
Main method :
int main() {
shared_ptr<Game> game = std::make_shared<Game>(); //Game extends subject
shared_ptr<Renderer> renderer = std::make_shared<Renderer>(); //Renderer extends observer
game->addObserver(renderer); //Implicit conversion from shared_ptr to weak_ptr
}
Subject class :
class Subject : public std::enable_shared_from_this<Subject> {
public:
virtual void removeObserverAt(int index) {
m_observers.erase(m_observers.begin() + index);
}
protected:
virtual void notify(int objectID, Event ev, int64_t timestamp) {
for(auto it = m_observers.begin(); it != m_observers.end();) {
if(auto sobs = it->lock()) {
sobs->onNotify(objectID, shared_from_this(), ev, timestamp);
++it;
} else {
// erase returns the next iterator; erasing inside a range-for would invalidate it
it = m_observers.erase(it);
}
}
}
private:
std::vector<std::weak_ptr<Observer>> m_observers;
};
Observer class :
class Observer {
public:
virtual ~Observer() {}
virtual void onNotify(int objectID, std::weak_ptr<Subject> caller, Event ev, int64_t timestamp) = 0;
private:
};

Using RAII for callback registration in c++

I'm using some API to get a notification. Something like:
NOTIF_HANDLE register_for_notif(CALLBACK func, void* context_for_callback);
void unregister_for_notif(NOTIF_HANDLE notif_to_delete);
I want to wrap it in some decent RAII class that will set an event upon receiving the notification. My problem is how to synchronize it. I wrote something like this:
class NotifClass
{
public:
NotifClass(std::shared_ptr<MyEvent> event):
_event(event),
_notif_handle(register_for_notif(my_notif_callback, (void*)this))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context)
{
((NotifClass*)context)->_event->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NOTIF_HANDLE _notif_handle;
};
But I'm worried about the callback being called during construction/destruction. (Maybe in this specific example shared_ptr will be fine with it, but with other constructed classes it may not be the same.)
I will say again - I don't want a very specific solution for this very specific class, but a more general solution for RAII when passing a callback.
Your concerns about synchronisation are a little misplaced.
To summarise your problem, you have some library with which you can register a callback function and (via the void* pointer, or similar) some resources upon which the function acts, using a register() function. The same library also provides an unregister() function.
Within your code you neither can, nor should, attempt to protect against the possibility that the library calls your callback function after, or while, it is being unregistered via the unregister() function: it is the library's responsibility to ensure that the callback cannot be triggered while it is being, or after it has been, unregistered. The library should worry about synchronisation, mutexes and the rest of that gubbins, not you.
The two responsibilities of your code are to:
ensure you construct the resources upon which the callback acts before registering it, and
ensure that you unregister the callback before you destroy the resources upon which the callback acts.
This inverse order of construction vs destruction is exactly what C++ does with its member variables, and why compilers warn you when you initialise them in the 'wrong' order.
In terms of your example, you need to ensure that 1) register_for_notif() is called after the shared pointer is initialised and 2) unregister_for_notif() is called before the std::shared_ptr (or whatever) is destroyed.
The key to the latter is understanding the order of destruction in a destructor. For a recap, check out the "Destruction sequence" section of the cppreference.com page on destructors.
First, the body of the destructor is executed;
then the compiler calls the destructors for all non-static non-variant members of the class, in reverse order of declaration.
Your example code is, therefore "safe" (or as safe as it can be), because unregister_for_notif() is called in the destructor body, prior to the destruction of the member variable std::shared_ptr<MyEvent> _event.
An alternative (and in some sense more clearly RAII adherent) way to do this would be to separate the notification handle from the resources upon which the callback function operates by splitting it into its own class. E.g. something like:
class NotifHandle {
public:
NotifHandle(void (*callback_fn)(void *), void * context)
: _handle(register_for_notif(callback_fn, context)) {}
~NotifHandle() { unregister_for_notif(_handle); }
private:
NOTIF_HANDLE _handle;
};
class NotifClass {
public:
NotifClass(std::shared_ptr<MyEvent> event)
: _event(event),
_handle(my_notif_callback, (void*)this) {}
~NotifClass() {}
static void my_notif_callback(void* context) {
((NotifClass*)context)->_event->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NotifHandle _handle;
};
The important thing is the member variable declaration order: NotifHandle _handle is declared after the resource std::shared_ptr<MyEvent> _event, so the notification is guaranteed to be unregistered before the resource is destroyed.
You can do this with thread-safe accesses to a static container that holds pointers to your live instances. The RAII class constructor adds this to the container and the destructor removes it. The callback function checks the context against the container and returns if it is not present. It will look something like this (not tested):
class NotifyClass {
public:
NotifyClass(const std::shared_ptr<MyEvent>& event)
: event_(event) {
{
// Add to thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.insert(this);
}
// Register the callback at the end of the constructor to
// ensure initialization is complete.
handle_ = register_for_notif(&callback, this);
}
~NotifyClass() {
unregister_for_notif(handle_);
{
// Remove from thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.erase(this);
}
// Guaranteed not to be called from this point so
// further destruction is safe.
}
static void callback(void *context) {
std::shared_ptr<MyEvent> event;
{
// Ignore if the instance does not exist.
std::lock_guard<std::mutex> lock(mutex_);
if (instances_.count(context) == 0)
return;
NotifyClass *instance = static_cast<NotifyClass*>(context);
event = instance->event_;
}
event->set_event();
}
// Rule of Three. Implement if desired.
NotifyClass(const NotifyClass&) = delete;
NotifyClass& operator=(const NotifyClass&) = delete;
private:
// Synchronized associative container of instances.
static std::mutex mutex_;
static std::unordered_set<void*> instances_;
const std::shared_ptr<MyEvent> event_;
NOTIF_HANDLE handle_;
};
Note that the callback increments the shared pointer and releases the lock on the container before using the shared pointer. This prevents a potential deadlock if triggering MyEvent could synchronously create or destroy a NotifyClass instance.
Technically, the above could fail because of address re-use. That is, if one NotifyClass instance is destroyed and a new instance is immediately created at the exact same memory address, then an API callback meant for the old instance conceivably could be delivered to the new instance. For certain usages, perhaps even most usages, this will not matter. If it does matter, then the static container keys must be made globally unique. This can be done by replacing the set with a map and passing the map key instead of a pointer to the API, e.g.:
class NotifyClass {
public:
NotifyClass(const std::shared_ptr<MyEvent>& event)
: event_(event) {
{
// Add to thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
key_ = nextKey_++;
instances_[key_] = this;
}
// Register the callback at the end of the constructor to
// ensure initialization is complete.
handle_ = register_for_notif(&callback, reinterpret_cast<void *>(key_));
}
~NotifyClass() {
unregister_for_notif(handle_);
{
// Remove from thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.erase(key_);
}
// Guaranteed not to be called from this point so
// further destruction is safe.
}
static void callback(void *context) {
// Ignore if the instance does not exist.
std::shared_ptr<MyEvent> event;
{
std::lock_guard<std::mutex> lock(mutex_);
uintptr_t key = reinterpret_cast<uintptr_t>(context);
auto i = instances_.find(key);
if (i == instances_.end())
return;
NotifyClass *instance = i->second;
event = instance->event_;
}
event->set_event();
}
// Rule of Three. Implement if desired.
NotifyClass(const NotifyClass&) = delete;
NotifyClass& operator=(const NotifyClass&) = delete;
private:
// Synchronized associative container of instances.
static std::mutex mutex_;
static uintptr_t nextKey_;
static std::unordered_map<uintptr_t, NotifyClass*> instances_;
const std::shared_ptr<MyEvent> event_;
NOTIF_HANDLE handle_;
uintptr_t key_;
};
There are two common general solutions for RAII callbacks. One is a common interface held through a shared_ptr to your object. The other is std::function.
Using a common interface allows one smart pointer to control the lifetime of all the callbacks for an object. This is similar to the observer pattern.
class Observer
{
public:
virtual ~Observer() {}
virtual void Callback1() = 0;
virtual void Callback2() = 0;
};
class MyEvent
{
public:
void SignalCallback1()
{
const auto lock = m_spListener.lock();
if (lock) lock->Callback1();
}
void SignalCallback2()
{
const auto lock = m_spListener.lock();
if (lock) lock->Callback2();
}
void RegisterCallbacks(std::shared_ptr<Observer> spListener)
{
m_spListener = spListener;
}
private:
std::weak_ptr<Observer> m_spListener;
};
class NotifClass : public Observer
{
public:
void Callback1() { std::cout << "NotifClass 1" << std::endl; }
void Callback2() { std::cout << "NotifClass 2" << std::endl; }
};
Example use.
MyEvent source;
{
auto notif = std::make_shared<NotifClass>();
source.RegisterCallbacks(notif);
source.SignalCallback1(); // Prints NotifClass 1
}
source.SignalCallback2(); // Doesn't print NotifClass 2
If you use a C-style pair of object pointer and member-function pointer, you have to worry about the address of the object and the member callback separately. std::function can encapsulate these two things nicely with a lambda. This also allows you to manage the lifetime of each callback individually.
class MyEvent
{
public:
void SignalCallback()
{
const auto lock = m_spListener.lock();
if (lock) (*lock)();
}
void RegisterCallback(std::shared_ptr<std::function<void(void)>> spListener)
{
m_spListener = spListener;
}
private:
std::weak_ptr<std::function<void(void)>> m_spListener;
};
class NotifClass
{
public:
void Callback() { std::cout << "NotifClass 1" << std::endl; }
};
Example use.
MyEvent source;
// This doesn't need to be a smart pointer.
auto notif = std::make_shared<NotifClass>();
{
auto callback = std::make_shared<std::function<void(void)>>(
[notif]()
{
notif->Callback();
});
notif = nullptr; // note the callback already captured notif and will keep it alive
source.RegisterCallback(callback);
source.SignalCallback(); // Prints NotifClass 1
}
source.SignalCallback(); // Doesn't print NotifClass 1
AFAICT, you are concerned that my_notif_callback can be called in parallel with the destructor, so that context becomes a dangling pointer. That is a legitimate concern, and I don't think you can solve it with a simple locking mechanism.
Instead, you probably need to use a combination of shared and weak pointers to avoid such dangling pointers. To solve your issue, for example, you can store the event in widget which is a shared_ptr and then you can create a weak_ptr to the widget and pass it as a context to register_for_notif.
In other words, NotifClass has a shared_ptr to the Widget, and the context is a weak_ptr to the Widget. If you can't lock the weak_ptr, the class has already been destructed:
class NotifClass
{
public:
NotifClass(const std::shared_ptr<MyEvent>& event):
_widget(std::make_shared<Widget>(event)),
_notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<Widget>(_widget)))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context)
{
auto ptr = ((std::weak_ptr<Widget>*)context)->lock();
// If destructed, do not set the event.
if (!ptr) {
return;
}
ptr->_event->set_event();
}
private:
struct Widget {
Widget(const std::shared_ptr<MyEvent>& event)
: _event(event) {}
std::shared_ptr<MyEvent> _event;
};
std::shared_ptr<Widget> _widget;
NOTIF_HANDLE _notif_handle;
};
Note that any functionality you want to add to your NotifClass should actually go into Widget. If you don't have such extra functionality, you can skip the Widget indirection and use a weak_ptr to the event as the context:
class NotifClass
{
public:
NotifClass(const std::shared_ptr<MyEvent>& event):
_event(event),
_notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<MyEvent>(event)))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context)
{
auto ptr = ((std::weak_ptr<MyEvent>*)context)->lock();
// If destructed, do not set the event.
if (!ptr) {
return;
}
ptr->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NOTIF_HANDLE _notif_handle;
};
Make certain that the callback object is fully constructed before registering it. That means: make the callback object a separate class and the registration/deregistration wrapper a separate class.
Then you can chain both classes into a member or base-class relationship.
struct A
{ CCallBackObject m_sCallback;
CRegistration m_sRegistration;
A(void)
:m_sCallback(),
m_sRegistration(&m_sCallback)
{
}
};
As an additional benefit, you can reuse the register/unregister wrapper...
If the callback could happen in another thread, I would redesign this software in order to avoid that, e.g. by making the shutdown of the main thread (i.e. destruction of this object) wait until all worker threads are shut down/finished.

Generic observer pattern in C++

In many cases in my application I need class A to register itself as a listener on class B to receive a notification when something happens. In every case I define a separate interface that A implements and B calls. So, for example, B will have the following method:
void registerSomeEventListener(SomeEventListener l);
Also, in many cases B needs to support multiple listeners, so I reimplement the registration and notify-all logic each time.
One generic way I know of is to have EventListener (implemented by A) and EventNotifier (implemented by B) classes. In this case each event is identified by a string, and A implements the method:
void eventNotified(string eventType);
I think this is not a good solution. It will result in many if-else statements if A listens to several events, and might result in bugs when event names are changed only in the listener or only in the notifier.
I wonder what is the correct way to implement the observer pattern in C++?
Take a look at boost::signals2. It provides a generic mechanism to define "signals" to which other objects can connect. The signal owner can then notify observers by "firing" the signal. Instead of register methods, the subject defines signals as members, which keep track of connected observers and notify them when fired. The signals are statically typed and accept any callable with a matching signature. This has the advantage that there is no need for inheritance, and thus a weaker coupling than the traditional observer inheritance hierarchy.
class Subject {
public:
void setData(int x) {
data_ = x;
dataChanged(x);
}
boost::signals2::signal<void (int)> dataChanged;
private:
int data_;
};
class Observer {
public:
Observer(Subject& s) {
c_ = s.dataChanged.connect([&](int x) {this->processData(x);});
}
~Observer() {
c_.disconnect();
}
private:
void processData(int x) {
std::cout << "Updated: " << x << std::endl;
}
boost::signals2::connection c_;
};
int main() {
Subject s;
Observer o1(s);
Observer o2(s);
s.setData(42);
return 0;
}
In this example, the subject holds some int data and notifies all registered observers when the data is changed.
Let's say you have a generic event-firing object:
class base_invoke {
public:
virtual ~base_invoke () {};
virtual void Invoke() = 0;
};
But you want to fire events on different types of objects, so you derive from base_invoke:
template<class C>
class methodWrapper : public base_invoke {
public:
typedef void (C::*pfMethodWrapperArgs0)();
C * mInstance;
pfMethodWrapperArgs0 mMethod;
public:
methodWrapper(C * instance, pfMethodWrapperArgs0 meth)
: mInstance(instance)
{
mMethod = meth;
}
virtual void Invoke () {
(mInstance->*mMethod)();
}
};
Now if you create a wrapper class holding a collection of pointers to base_invoke, you can invoke each firing object and signal whichever method on whichever class you'd like.
You can also turn this collection class into a factory for the firing objects, to simplify the work.
class Event {
protected:
Collection<base_invoke *> mObservers;
public:
// class method observers
template<class C>
void Add (C * classInstance, typename methodWrapper<C>::pfMethodWrapperArgs0 meth) {
methodWrapper<C> * mw = new methodWrapper<C>(classInstance, meth);
mObservers.Add(mw);
}
void Invoke () {
int count = mObservers.Count();
for (int i = 0; i < count; ++i) {
mObservers[i]->Invoke();
}
}
};
And you're done with the hard work. Add an Event object any place you want listeners to subscribe. You'll probably want to expand this to allow removal of listeners, and perhaps to take a few function parameters, but the core is pretty much the same.

Select mutex or dummy mutex at runtime

I have a class that is shared between several projects, some uses of it are single-threaded and some are multi-threaded. The single-threaded users don't want the overhead of mutex locking, and the multi-threaded users don't want to do their own locking and want to be able to optionally run in "single-threaded mode." So I would like to be able to select between real and "dummy" mutexes at runtime.
Ideally, I would have a shared_ptr<something> and assign either a real or fake mutex object. I would then "lock" this without regard to what's in it.
unique_lock<something> guard(*mutex);
... critical section ...
Now there is a signals2::dummy_mutex but it does not share a common base class with boost::mutex.
So, what's an elegant way to select between a real mutex and a dummy mutex (either the one in signals2 or something else) without making the lock/guard code more complicated than the example above?
And, before you point out the alternatives:
I could select an implementation at compile time, but preprocessor macros are ugly and maintaining project configurations is painful for us.
Users of the class in a multi-threaded environment do not want to take on the responsibility of locking the use of the class rather than having the class do its own locking internally.
There are too many APIs and existing usages involved for a "thread-safe wrapper" to be a practical solution.
How about something like this?
It's untested but should be close to OK.
You might consider making the template class hold a value rather than a pointer if your mutexes support the right kinds of construction. Otherwise you could specialise the MyMutex class to get value behaviour.
Also it's not careful about copying or destruction... I leave that as an exercise for the reader ;) (shared_ptr, or storing a value rather than a pointer, would fix this)
Oh, and the code would be nicer using RAII rather than explicit lock/unlock... but that's a different question. I assume that's what the unique_lock in your code does?
struct IMutex
{
virtual ~IMutex(){}
virtual void lock()=0;
virtual bool try_lock()=0;
virtual void unlock()=0;
};
template<typename T>
class MyMutex : public IMutex
{
public:
MyMutex(T* t) : t_(t) {}
void lock() { t_->lock(); }
bool try_lock() { return t_->try_lock(); }
void unlock() { t_->unlock(); }
protected:
T* t_;
};
IMutex * createMutex()
{
if( isMultithreaded() )
{
return new MyMutex<boost::mutex>( new boost::mutex );
}
else
{
return new MyMutex<boost::signals2::dummy_mutex>( new boost::signals2::dummy_mutex );
}
}
int main()
{
IMutex * mutex = createMutex();
...
{
unique_lock<IMutex> guard( *mutex );
...
}
}
Since the two mutex classes signals2::dummy_mutex and boost::mutex don't share a common base class you could use something like "external polymorphism" to allow to them to be treated polymorphically. You'd then use them as locking strategies for a common mutex/lock interface. This allows you to avoid using "if" statements in the lock implementation.
NOTE: This is basically what Michael's proposed solution implements. I'd suggest going with his answer.
Have you ever heard about Policy-based Design?
You can define a Lock Policy interface, and the user may choose which policy she wishes. For ease of use, the "default" policy is specified using a compile-time variable.
#ifndef PROJECT_DEFAULT_LOCK_POLICY
#define PROJECT_DEFAULT_LOCK_POLICY TrueLock
#endif
template <class LP = PROJECT_DEFAULT_LOCK_POLICY>
class MyClass {};
This way, your users can choose their policies with a simple compile-time switch, and may override it one instance at a time ;)
This is my solution:
std::unique_lock<std::mutex> lock = dummy ?
std::unique_lock<std::mutex>(mutex, std::defer_lock) :
std::unique_lock<std::mutex>(mutex);
Is this not sufficient?
class SomeClass
{
public:
SomeClass(void);
~SomeClass(void);
void Work(bool isMultiThreaded = false)
{
if(isMultiThreaded)
{
// acquire mutex lock ...
DoSomething();
// release lock
}
else
{
DoSomething();
}
}
};
In general, a mutex is only needed if the resource is shared between multiple processes. If an instance of the object is unique for a (possibly multi-threaded) process, then a Critical Section is often more appropriate.
In Windows, the single-threaded implementation of a Critical Section is a dummy one. Not sure what platform you are using.
Just FYI, here's the implementation I ended up with.
I did away with the abstract base class, merging it with the no-op "dummy" implementation. Also note the shared_ptr-derived class with an implicit conversion operator. A little too tricky, I think, but it lets me use shared_ptr<IMutex> objects where I previously used boost::mutex objects with zero changes.
header file:
class Foo {
...
private:
struct IMutex {
virtual ~IMutex() { }
virtual void lock() { }
virtual bool try_lock() { return true; }
virtual void unlock() { }
};
template <typename T> struct MutexProxy;
struct MutexPtr : public boost::shared_ptr<IMutex> {
operator IMutex&() { return **this; }
};
typedef boost::unique_lock<IMutex> MutexGuard;
mutable MutexPtr mutex;
};
implementation file:
template <typename T>
struct Foo::MutexProxy : public IMutex {
virtual void lock() { mutex.lock(); }
virtual bool try_lock() { return mutex.try_lock(); }
virtual void unlock() { mutex.unlock(); }
private:
T mutex;
};
Foo::Foo(...) {
mutex.reset(single_thread ? new IMutex : new MutexProxy<boost::mutex>);
}
Foo::Method() {
MutexGuard guard(mutex);
}
Policy based Option:
class SingleThreadedPolicy {
public:
class Mutex {
public:
void Lock() {}
void Unlock() {}
bool TryLock() { return true; }
};
class ScopedGuard {
public:
ScopedGuard(Mutex& mutex) {}
};
};
class MultithreadingPolicy {
public:
class ScopedGuard;
class Mutex {
friend class ScopedGuard;
private:
std::mutex mutex_;
public:
void Lock() {
mutex_.lock();
}
void Unlock() {
mutex_.unlock();
}
bool TryLock() {
return mutex_.try_lock();
}
};
class ScopedGuard {
private:
std::lock_guard<std::mutex> lock_;
public:
ScopedGuard(Mutex& mutex) : lock_(mutex.mutex_) {}
};
};
Then it can be used as follows:
template<class ThreadingPolicy = SingleThreadedPolicy>
class MyClass {
private:
typedef typename ThreadingPolicy::Mutex Mutex;
typedef typename ThreadingPolicy::ScopedGuard ScopedGuard;
Mutex mutex_;
public:
void DoSomething(){
ScopedGuard guard(mutex_);
std::cout<<"Hello World"<<std::endl;
}
};