A very common scenario for a thread's callback is to inform the caller that it has finished its job. Here's a minimal example:
class task
{
public:
void operator()(std::function<void()>&& callback)
{
std::thread t
{
[c = std::move(callback)]{
std::this_thread::sleep_for(std::chrono::milliseconds{100});
c();
}
};
t.detach();
}
};
class processor
{
public:
void new_task()
{
auto& t = tasks.emplace_back();
t([this]{ if (true/*this object is still alive*/) finish_callback(); });
}
private:
void finish_callback()
{
// ...
}
private:
std::vector<task> tasks;
};
In such a scenario, we have to support the case where the child task outlives the parent/caller. Is there any common design pattern that allows us to do this?
Theoretically, we could use the shared_ptr + enable_shared_from_this + weak_ptr trio in such a case, but this forces us to always store the parent object on the heap under a shared_ptr. I would rather not have that limitation.
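For reference, here is a minimal sketch of that approach (assuming every processor is created via std::make_shared, and C++17 for weak_from_this()):
// Sketch of the shared_ptr/weak_ptr approach; requires processor to be
// owned by a shared_ptr, which is exactly the limitation described above.
class processor : public std::enable_shared_from_this<processor>
{
public:
    void new_task()
    {
        auto& t = tasks.emplace_back();
        t([weak = weak_from_this()]{
            if (auto self = weak.lock())   // parent still alive?
                self->finish_callback();
        });
    }
private:
    void finish_callback()
    {
        // ...
    }
    std::vector<task> tasks;
};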
Assuming we have the classical Base class and derived class like this
class B {
public:
virtual ~B() {
// calling it here is too late, see explanations
//common_pre_cleanup_function();
}
void common_pre_cleanup_function() { }
};
class D : public B {
public:
virtual ~D() {
// What if we forget to do this call in another derived class?
common_pre_cleanup_function();
}
};
How would you make sure that a function like common_pre_cleanup_function() is called in the destructor of every derived D, before the members of D are destroyed, without having to explicitly call it in every destructor implementation of a new D?
Background
In my current project we have a base class that implements certain parallelism and threading features and will eventually start a new thread that does the actual work.
In the destructor of this base class we wanted to make sure, that the thread is always stopped and joined so that it gets cleaned up properly.
However, derived classes may add members that are used by this thread. So when an object of the derived class is destroyed, those members are destroyed first, while the thread managed by the base class can still be running and may wrongfully access the already-destroyed members.
I'm aware that this isn't the smartest approach to the problem, and splitting the threading/parallelisation parts and the "actual work" parts into separate classes would probably be the much smarter idea. However, I'm interested in whether there are any approaches that don't involve an entire rewrite of the existing code base.
This code here is closer to our situation
class BackgroundTask {
public:
virtual ~BackgroundTask() {
// if we forget to call stop() in the derived classes, we will
// at this point have already destroyed any derived members
// while the thread might still run and access them; so how/where
// can we put this call?
//stop();
}
void stop() {
cancelFlag_.set();
thread_.join();
}
// more functions helping with Background tasks
private:
Thread thread_;
Condition cancelFlag_;
};
class MyTask : public BackgroundTask {
public:
virtual ~MyTask() {
// with the current case, we have to remember to call
// this function in all destructors in classes derived
// from BackgroundTask; that's what I want to avoid
stop();
}
private:
std::unique_ptr<MyClass> member;
};
Quite simply you don't. The best thing to do in this situation is to redesign how everything works to prevent this from being a problem.
But let's face it, in all likelihood you don't have the time and/or resources to achieve that. So your second-best option (in my opinion) is to ensure that any call to the destroyed members of the derived class kills your application immediately with a very clear error message.
If a system must fail, fail early.
You might do something like:
template <typename TaskImpl>
class Task final : public TaskImpl
{
static_assert(std::is_base_of_v<BackgroundTask, TaskImpl>, "TaskImpl must derive from BackgroundTask");
public:
virtual ~Task() { stop(); }
};
And then
class MyTaskImpl : public BackgroundTask
{
// ...
private:
std::unique_ptr<MyClass> member;
};
using MyTask = Task<MyTaskImpl>;
While I agree with the comments that the design is flawed...
Assuming that the objects are dynamically allocated, one solution is to make the destructors virtual and protected, and use a separate function to take care of calling the "pre-cleanup" before destroying the objects. For example;
class B
{
public:
void die()
{
common_pre_cleanup_function();
delete this;
};
protected:
virtual ~B() {};
private:
void common_pre_cleanup_function() { };
};
class D : public B
{
protected:
virtual ~D() {};
};
int main()
{
B *b = new D;
b->die();
}
This has a few limitations for the user of the class. In particular, behaviour is undefined if
the object is not created using a new expression;
any non-static member function of the object is called after calling die()
any non-static data member is accessed after calling die()
This also means that, if you maintain a set of objects (like a vector of pointers, B*) then it is necessary to remove the pointer from the list to ensure no usage of the object after it has died.
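For instance, a minimal sketch of that bookkeeping (the destroy() helper is mine, purely illustrative):
#include <algorithm>
#include <vector>

// Erase the pointer from the container first, then let the object
// destroy itself via die(); it must not be touched afterwards.
void destroy(std::vector<B*>& live_objects, B* victim)
{
    live_objects.erase(std::remove(live_objects.begin(), live_objects.end(), victim),
                       live_objects.end());
    victim->die();
}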
The protected destructors prevent a few things. Functions that are not members or friends of B or D cannot:
Create a B or a D of automatic storage duration
Use operator delete directly. For example, a statement delete b; in main() above will not compile. This also prevents destroying an object before calling the "pre-cleanup"
Edit: I realized this doesn't answer your question but I'll leave it here for reference.
As mentioned earlier, each object should be responsible for managing its own resources so your design is a bit flawed to begin with.
Consider the following example. The TaskRunner is responsible for firing up a thread in its constructor and shutting it down in its destructor (textbook RAII). The Task class specifies what to do during the lifetime of the task, through pure virtual inheritance.
#include <atomic>
#include <chrono>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
struct Task {
virtual void run( ) = 0;
virtual ~Task( ) {
}
};
class TaskRunner final {
std::unique_ptr<Task> task;
std::future<void> fut;
std::atomic<bool> terminate;
public:
TaskRunner(std::unique_ptr<Task>&& task)
: task {std::move(task)}
, terminate {false} {
fut = std::async(std::launch::async, [this] {
while(!terminate) {
this->task->run( );
}
this->task.reset( );
});
}
TaskRunner(TaskRunner&&) = delete;
TaskRunner& operator=(TaskRunner&&) = delete;
TaskRunner(const TaskRunner&) = delete;
TaskRunner& operator=(const TaskRunner&) = delete;
~TaskRunner( ) {
terminate = true;
fut.wait( ); // Block until cleanup is completed
std::cout << "~TaskRunner()" << std::endl;
}
};
struct MyTask : public Task {
int i = 0;
void
run( ) {
// Do important stuff here, don't block.
std::cout << "MyTask::run() " << i++ << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds {100});
}
~MyTask( ) override {
// Clean up stuff here, run() is guaranteed to never be run again
std::cout << "~MyTask()" << std::endl;
}
};
int
main( ) {
TaskRunner t {std::make_unique<MyTask>( )};
std::this_thread::sleep_for(std::chrono::seconds {1});
}
Output
MyTask::run() 0
MyTask::run() 1
MyTask::run() 2
MyTask::run() 3
MyTask::run() 4
MyTask::run() 5
MyTask::run() 6
MyTask::run() 7
MyTask::run() 8
MyTask::run() 9
~MyTask()
~TaskRunner()
Imagine the following situation:
class A {
public:
folly::Future<folly::Unit> fooA(std::function<void()> callback);
};
class B {
public:
void fooB() {
a_->fooA([this] { doSomethingCheap_(); }) /* Executed in thread 1 */
.via(exec_.get())
.then([this] { doSomethingExpensive_(); }); /* Executed in thread 2 */
}
private:
std::shared_ptr<folly::Executor> exec_;
std::shared_ptr<A> a_;
void doSomethingCheap_();
void doSomethingExpensive_();
};
If the B object b has been destroyed by the time we finish executing doSomethingCheap_(), we will get a segfault. We could probably hold a weak_ptr<B> in class A, but this approach is not extensible when we want to use class A not only in class B but also in some class C, ...
What is the best way to avoid this?
I'm not familiar with folly or what synchronization mechanisms you're using, but it seems like you could maybe use a mutex-guarded bool that you capture and pass to the lambda calling doSomethingExpensive_ - this would be a "poor man's join": lock the mutex and then flip the bool to true. Alternatively, you could use something like absl::Notification (since that's what I know).
#include "absl/synchronization/notification.h"
class A {
public:
folly::Future<folly::Unit> fooA(std::function<void()> callback);
};
class B {
public:
void fooB() {
a_->fooA([this] { doSomethingCheap_(); }) /* Executed in thread 1 */
.via(exec_.get())
.then([this] {
doSomethingExpensive_();
finished_.Notify();
}); /* Executed in thread 2 */
finished_.WaitForNotification();
}
private:
std::shared_ptr<folly::Executor> exec_;
std::shared_ptr<A> a_;
absl::Notification finished_;
void doSomethingCheap_();
void doSomethingExpensive_();
};
Ultimately, joining on the threads seems like the right way to go, I'm just not sure what is exposed in folly.
I'm looking to run a thread in a base class that constantly calls a pure virtual method of that class, which is overridden by a derived class.
For starting the thread I have no issue, as I can call a HasInitalized() function after the object has been constructed. Therefore the thread is started after the class is fully constructed.
However, as the class's lifetime is managed by a shared_ptr, I cannot call a similar method for stopping the thread. If I stop the thread in the destructor, it will cause a seg-fault, as the derived class is destroyed before the base and the thread will therefore try to call a function that's no longer there.
I'm aware I can call a stop function from the derived class, but I would rather not have to do that in every derived class.
Is there a way around this?
Example:
#include "boost/thread.hpp"
class BaseClass
{
public:
BaseClass()
{
}
// Start the thread
void Start()
{
_thread = boost::thread(&BaseClass::ThreadLoop, this);
}
virtual ~BaseClass()
{
_thread.interrupt();
_thread.join();
}
private:
// Will loop until thread is interupted
void ThreadLoop()
{
try
{
while(true)
{
DoSomethingInDerivedClass();
boost::this_thread::interruption_point();
}
}
catch(...)
{
}
}
boost::thread _thread;
protected:
virtual void DoSomethingInDerivedClass() = 0;
};
class DerivedClass : public BaseClass
{
public:
DerivedClass()
{
}
~DerivedClass()
{
// This gets called before base class destructor.
}
protected:
void DoSomethingInDerivedClass();
};
I don't think you will be able to avoid repeating the call to join the thread in the destructor of each derived class. If a thread depends on a non-static object o, then it's a good idea to have a clear ownership relation to guarantee the validity of the object:
The thread should own o and the destruction of o will be handled by the destructor of the thread object, after the joining.
o should own the thread and should join the thread in its own destructor.
You've chosen the second approach, except that the thread depends on the derived object while the derived object owns the thread not directly, but through its sub-object (the base object). Since the thread depends on the derived object, it must be joined in the derived object's destructor.
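For illustration, a sketch of that first option, with a hypothetical protected Stop() helper added to the question's BaseClass (it is not in the original code):
#include <boost/thread.hpp>

class BaseClass
{
public:
    void Start() { _thread = boost::thread(&BaseClass::ThreadLoop, this); }
    virtual ~BaseClass() { Stop(); }   // too late for derived members if a derived dtor forgot
protected:
    void Stop()
    {
        _thread.interrupt();
        if (_thread.joinable()) _thread.join();   // safe to call more than once
    }
    virtual void DoSomethingInDerivedClass() = 0;
private:
    void ThreadLoop()
    {
        try {
            while (true) {
                DoSomethingInDerivedClass();
                boost::this_thread::interruption_point();
            }
        } catch (...) {}
    }
    boost::thread _thread;
};

class DerivedClass : public BaseClass
{
public:
    ~DerivedClass() { Stop(); }   // must be repeated in every derived class
protected:
    void DoSomethingInDerivedClass() override {}
};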
You should separate the two behaviours: a class to run and join the thread, the base class for the functional hierarchy.
class Runner {
public:
explicit Runner(std::shared_ptr<BaseClass> ptr) : m_ptr(ptr) {
m_thread = boost::thread(&Runner::ThreadLoop, this);
}
~Runner() {
m_thread.interrupt();
m_thread.join();
}
private:
void ThreadLoop() {
try {
while(true) {
m_ptr->DoSomethingInDerivedClass(); // note: requires DoSomethingInDerivedClass() to be accessible (e.g. public) to Runner
boost::this_thread::interruption_point();
}
} catch(...) {
}
}
std::shared_ptr<BaseClass> m_ptr;
boost::thread m_thread;
};
My recommendation would be to use a weak_ptr to know when the object's lifetime is over:
The factory instantiates the (derived) object and stores it in a shared_ptr
The factory instantiates the watchdog class and passes it a weak_ptr to the new object
The watchdog thread can now check if the weak pointer is expired each time it needs to access it. When it is expired, the thread will terminate itself.
Here is an example (instead of a factory, I just used main):
#include <memory>
#include <thread>
class BaseClass
{
public:
virtual ~BaseClass() = default;
virtual void DoSomethingInDerivedClass() = 0;
};
class DerivedClass : public BaseClass
{
public:
void DoSomethingInDerivedClass() override {}
};
// Will loop until weak_base expires
void ThreadLoop(std::weak_ptr<BaseClass> weak_base)
{
try
{
while (true)
{
std::shared_ptr<BaseClass> base = weak_base.lock();
if (base) {
base->DoSomethingInDerivedClass();
}
else {
break; // Base is gone. Terminate thread.
}
}
}
catch (...)
{
}
}
int main()
{
std::shared_ptr<DerivedClass> obj = std::make_shared<DerivedClass>();
// Pass a weak_ptr by value so the detached thread never touches the local obj.
std::weak_ptr<BaseClass> weak_obj = obj;
std::thread([weak_obj] { ThreadLoop(weak_obj); }).detach();
return 0;
}
Note that there is no need to explicitly stop the thread, since it will stop itself as soon as it detects that the object's lifetime is over. On the other hand, note that the thread may slightly outlive the lifetime of the watched object, which could be considered bad design (it could e.g. defer program termination). I guess one could work around that by joining with the thread in the base class destructor, after signalling that it should terminate (if not already terminated).
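For example, a sketch of one way to get that effect (mine, not from the answer above): instead of joining in the base class destructor, the code that owns the shared_ptr keeps the thread handle and joins after releasing the object, reusing ThreadLoop and DerivedClass from above.
int main()
{
    auto obj = std::make_shared<DerivedClass>();
    // Keep the thread joinable instead of detaching it.
    std::thread watchdog([weak = std::weak_ptr<BaseClass>(obj)] { ThreadLoop(weak); });
    obj.reset();        // the object's lifetime ends here; the weak_ptr expires
    watchdog.join();    // ThreadLoop notices the expired weak_ptr and returns
    return 0;
}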
I'm using some API to get a notification. Something like:
NOTIF_HANDLE register_for_notif(CALLBACK func, void* context_for_callback);
void unregister_for_notif(NOTIF_HANDLE notif_to_delete);
I want to wrap it in some decent RAII class that will set an event upon receiving the notification. My problem is how to synchronize it. I wrote something like this:
class NotifClass
{
public:
NotifClass(std::shared_ptr<MyEvent> event):
_event(event),
_notif_handle(register_for_notif(my_notif_callback, (void*)this))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context) // static so it can be passed as a plain C callback
{
((NotifClass*)context)->_event->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NOTIF_HANDLE _notif_handle;
};
But I'm worried about the callback being called during construction/destruction (maybe in this specific example shared_ptr will be fine with it, but with other constructed classes it may not be the same).
I will say again - I don't want a very specific solution for this very specific class, but a more general solution for RAII when passing a callback.
Your concerns about synchronisation are a little misplaced.
To summarise your problem, you have some library with which you can register a callback function and (via the void* pointer, or similar) some resources upon which the function acts via a register() function. This same library also provides an unregister() function.
Within your code you neither can, nor should attempt to, protect against the possibility that the library calls your callback function after, or while, it is being unregistered via the unregister() function: it is the library's responsibility to ensure that the callback cannot be triggered while it is being unregistered or after it has been unregistered. The library should worry about synchronisation, mutexes and the rest of that gubbins, not you.
The two responsibilities of your code are to:
ensure you construct the resources upon which the callback acts before registering it, and
ensure that you unregister the callback before you destroy the resources upon which the callback acts.
This inverse order of construction vs destruction is exactly what C++ does with its member variables, and why compilers warn you when you initialise them in the 'wrong' order.
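For illustration (my own example; GCC and Clang report this with -Wreorder):
// Members are constructed in declaration order, so `a` is initialized
// before `b` despite the order written in the initializer list.
struct S {
    int a;
    int b;
    S() : b(1), a(b + 1) {}   // reads b before it has been initialized
};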
In terms of your example, you need to ensure that 1) register_for_notif() is called after the shared pointer is initialised and 2) unregister_for_notif() is called before the std::shared_ptr (or whatever) is destroyed.
The key to the latter is understanding the order of destruction in a destructor. For a recap, check out the "Destruction sequence" section on cppreference.com, which says:
First, the body of the destructor is executed;
then the compiler calls the destructors for all non-static non-variant members of the class, in reverse order of declaration.
Your example code is, therefore "safe" (or as safe as it can be), because unregister_for_notif() is called in the destructor body, prior to the destruction of the member variable std::shared_ptr<MyEvent> _event.
An alternative (and in some sense more clearly RAII adherent) way to do this would be to separate the notification handle from the resources upon which the callback function operates by splitting it into its own class. E.g. something like:
class NotifHandle {
public:
NotifHandle(void (*callback_fn)(void *), void * context)
: _handle(register_for_notif(callback_fn, context)) {}
~NotifHandle() { unregister_for_notif(_handle); }
private:
NOTIF_HANDLE _handle;
};
class NotifClass {
public:
NotifClass(std::shared_ptr<MyEvent> event)
: _event(event),
_handle(my_notif_callback, (void*)this) {}
~NotifClass() {}
static void my_notif_callback(void* context) {
((NotifClass*)context)->_event->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NotifHandle _handle;
};
The important thing is the member variable declaration order: NotifHandle _handle is declared after the resource std::shared_ptr<MyEvent> _event, so the notification is guaranteed to be unregistered before the resource is destroyed.
You can do this with thread-safe accesses to a static container that holds pointers to your live instances. The RAII class constructor adds this to the container and the destructor removes it. The callback function checks the context against the container and returns if it is not present. It will look something like this (not tested):
class NotifyClass {
public:
NotifyClass(const std::shared_ptr<MyEvent>& event)
: event_(event) {
{
// Add to thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.insert(this);
}
// Register the callback at the end of the constructor to
// ensure initialization is complete.
handle_ = register_for_notif(&callback, this);
}
~NotifyClass() {
unregister_for_notif(handle_);
{
// Remove from thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.erase(this);
}
// Guaranteed not to be called from this point so
// further destruction is safe.
}
static void callback(void *context) {
std::shared_ptr<MyEvent> event;
{
// Ignore if the instance does not exist.
std::lock_guard<std::mutex> lock(mutex_);
if (instances_.count(context) == 0)
return;
NotifyClass *instance = static_cast<NotifyClass*>(context);
event = instance->event_;
}
event->set_event();
}
// Rule of Three. Implement if desired.
NotifyClass(const NotifyClass&) = delete;
NotifyClass& operator=(const NotifyClass&) = delete;
private:
// Synchronized associative container of instances.
static std::mutex mutex_;
static std::unordered_set<void*> instances_;
const std::shared_ptr<MyEvent> event_;
NOTIF_HANDLE handle_;
};
Note that the callback increments the shared pointer and releases the lock on the container before using the shared pointer. This prevents a potential deadlock if triggering MyEvent could synchronously create or destroy a NotifyClass instance.
Technically, the above could fail because of address re-use. That is, if one NotifyClass instance is destroyed and a new instance is immediately created at the exact same memory address, then an API callback meant for the old instance conceivably could be delivered to the new instance. For certain usages, perhaps even most usages, this will not matter. If it does matter, then the static container keys must be made globally unique. This can be done by replacing the set with a map and passing the map key instead of a pointer to the API, e.g.:
class NotifyClass {
public:
NotifyClass(const std::shared_ptr<MyEvent>& event)
: event_(event) {
{
// Add to thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
key_ = nextKey_++;
instances_[key_] = this;
}
// Register the callback at the end of the constructor to
// ensure initialization is complete.
handle_ = register_for_notif(&callback, reinterpret_cast<void *>(key_));
}
~NotifyClass() {
unregister_for_notif(handle_);
{
// Remove from thread-safe collection of instances.
std::lock_guard<std::mutex> lock(mutex_);
instances_.erase(key_);
}
// Guaranteed not to be called from this point so
// further destruction is safe.
}
static void callback(void *context) {
// Ignore if the instance does not exist.
std::shared_ptr<MyEvent> event;
{
std::lock_guard<std::mutex> lock(mutex_);
uintptr_t key = reinterpret_cast<uintptr_t>(context);
auto i = instances_.find(key);
if (i == instances_.end())
return;
NotifyClass *instance = i->second;
event = instance->event_;
}
event->set_event();
}
// Rule of Three. Implement if desired.
NotifyClass(const NotifyClass&) = delete;
NotifyClass& operator=(const NotifyClass&) = delete;
private:
// Synchronized associative container of instances.
static std::mutex mutex_;
static uintptr_t nextKey_;
static std::unordered_map<uintptr_t, NotifyClass*> instances_;
const std::shared_ptr<MyEvent> event_;
NOTIF_HANDLE handle_;
uintptr_t key_;
};
There are two common general solutions for RAII callbacks. One is a common interface accessed through a shared_ptr to your object. The other is std::function.
Using a common interface allows for one smart_ptr to control the lifetime of all the callbacks for an object. This is similar to the observer pattern.
class Observer
{
public:
virtual ~Observer() {}
virtual void Callback1() = 0;
virtual void Callback2() = 0;
};
class MyEvent
{
public:
void SignalCallback1()
{
const auto lock = m_spListener.lock();
if (lock) lock->Callback1();
}
void SignalCallback2()
{
const auto lock = m_spListener.lock();
if (lock) lock->Callback2();
}
void RegisterCallbacks(std::shared_ptr<Observer> spListener)
{
m_spListener = spListener;
}
private:
std::weak_ptr<Observer> m_spListener;
};
class NotifClass : public Observer
{
public:
void Callback1() { std::cout << "NotifClass 1" << std::endl; }
void Callback2() { std::cout << "NotifClass 2" << std::endl; }
};
Example use.
MyEvent source;
{
auto notif = std::make_shared<NotifClass>();
source.RegisterCallbacks(notif);
source.SignalCallback1(); // Prints NotifClass 1
}
source.SignalCallback2(); // Doesn't print NotifClass 2
If you use a C-style callback, you have to worry about both the address of the object and the member function to call. std::function can encapsulate these two things nicely with a lambda. This allows you to manage the lifetime of each callback individually.
class MyEvent
{
public:
void SignalCallback()
{
const auto lock = m_spListener.lock();
if (lock) (*lock)();
}
void RegisterCallback(std::shared_ptr<std::function<void(void)>> spListener)
{
m_spListener = spListener;
}
private:
std::weak_ptr<std::function<void(void)>> m_spListener;
};
class NotifClass
{
public:
void Callback() { std::cout << "NotifClass 1" << std::endl; }
};
Example use.
MyEvent source;
// This doesn't need to be a smart pointer.
auto notif = std::make_shared<NotifClass>();
{
auto callback = std::make_shared<std::function<void(void)>>(
[notif]()
{
notif->Callback();
});
notif = nullptr; // note the callback already captured notif and will keep it alive
source.RegisterCallback(callback);
source.SignalCallback(); // Prints NotifClass 1
}
source.SignalCallback(); // Doesn't print NotifClass 1
AFAICT, you are concerned that my_notif_callback can be called in parallel to the destructor and context can be a dangling pointer. That is a legitimate concern and I don't think you can solve it with a simple locking mechanism.
Instead, you probably need to use a combination of shared and weak pointers to avoid such dangling pointers. To solve your issue, for example, you can store the event in widget which is a shared_ptr and then you can create a weak_ptr to the widget and pass it as a context to register_for_notif.
In other words, NotifClass has a shared_ptr to the Widget and the context is a weak_ptr to the Widget. If you can't lock the weak_ptr, the class has already been destructed:
class NotifClass
{
public:
NotifClass(const std::shared_ptr<MyEvent>& event):
_widget(std::make_shared<Widget>(event)),
// Note: the weak_ptr allocated with new here is never deleted;
// delete it after unregister_for_notif() if the leak matters.
_notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<Widget>(_widget)))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context)
{
auto ptr = ((std::weak_ptr<Widget>*)context)->lock();
// If destructed, do not set the event.
if (!ptr) {
return;
}
ptr->_event->set_event();
}
private:
struct Widget {
Widget(const std::shared_ptr<MyEvent>& event)
: _event(event) {}
std::shared_ptr<MyEvent> _event;
};
std::shared_ptr<Widget> _widget;
NOTIF_HANDLE _notif_handle;
};
Note that any functionality you want to add to your NotifClass should actually go into Widget. If you don't have such extra functionalities, you can skip the Widget indirection and use a weak_ptr to event as the context:
class NotifClass
{
public:
NotifClass(const std::shared_ptr<MyEvent>& event):
_event(event),
// (Same note as above: the heap-allocated weak_ptr is never deleted here.)
_notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<MyEvent>(event)))
// initialize some other stuff
{
// Initialize some more stuff
}
~NotifClass()
{
unregister_for_notif(_notif_handle);
}
static void my_notif_callback(void* context)
{
auto ptr = ((std::weak_ptr<MyEvent>*)context)->lock();
// If destructed, do not set the event.
if (!ptr) {
return;
}
ptr->set_event();
}
private:
std::shared_ptr<MyEvent> _event;
NOTIF_HANDLE _notif_handle;
};
Make certain that the callback object is fully constructed before registering it. That means making the callback object a separate class and the registration/deregistration wrapper another separate class.
Then you can chain both classes into a member or base class relationship.
struct A
{
CCallBackObject m_sCallback;
CRegistration m_sRegistration;
A(void)
:m_sCallback(),
m_sRegistration(&m_sCallback)
{
}
};
As an additional benefit, you can reuse the register/unregister wrapper...
If the callback can happen on another thread, I would redesign this software in order to avoid the problem.
E.g. one could make the shutdown of the main thread (i.e. the destruction of this object) wait until all worker threads are shut down/finished.
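A minimal sketch of that idea (the Owner/startWorker names are mine, purely illustrative):
#include <thread>
#include <vector>

// The owning object blocks in its destructor until every worker has finished.
class Owner {
public:
    ~Owner() {
        for (std::thread& t : workers_)
            if (t.joinable())
                t.join();   // destruction waits for all workers
    }
    void startWorker() {
        workers_.emplace_back([this] {
            // work that may call back into *this; *this cannot be destroyed
            // before this thread has been joined
        });
    }
private:
    std::vector<std::thread> workers_;
};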
In many cases in my application I need class A to register itself as a listener on class B to receive a notification when something happens. In every case I define a separate interface that B implements and A can call. So, for example, A will have the following method:
void registerSomeEventListener(SomeEventListener l);
Also, in many cases, B will need to support multiple listeners, so I reimplement the registration and notifyAll logic every time.
One generic way I know of is to have some EventListener (implemented by A) and EventNotifier (implemented by B) classes. In this case each event is identified by a string and A implements the method:
void eventNotified(string eventType);
I think this is not a good solution. It will result in many if-else statements in case A listens to several events and might result in bugs when event names are changed only in the listener or the notifier.
I wonder what is the correct way to implement the observer pattern in C++?
Take a look at boost::signals2. It provides a generic mechanism to define "signals" where other objects can register. The signal owner can then notify observers by "firing" the signal. Instead of register-methods, the subject defines signals as members which then keep track of connected observers and notify them when initiated. The signals are statically typed and accept every function with the matching signature. This has the advantage that there is no need for inheritance and thus a weaker coupling than the traditional observer inheritance hierarchy.
class Subject {
public:
void setData(int x) {
data_ = x;
dataChanged(x);
}
boost::signals2::signal<void (int)> dataChanged;
private:
int data_;
};
class Observer {
public:
Observer(Subject& s) {
c_ = s.dataChanged.connect([&](int x) {this->processData(x);});
}
~Observer() {
c_.disconnect();
}
private:
void processData(int x) {
std::cout << "Updated: " << x << std::endl;
}
boost::signals2::connection c_;
};
int main() {
Subject s;
Observer o1(s);
Observer o2(s);
s.setData(42);
return 0;
}
In this example, the subject holds some int data and notifies all registered observers when the data is changed.
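As a side note (not part of the original answer): boost::signals2 also provides scoped_connection, which disconnects automatically in its destructor, so the manual disconnect() in ~Observer() can be dropped:
// Variant of the Observer above using scoped_connection.
class Observer {
public:
    explicit Observer(Subject& s)
        : c_(s.dataChanged.connect([this](int x) { processData(x); })) {}
private:
    void processData(int x) { std::cout << "Updated: " << x << std::endl; }
    boost::signals2::scoped_connection c_;   // released on destruction
};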
Let's say you have a generic event-firing object:
class base_invoke {
public:
virtual ~base_invoke () {};
virtual void Invoke() = 0;
};
But you want to fire events on different types of objects, so you derive from base:
template<class C>
class methodWrapper : public base_invoke {
public:
typedef void (C::*pfMethodWrapperArgs0)();
C * mInstance;
pfMethodWrapperArgs0 mMethod;
public:
methodWrapper(C * instance, pfMethodWrapperArgs0 meth)
: mInstance(instance)
{
mMethod = meth;
}
virtual void Invoke () {
(mInstance->*mMethod)();
}
};
Now if you create a wrapper for a collection of pointers to base_invoke you can call each firing object and signal whichever method on whichever class you'd like.
You can also turn this collection class into a factory for the firing objects, to simplify the work.
class Event {
protected:
std::vector<base_invoke *> mObservers;
public:
// class method observers
template<class C>
void Add (C * classInstance, typename methodWrapper<C>::pfMethodWrapperArgs0 meth) {
mObservers.push_back(new methodWrapper<C>(classInstance, meth));
}
void Invoke () {
size_t count = mObservers.size();
for (size_t i = 0; i < count; ++i) {
mObservers[i]->Invoke();
}
}
};
And you're done with the hard work. Add an Event object anywhere you want listeners to subscribe. You'll probably want to expand this to allow removal of listeners, and perhaps to take a few function parameters, but the core is pretty much the same.
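A minimal usage sketch (the Button type is mine, purely illustrative; assumes #include <vector> for the observer collection):
struct Button {
    void onClick() { /* react to the event */ }
};

int main() {
    Button button;
    Event clicked;
    clicked.Add(&button, &Button::onClick);   // subscribe a member function
    clicked.Invoke();                         // notify every listener
    return 0;
}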