Using RAII for callback registration in C++

I'm using some API to get a notification. Something like:
NOTIF_HANDLE register_for_notif(CALLBACK func, void* context_for_callback);
void unregister_for_notif(NOTIF_HANDLE notif_to_delete);
I want to wrap it in some decent RAII class that will set an event upon receiving the notification. My problem is how to synchronize it. I wrote something like this:
class NotifClass
{
public:
    NotifClass(std::shared_ptr<MyEvent> event):
        _event(event),
        _notif_handle(register_for_notif(my_notif_callback, (void*)this))
        // initialize some other stuff
    {
        // Initialize some more stuff
    }
    ~NotifClass()
    {
        unregister_for_notif(_notif_handle);
    }
    static void my_notif_callback(void* context)
    {
        ((NotifClass*)context)->_event->set_event();
    }
private:
    std::shared_ptr<MyEvent> _event;
    NOTIF_HANDLE _notif_handle;
};
But I'm worried about the callback being called during construction/destruction (maybe in this specific example shared_ptr will be fine with it, but with other member types it might not be).
I will say again - I don't want a very specific solution for this very specific class, but a more general solution for RAII when passing a callback.

Your concerns about synchronisation are a little misplaced.
To summarise your problem, you have some library with which you can register a callback function and (via the void* pointer, or similar) some resources upon which the function acts via a register() function. This same library also provides an unregister() function.
Within your code you neither can nor should attempt to protect against the possibility that the library calls your callback function while, or after, it is being unregistered via the unregister() function: it is the library's responsibility to ensure that the callback cannot be triggered while it is being, or after it has been, unregistered. The library should worry about synchronisation, mutexes and the rest of that gubbins, not you.
The two responsibilities of your code are to:
ensure you construct the resources upon which the callback acts before registering it, and
ensure that you unregister the callback before you destroy the resources upon which the callback acts.
This inverse order of construction vs destruction is exactly what C++ does with its member variables, and why compilers warn you when you initialise them in the 'wrong' order.
In terms of your example, you need to ensure that 1) register_for_notif() is called after the shared pointer is initialised and 2) unregister_for_notif() is called before the std::shared_ptr (or whatever) is destroyed.
The key to the latter is understanding the order of destruction in a destructor. For a recap, check out the "Destruction sequence" section of the cppreference.com page on destructors:
First, the body of the destructor is executed;
then the compiler calls the destructors for all non-static non-variant members of the class, in reverse order of declaration.
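As a tiny standalone illustration of that ordering (the class and member names here are invented for the example):

#include <iostream>

struct Member
{
    const char* name;
    explicit Member(const char* n) : name(n) {}
    ~Member() { std::cout << "destroying member " << name << std::endl; }
};

struct Holder
{
    Member a{"a"};   // declared first, destroyed last
    Member b{"b"};   // declared second, destroyed first
    ~Holder() { std::cout << "destructor body runs first" << std::endl; }
};

int main()
{
    Holder h;
}   // prints: destructor body runs first, destroying member b, destroying member a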
Your example code is, therefore "safe" (or as safe as it can be), because unregister_for_notif() is called in the destructor body, prior to the destruction of the member variable std::shared_ptr<MyEvent> _event.
An alternative (and in some sense more clearly RAII adherent) way to do this would be to separate the notification handle from the resources upon which the callback function operates by splitting it into its own class. E.g. something like:
class NotifHandle {
public:
    NotifHandle(void (*callback_fn)(void *), void * context)
        : _handle(register_for_notif(callback_fn, context)) {}
    ~NotifHandle() { unregister_for_notif(_handle); }
private:
    NOTIF_HANDLE _handle;
};
class NotifClass {
public:
    NotifClass(std::shared_ptr<MyEvent> event)
        : _event(event),
          _handle(my_notif_callback, (void*)this) {}
    ~NotifClass() {}
    static void my_notif_callback(void* context) {
        ((NotifClass*)context)->_event->set_event();
    }
private:
    std::shared_ptr<MyEvent> _event;
    NotifHandle _handle;
};
The important thing is the member variable declaration order: NotifHandle _handle is declared after the resource std::shared_ptr<MyEvent> _event, so the notification is guaranteed to be unregistered before the resource is destroyed.
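Hypothetical usage, just to make the ordering visible (MyEvent and the notification API are the ones assumed in the question):

{
    auto event = std::make_shared<MyEvent>();
    NotifClass notif(event);
    // ... elsewhere, wait on *event ...
}   // ~NotifClass: _handle is destroyed first, unregistering the callback,
    // and only then is _event released, so the callback never sees a dead MyEvent.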

You can do this with thread-safe accesses to a static container that holds pointers to your live instances. The RAII class constructor adds this to the container and the destructor removes it. The callback function checks the context against the container and returns if it is not present. It will look something like this (not tested):
class NotifyClass {
public:
    NotifyClass(const std::shared_ptr<MyEvent>& event)
        : event_(event) {
        {
            // Add to thread-safe collection of instances.
            std::lock_guard<std::mutex> lock(mutex_);
            instances_.insert(this);
        }
        // Register the callback at the end of the constructor to
        // ensure initialization is complete.
        handle_ = register_for_notif(&callback, this);
    }
    ~NotifyClass() {
        unregister_for_notif(handle_);
        {
            // Remove from thread-safe collection of instances.
            std::lock_guard<std::mutex> lock(mutex_);
            instances_.erase(this);
        }
        // Guaranteed not to be called from this point so
        // further destruction is safe.
    }
    static void callback(void *context) {
        std::shared_ptr<MyEvent> event;
        {
            // Ignore if the instance does not exist.
            std::lock_guard<std::mutex> lock(mutex_);
            if (instances_.count(context) == 0)
                return;
            NotifyClass *instance = static_cast<NotifyClass*>(context);
            event = instance->event_;
        }
        event->set_event();
    }
    // Rule of Three. Implement if desired.
    NotifyClass(const NotifyClass&) = delete;
    NotifyClass& operator=(const NotifyClass&) = delete;
private:
    // Synchronized associative container of instances.
    static std::mutex mutex_;
    static std::unordered_set<void*> instances_;
    const std::shared_ptr<MyEvent> event_;
    NOTIF_HANDLE handle_;
};
Note that the callback copies the shared pointer (incrementing its reference count) and releases the lock on the container before using it. This prevents a potential deadlock if triggering MyEvent could synchronously create or destroy a NotifyClass instance.
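One detail the sketch leaves out: mutex_ and instances_ are static data members, so they still need a definition in exactly one translation unit (or can be declared inline in C++17), along the lines of:

std::mutex NotifyClass::mutex_;
std::unordered_set<void*> NotifyClass::instances_;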
Technically, the above could fail because of address re-use. That is, if one NotifyClass instance is destroyed and a new instance is immediately created at the exact same memory address, then an API callback meant for the old instance conceivably could be delivered to the new instance. For certain usages, perhaps even most usages, this will not matter. If it does matter, then the static container keys must be made globally unique. This can be done by replacing the set with a map and passing the map key instead of a pointer to the API, e.g.:
class NotifyClass {
public:
    NotifyClass(const std::shared_ptr<MyEvent>& event)
        : event_(event) {
        {
            // Add to thread-safe collection of instances.
            std::lock_guard<std::mutex> lock(mutex_);
            key_ = nextKey_++;
            instances_[key_] = this;
        }
        // Register the callback at the end of the constructor to
        // ensure initialization is complete.
        handle_ = register_for_notif(&callback, reinterpret_cast<void *>(key_));
    }
    ~NotifyClass() {
        unregister_for_notif(handle_);
        {
            // Remove from thread-safe collection of instances.
            std::lock_guard<std::mutex> lock(mutex_);
            instances_.erase(key_);
        }
        // Guaranteed not to be called from this point so
        // further destruction is safe.
    }
    static void callback(void *context) {
        // Ignore if the instance does not exist.
        std::shared_ptr<MyEvent> event;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            uintptr_t key = reinterpret_cast<uintptr_t>(context);
            auto i = instances_.find(key);
            if (i == instances_.end())
                return;
            NotifyClass *instance = i->second;
            event = instance->event_;
        }
        event->set_event();
    }
    // Rule of Three. Implement if desired.
    NotifyClass(const NotifyClass&) = delete;
    NotifyClass& operator=(const NotifyClass&) = delete;
private:
    // Synchronized associative container of instances.
    static std::mutex mutex_;
    static uintptr_t nextKey_;
    static std::unordered_map<uintptr_t, NotifyClass*> instances_;
    const std::shared_ptr<MyEvent> event_;
    NOTIF_HANDLE handle_;
    uintptr_t key_;
};
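As with the previous sketch, the static members need out-of-line definitions somewhere; for this keyed variant that would be something like:

std::mutex NotifyClass::mutex_;
uintptr_t NotifyClass::nextKey_ = 1; // start at 1 so no key maps to a null context pointer
std::unordered_map<uintptr_t, NotifyClass*> NotifyClass::instances_;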

There are two common general solutions for RAII callbacks. One is a common interface held through a shared_ptr to your object. The other is std::function.
Using a common interface allows one smart pointer to control the lifetime of all the callbacks for an object. This is similar to the observer pattern.
class Observer
{
public:
    virtual ~Observer() {}
    virtual void Callback1() = 0;
    virtual void Callback2() = 0;
};

class MyEvent
{
public:
    void SignalCallback1()
    {
        const auto lock = m_spListener.lock();
        if (lock) lock->Callback1();
    }
    void SignalCallback2()
    {
        const auto lock = m_spListener.lock();
        if (lock) lock->Callback2();
    }
    void RegisterCallbacks(std::shared_ptr<Observer> spListener)
    {
        m_spListener = spListener;
    }
private:
    std::weak_ptr<Observer> m_spListener;
};

class NotifClass : public Observer
{
public:
    void Callback1() { std::cout << "NotifClass 1" << std::endl; }
    void Callback2() { std::cout << "NotifClass 2" << std::endl; }
};
Example use.
MyEvent source;
{
    auto notif = std::make_shared<NotifClass>();
    source.RegisterCallbacks(notif);
    source.SignalCallback1(); // Prints NotifClass 1
}
source.SignalCallback2(); // Doesn't print NotifClass 2
If you use a C-style callback, you have to worry about both the address of the object and the member function to call on it. std::function can encapsulate these two things nicely with a lambda. This allows you to manage the lifetime of each callback individually.
class MyEvent
{
public:
    void SignalCallback()
    {
        const auto lock = m_spListener.lock();
        if (lock) (*lock)();
    }
    void RegisterCallback(std::shared_ptr<std::function<void(void)>> spListener)
    {
        m_spListener = spListener;
    }
private:
    std::weak_ptr<std::function<void(void)>> m_spListener;
};

class NotifClass
{
public:
    void Callback() { std::cout << "NotifClass 1" << std::endl; }
};
Example use.
MyEvent source;
// This doesn't need to be a smart pointer.
auto notif = std::make_shared<NotifClass>();
{
    auto callback = std::make_shared<std::function<void(void)>>(
        [notif]()
        {
            notif->Callback();
        });
    notif = nullptr; // note the callback already captured notif and will keep it alive
    source.RegisterCallback(callback);
    source.SignalCallback(); // Prints NotifClass 1
}
source.SignalCallback(); // Doesn't print NotifClass 1

AFAICT, you are concerned that my_notif_callback can be called in parallel to the destructor and context can be a dangling pointer. That is a legitimate concern and I don't think you can solve it with a simple locking mechanism.
Instead, you probably need a combination of shared and weak pointers to avoid such dangling pointers. For example, you can store the event in a Widget held by a shared_ptr, then create a weak_ptr to that Widget and pass it as the context to register_for_notif.
In other words, NotifClass holds a shared_ptr to the Widget and the context is a weak_ptr to the Widget. If you can't lock the weak_ptr, the class has already been destructed:
class NotifClass
{
public:
    NotifClass(const std::shared_ptr<MyEvent>& event):
        _widget(std::make_shared<Widget>(event)),
        // NB: this heap-allocated weak_ptr is never freed in this example.
        _notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<Widget>(_widget)))
        // initialize some other stuff
    {
        // Initialize some more stuff
    }
    ~NotifClass()
    {
        unregister_for_notif(_notif_handle);
    }
    static void my_notif_callback(void* context)
    {
        auto ptr = ((std::weak_ptr<Widget>*)context)->lock();
        // If destructed, do not set the event.
        if (!ptr) {
            return;
        }
        ptr->_event->set_event();
    }
private:
    struct Widget {
        Widget(const std::shared_ptr<MyEvent>& event)
            : _event(event) {}
        std::shared_ptr<MyEvent> _event;
    };
    std::shared_ptr<Widget> _widget;
    NOTIF_HANDLE _notif_handle;
};
Note that any functionality you want to add to your NotifClass should actually go into Widget. If you don't need any such extra functionality, you can skip the Widget indirection and use a weak_ptr to the event as the context:
class NotifClass
{
public:
    NotifClass(const std::shared_ptr<MyEvent>& event):
        _event(event),
        // NB: this heap-allocated weak_ptr is never freed in this example.
        _notif_handle(register_for_notif(my_notif_callback, (void*)new std::weak_ptr<MyEvent>(event)))
        // initialize some other stuff
    {
        // Initialize some more stuff
    }
    ~NotifClass()
    {
        unregister_for_notif(_notif_handle);
    }
    static void my_notif_callback(void* context)
    {
        auto ptr = ((std::weak_ptr<MyEvent>*)context)->lock();
        // If destructed, do not set the event.
        if (!ptr) {
            return;
        }
        ptr->set_event();
    }
private:
    std::shared_ptr<MyEvent> _event;
    NOTIF_HANDLE _notif_handle;
};

Make certain that the callback object is fully constructed before registering it. That means: make the callback object a separate class and the registration/deregistration wrapper a separate class.
Then you can chain both classes into a member or base class relationship.
struct A
{
    CCallBackObject m_sCallback;
    CRegistration m_sRegistration;

    A(void)
        : m_sCallback(),
          m_sRegistration(&m_sCallback)
    {
    }
};
As an additional benefit, you can reuse the register/unregister wrapper...
If the callback could happen in another thread, I would redesign this software in order to avoid this.
E.g. one could make the shutdown of the main thread (e.g. the destruction of this object) wait until all worker threads have shut down or finished.
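A minimal sketch of that idea, assuming the owning object also owns the worker thread (all names here are invented for illustration, not taken from the answer above):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for whatever object the worker's callback touches.
struct CallbackState
{
    void on_notify() { std::cout << "notified\n"; }
};

struct Owner
{
    CallbackState m_state;              // constructed first, destroyed last
    std::atomic<bool> m_stop{false};
    std::thread m_worker;

    Owner()
        : m_worker([this] {
              while (!m_stop)
              {
                  m_state.on_notify();  // safe: the thread is joined before m_state dies
                  std::this_thread::sleep_for(std::chrono::milliseconds(10));
              }
          })
    {
    }

    ~Owner()
    {
        m_stop = true;    // ask the worker to finish
        m_worker.join();  // destruction waits for the worker thread
    }
};

int main()
{
    Owner o;
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
}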

Related

How to check if the caller still exists in a task callback

A very common scenario for a thread's callback is to inform the caller that it has finished its job. Here's a minimal example:
class task
{
public:
    void operator()(std::function<void()>&& callback)
    {
        std::thread t
        {
            [c = std::move(callback)]{
                std::this_thread::sleep_for(std::chrono::milliseconds{100});
                c();
            }
        };
        t.detach();
    }
};

class processor
{
public:
    void new_task()
    {
        auto& t = tasks.emplace_back();
        t([this]{ if (true/*this object still alives*/) finish_callback(); });
    }
private:
    void finish_callback()
    {
        // ...
    }
private:
    std::vector<task> tasks;
};
In such a scenario, we have to support the case where the child task outlives the parent/caller. Is there any common design pattern that allows us to do this?
Theoretically, we could use the shared_ptr + enable_shared_from_this + weak_ptr trio in such a case, but this forces us to always store the parent object on the heap under a shared_ptr. I would rather not have such a limitation.
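For reference, a minimal sketch of the weak_ptr approach the question mentions, reusing the task class above and C++17's weak_from_this(); it assumes processor is indeed created through a shared_ptr, which is exactly the limitation being complained about:

class processor : public std::enable_shared_from_this<processor>
{
public:
    void new_task()
    {
        auto& t = tasks.emplace_back();
        // Capture a weak_ptr: the callback fires only if the processor
        // is still alive when the task finishes.
        t([weak = weak_from_this()]{
            if (auto self = weak.lock())
                self->finish_callback();
        });
    }
private:
    void finish_callback()
    {
        // ...
    }
    std::vector<task> tasks;
};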

In C++, can two other different shared objects access a Singleton from a third shared object?

I have an application in C++ which loads most of its code from two or more plugins (each with at least one thread), through shared objects. I use the following code to load the plugins:
pluginHandle = dlopen(fileName, RTLD_NOW|RTLD_GLOBAL);
init_t* init = (init_t*) dlsym(pluginHandle, "init"); // init should return an instance of the plugin's class
plugin = init();
I arrived to the point to which I need two of those plugins to start adding data to a common Queue. As the application does not allow for communication between both plugins without changing the code in the application itself (a point we are trying to avoid), I think I found a way to solve that: a third plugin, which includes a singleton class with a thread-safe Queue.
I would then recompile and link both plugins against the library, and use getInstance() to get the singleton and start adding tasks to the queue.
Is that a safe implementation? Will the singleton Queue work?
A dynamic library (shared object), which includes a singleton class with a thread-safe Queue.
Singletons are used when you want to constrain a class to be instantiated only once. That's not what you want: you want all your plugins to work on a particular instance of a class. There is no "only one can live" requirement here.
A thread-safe singleton in C++11 using Meyers' pattern may look like this:
class Singleton
{
private:
    Singleton() {}
public:
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
    static Singleton& get_instance()
    {
        static Singleton s;
        return s;
    }
};
Default constructor is declared private, and copy/assignment operations are deleted to avoid multiple instances.
You need something simpler: a function that always returns the same instance. Something like this:
class Manager
{
public:
    static Resource& get_resource()
    {
        static Resource r;
        return r;
    }
};
No need to prevent multiple instantiation: if you want the same instance, just ask for the same instance.
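For example (restating Manager with a stub Resource so the snippet compiles on its own), two call sites that both end up in the common shared library receive the same instance:

#include <cassert>

struct Resource { /* e.g. the thread-safe queue */ };

class Manager
{
public:
    static Resource& get_resource()
    {
        static Resource r;
        return r;
    }
};

// Code that would live in plugin A:
Resource& from_plugin_a() { return Manager::get_resource(); }
// Code that would live in plugin B:
Resource& from_plugin_b() { return Manager::get_resource(); }

int main()
{
    // Both call sites see the one instance.
    assert(&from_plugin_a() == &from_plugin_b());
}

In the dlopen scenario this holds as long as Manager::get_resource() is defined once, in the common third library that both plugins link against.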
You can also extend the design with a resource pool returning a same instance given some id:
enum class ResourceId
{
    ID_FOR_A_FAMILY_OF_PLUGIN,
    ID_FOR_AN_OTHER_FAMILY_OF_PLUGIN
};

class Pool
{
public:
    static Resource& get_resource(ResourceId id)
    {
        static std::map<ResourceId, Resource> p;
        return p[id];
    }
};
Note that in this example p[id] is created on the fly with Resource's default constructor. You may want to pass parameters during construction:
class Resource
{
public:
    Resource() : ready(false) {}
    void init(some parameters)
    {
        // do some initialization
        ready = true;
    }
    bool is_ready() const { return ready; }
private:
    bool ready;
};

class Pool
{
public:
    static Resource& get_resource(ResourceId id)
    {
        static std::map<ResourceId, Resource> p;
        auto& r = p[id];
        if(!r.is_ready())
        {
            r.init(some parameters);
        }
        return r;
    }
};
Or, using pointers to allow polymorphism:
class Pool
{
public:
    static std::unique_ptr<Resource>& get_resource(ResourceId id)
    {
        static std::map<ResourceId, std::unique_ptr<Resource>> p;
        auto& r = p[id];
        if(!r)
        {
            r = std::make_unique<SomeResourceTypeForId>(some parameters);
        }
        return r;
    }
};
Note that the last two implementations need a mutex around the map access and lazy initialization to be thread-safe: the construction of the static map itself is thread-safe in C++11, but concurrent lookups and init calls are not.
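A minimal sketch of what that could look like for the unique_ptr variant; the static mutex is an addition of this sketch, not part of the answer above, and the placeholder parameters are kept as in the surrounding code:

#include <mutex>

class Pool
{
public:
    static std::unique_ptr<Resource>& get_resource(ResourceId id)
    {
        static std::mutex m;
        static std::map<ResourceId, std::unique_ptr<Resource>> p;
        std::lock_guard<std::mutex> lock(m); // guard the lookup and lazy construction
        auto& r = p[id];
        if(!r)
        {
            r = std::make_unique<SomeResourceTypeForId>(some parameters);
        }
        return r;
    }
};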

Using weak_ptr to implement the Observer pattern

What I have so far is:
Observer.h
class Observer
{
public:
    ~Observer();
    virtual void Notify() = 0;
protected:
    Observer();
};

class Observable
{
public:
    ~Observable();
    void Subscribe( std::shared_ptr<Observer> observer );
    void Unsubscribe( std::shared_ptr<Observer> observer );
    void Notify();
protected:
    Observable();
private:
    std::vector<std::weak_ptr<Observer>> observers;
};
Observer.cpp
void Observable::Subscribe( std::shared_ptr<Observer> observer )
{
    observers.push_back( observer );
}

void Observable::Unsubscribe( std::shared_ptr<Observer> observer )
{
    ???
}

void Observable::Notify()
{
    for ( auto wptr : observers )
    {
        if ( !wptr.expired() )
        {
            auto observer = wptr.lock();
            observer->Notify();
        }
    }
}
(de/constructors are implemented here but empty, so I've left them out)
What I'm stuck on is how to implement the Unsubscribe procedure. I came across the erase-remove idiom, but I understand that it will not work "out of the box" with how I have set up my Observable. How do I inspect the weak_ptr elements in the observers vector such that I can remove the desired Observer?
I'm also looking for some advice on what the parameter type should be for my Un/Subscribe procedures. Would it be better to use std::shared_ptr<Observer>& or const std::shared_ptr<Observer>&, since we will not be modifying it?
I really do not want to have Observables owning their Observers, as it seems to betray the intentions of the pattern, and is certainly not how I want to structure the rest of the project that will ultimately be making use of the pattern. That said, an added layer of security / automation that I am considering is to have Observers store a mirror vector of weak_ptr. An Observer on its way out could then unsubscribe from all Observables it had subscribed to, and an Observable on its way out could erase the back-reference to itself from each of the Observers observing it. Evidently the two classes would be friends in such a scenario.
You can use std::remove_if together with vector::erase (the erase-remove idiom) like this:
void Observable::Unsubscribe( std::shared_ptr<Observer> observer )
{
    this->observers.erase(
        std::remove_if(
            this->observers.begin(),
            this->observers.end(),
            [&](const std::weak_ptr<Observer>& wptr)
            {
                return wptr.expired() || wptr.lock() == observer;
            }
        ),
        this->observers.end()
    );
}
You should indeed pass observer as const std::shared_ptr<Observer>&.
What I'm stuck on is how to implement the Unsubscribe procedure.
I suggest storing observers in a std::list, because its iterators are not invalidated when the container is modified. Then in subscribe you store an iterator to the newly added element in the observer, and in unsubscribe you use that iterator to remove the element.
But of course you can use std::vector and std::remove_if as suggested in another answer.
Now about all that *_ptr stuff. In C++, RAII is your friend, so use it. Get rid of the public unsubscribe method; instead, the observer must unsubscribe itself in its destructor. This simplifies things considerably: no more locking of weak pointers, because if an observer has been deleted it is simply no longer on the list. Just don't forget to protect the observer list with a mutex if you have a multithreaded application. With this design, Observable only needs plain pointers to Observers, and there are no requirements on how the Observers themselves are stored.
class Observer {
public:
    void subscribe(std::function<void()> unsubscribe) {
        unsubscribe_ = std::move(unsubscribe);
    }
    virtual ~Observer() {
        if (unsubscribe_) unsubscribe_();  // only if actually subscribed
    }
private:
    std::function<void()> unsubscribe_;
};

class Observable {
public:
    void subscribe(Observer* observer) {
        std::lock_guard<std::mutex> lock(observablesMutex_);
        auto itr = observers_.insert(observers_.end(), observer);
        observer->subscribe([this, itr]{
            std::lock_guard<std::mutex> lock(observablesMutex_);
            observers_.erase(itr);
        });
    }
private:
    std::list<Observer*> observers_;
    std::mutex observablesMutex_;
};
Note: for this code Observers must always be destroyed before the Observable.
Update: as you get more used to C++ lambdas, you may find that having a std::function as the observer is more convenient in many cases than having a special class hierarchy. In that case your API can look like this:
class Handle {
public:
    explicit Handle(std::function<void()> onDestroy)
        : onDestroy_(std::move(onDestroy)) {}
    Handle(const Handle&) = delete;
    Handle(Handle&& other) noexcept
        : onDestroy_(std::move(other.onDestroy_)) {
        other.onDestroy_ = nullptr;  // ensure the moved-from handle is inert
    }
    virtual ~Handle() {
        if (onDestroy_) onDestroy_();
    }
private:
    std::function<void()> onDestroy_;
};

class Observable {
public:
    Handle subscribe(std::function<void()> observer) {
        std::lock_guard<std::mutex> lock(observablesMutex_);
        auto itr = observers_.insert(observers_.end(), observer);
        return Handle([this, itr]{
            std::lock_guard<std::mutex> lock(observablesMutex_);
            observers_.erase(itr);
        });
    }
private:
    std::list<std::function<void()>> observers_;
    std::mutex observablesMutex_;
};
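Hypothetical usage of that API, just to show the RAII unsubscription (Observable as sketched above):

Observable observable;
{
    Handle h = observable.subscribe([]{ /* react to a notification */ });
    // The lambda stays registered for as long as h is alive.
}   // ~Handle() runs here and erases the lambda from the observer list.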

Threaded base class with pure virtual callback, stopping on destruction

I'm looking to run a thread in a base class that constantly calls a pure virtual method of that class, which is overridden by a derived class.
For starting the thread I have no issue, as I can call a HasInitalized() function after the object has been constructed. Therefore the thread is started after the class is fully constructed.
However, as the class's lifetime is managed by a shared_ptr, I cannot call a similar method for stopping the thread. If I stop the thread in the destructor, it will cause a seg-fault, as the derived class is destroyed before the base and the thread will therefore try to call a function that's no longer there.
I'm aware I can call a stop function from the derived class, but would rather not have to on every instance of the derived class.
Is there a way around this?
Example:
#include "boost/thread.hpp"
class BaseClass
{
public:
BaseClass()
{
}
// Start the thread
void Start()
{
_thread = boost::thread(&BaseClass::ThreadLoop, this);
}
virtual ~BaseClass()
{
_thread.interrupt();
_thread.join();
}
private:
// Will loop until thread is interupted
void ThreadLoop()
{
try
{
while(true)
{
DoSomethingInDerivedClass();
boost::this_thread::interruption_point();
}
}
catch(...)
{
}
}
boost::thread _thread;
protected:
virtual void DoSomethingInDerivedClass() = 0;
};
class DerivedClass : public BaseClass
{
DerivedClass()
{
}
~DerivedClass()
{
// This gets called before base class destructor.
}
protected:
void DoSomethingInDerivedClass();
};
I don't think you will be able to avoid repeating the call to join the thread in the destructor of each derived class. If a thread depends on a non-static object o, then it's a good idea to have a clear ownership relation to guarantee the validity of the object:
Either the thread owns o, and the destruction of o is handled by the destructor of the thread object after joining; or
o owns the thread, and o joins the thread in its own destructor.
You've chosen the second approach, except that the thread depends on the derived object, while the derived object doesn't own the thread directly but through a sub-object (the base object). Since the thread depends on the derived object, it must be joined in the derived object's destructor.
You should separate the two behaviours: a class to run and join the thread, the base class for the functional hierarchy.
class Runner {
public:
    // Assumes BaseClass exposes DoSomethingInDerivedClass() to Runner
    // (e.g. by making it public or declaring Runner a friend).
    explicit Runner(std::shared_ptr<BaseClass> ptr) : m_ptr(ptr) {
        m_thread = boost::thread(&Runner::ThreadLoop, this);
    }
    ~Runner() {
        m_thread.interrupt();
        m_thread.join();
    }
private:
    void ThreadLoop() {
        try {
            while(true) {
                m_ptr->DoSomethingInDerivedClass();
                boost::this_thread::interruption_point();
            }
        } catch(...) {
        }
    }
    std::shared_ptr<BaseClass> m_ptr;
    boost::thread m_thread;
};
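Hypothetical usage, assuming DoSomethingInDerivedClass() is reachable from Runner as noted in the comment above:

auto obj = std::make_shared<DerivedClass>();   // DerivedClass from the question
Runner runner(obj);                            // starts the worker thread
// ... the thread keeps calling obj->DoSomethingInDerivedClass() ...
// ~Runner() interrupts and joins the thread; the shared_ptr it holds keeps
// obj alive until then, so the virtual call can never hit a destroyed object.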
My recommendation would be to use a weak_ptr to know when the object's lifetime is over:
The factory instantiates the (derived) object and stores it in a shared_ptr
The factory instantiates the watchdog class and passes it a weak_ptr to the new object
The watchdog thread can now check if the weak pointer is expired each time it needs to access it. When it is expired, the thread will terminate itself.
Here is an example (instead of a factory, I just used main):
#include <thread>
class BaseClass
{
public:
virtual ~BaseClass() = default;
virtual void DoSomethingInDerivedClass() = 0;
};
class DerivedClass : public BaseClass
{
public:
void DoSomethingInDerivedClass() override {}
};
// Will loop until weak_base expires
void ThreadLoop(std::weak_ptr<BaseClass> weak_base)
{
try
{
while (true)
{
std::shared_ptr<BaseClass> base = weak_base.lock();
if (base) {
base->DoSomethingInDerivedClass();
}
else {
break; // Base is gone. Terminate thread.
}
}
}
catch (...)
{
}
}
int main()
{
std::shared_ptr<DerivedClass> obj = std::make_shared<DerivedClass>();
std::thread([&] { ThreadLoop(obj); }).detach();
return 0;
}
Note that there is no need to explicitly stop the thread, since it will stop itself as soon as it detects that the object's lifetime is over. On the other hand, note that the thread may slightly outlive the object being watched, which could be considered bad design (it could, e.g., defer program termination). I guess one could work around that by joining with the thread in the base class destructor, after signalling that it should terminate (if not already terminated).

How to cancel boost asio io_service post

How can I cancel an already posted callback:
getIoService()->post(boost::bind(&MyClass::myCallback, this));
and keep other posted callbacks untouched?
The problem is that I have an object that receives events from a different thread, and I post them to the io_service in order to handle the events in the main thread. What if at some point I want to delete my object? The io_service will try to execute already posted callbacks on a destroyed object. And in this case I can't store any flag in the object, since it will already have been destroyed.
There is a possible solution using enable_shared_from_this and shared_from_this(), but I'm wondering whether there is another one.
Thanks
As Sam answered, it is not possible to selectively cancel posted handlers.
If the goal is to prevent calling a member function on an object whose lifetime has expired, then using enable_shared_from_this is the idiomatic solution. One consequence of this approach is that the lifetime of the object is extended to be at least that of the handler. If the object's destructor can be deferred, then consider binding the object to a handler via shared_from_this().
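For instance, with the names from the question, deferring destruction is just a matter of binding the shared_ptr into the handler (this assumes MyClass derives from boost::enable_shared_from_this<MyClass>):

getIoService()->post(boost::bind(&MyClass::myCallback, shared_from_this()));
// The handler now holds a shared_ptr, so *this stays alive at least until
// the handler has been invoked (or the io_service is destroyed).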
On the other hand, if destruction needs to be immediate, then consider writing a functor that weakly binds to the instance. This question discusses binding to a weak_ptr, and provides some research/discussion links. Here is a simplified complete example of a functor that weakly binds to an object:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
/// #brief Mocked up type.
class MyClass:
public boost::enable_shared_from_this<MyClass>
{
public:
MyClass() { std::cout << "MyClass()" << std::endl; }
~MyClass() { std::cout << "~MyClass()" << std::endl; }
void action() { std::cout << "MyClass::action()" << std::endl; }
};
/// #brief weak_binder is a functor that binds a member function
/// to a weakly managed object instance. I.e. this
/// functor will not extend the life of the instance to
/// which it has been bound.
template <typename Fn,
typename C>
struct weak_binder
{
private:
typedef typename C::element_type element_type;
public:
/// #brief Constructor.
weak_binder(Fn& fn, C& c) : fn_(fn), c_(c)
{}
/// #brief Conditional invoke Fn if C still exists.
void operator()()
{
std::cout << "weak_binder::operator()" << std::endl;
// Create a shared pointer from the weak pointer. If
// succesful, then the object is still alive.
if (boost::shared_ptr<element_type> ptr = c_.lock())
{
// Invoke the function on the object.
(*ptr.*fn_)();
}
}
private:
Fn fn_;
boost::weak_ptr<element_type> c_;
};
/// #brief Helper function to create a functor that weakly
/// binds to a shared object.
template <typename Fn,
typename C>
weak_binder<Fn, C> weak_bind(Fn fn, C c)
{
return weak_binder<Fn, C>(fn, c);
}
int main()
{
boost::asio::io_service io_service;
boost::shared_ptr<MyClass> my_class = boost::make_shared<MyClass>();
// my_class will remain alive for this handler because a shared_ptr
// is bound to handler B, and handler B will only be destroyed after
// handler A has been destroyed.
io_service.post(weak_bind(&MyClass::action,
my_class->shared_from_this())); // A
// my_class will remain alive for this handler because it is bound
// via a shared_ptr.
io_service.post(boost::bind(&MyClass::action,
my_class->shared_from_this())); // B
// my_class will not be alive for this handler, because B will have
// been destroyed, and the my_class is reset before invoking the
// io_service.
io_service.post(weak_bind(&MyClass::action,
my_class->shared_from_this())); // C
// Reset the shared_ptr, resulting in the only remaining shared_ptr
// instance for my_class residing within handler B.
my_class.reset();
io_service.run();
}
And the resulting output:
MyClass()
weak_binder::operator()
MyClass::action()
MyClass::action()
~MyClass()
weak_binder::operator()
As can be observed, MyClass::action() is only invoked twice: once through weak_binder while the instance was alive (handler A), and once through the boost::bind where the instance is maintained via a shared_ptr (handler B). Handler C is invoked, but weak_binder::operator() detects that the instance has been destroyed, resulting in a silent no-op.
You cannot selectively cancel callbacks in such a manner through an io_service. One option is to move the logic to a higher level, such as inside of MyClass. A sample implementation may be:
class MyClass : public boost::enable_shared_from_this<MyClass>
{
public:
    typedef boost::shared_ptr<MyClass> Ptr;

    static Ptr create( boost::asio::io_service& io_service ) {
        const Ptr result( new MyClass );
        io_service.post( boost::bind(&MyClass::myCallback, result) );
        return result;
    }

    void myCallback() {
        if ( _canceled ) return;
        // ... handle the event ...
    }

    void cancel() { _canceled = true; }

private:
    MyClass() : _canceled(false) { }

private:
    bool _canceled;
};
This class uses a boost::shared_ptr to enforce shared ownership semantics. Doing this guarantees that the object's lifetime will persist at least as long as the callback remains in the io_service queue waiting to be dispatched.
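Hypothetical usage of that class; note the _canceled flag is not synchronized, so this assumes cancel() is called from the thread that runs the io_service:

boost::asio::io_service io_service;
MyClass::Ptr obj = MyClass::create(io_service); // posts myCallback
obj->cancel();                                  // myCallback will see _canceled and return early
io_service.run();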