How should I restructure this event-handling code? - c++

I've been reading some C++ books (Sutter, Meyers) lately, which motivated me to start using smart pointers (and object destruction in general) more effectively. But now I'm not sure how to fix what I have.
Specifically, I now have an IntroScene class which inherits from both Scene and InputListener.
Scene isn't really relevant here, but InputListener subscribes to an InputManager on construction
and unsubscribes again on destruction.
class IntroScene : public sfg::Scene, public sfg::InputListener {
    /* structors, inherited methods */
    virtual bool OnEvent(sf::Event&) override; // InputListener
};
But now, if the InputManager sends events over to a scene, and the scene decides to replace itself
because of one of them, I end up with a member function running on an object that no longer exists.
bool IntroScene::OnEvent(sf::Event& a_Event) {
    if (a_Event.type == sf::Event::MouseButtonPressed) {
        sfg::Game::Get()->SceneMgr()->Replace(ScenePtr(new IntroScene()));
        return true;
    } // here the returned smart pointer kills the scene/listener
    return false;
}
Side-question: does that matter? I googled it but did not find a definitive yes or no. I do know 100%
that no methods are invoked on the destroyed object after it is destroyed.
I can store the Replace() return value until the end of the OnEvent() method if I have to.
The real problem is InputListener:
InputListener::InputListener() {
    Game::Get()->InputMgr()->Subscribe(this);
}

InputListener::~InputListener() {
    if (m_Manager) m_Manager->Unsubscribe(this);
}
since that destructor runs during OnEvent(), which is itself called by InputManager during HandleEvents():
void InputManager::HandleEvents(EventQueue& a_Events) const {
    while (!a_Events.empty()) {
        sf::Event& e = a_Events.front();
        for (auto& listener : m_Listeners) {
            if (listener->OnEvent(e)) // swallow event
                break;
        }
        a_Events.pop();
    }
}
void InputManager::Subscribe(InputListener* a_Listener) {
    m_Listeners.insert(a_Listener);
    a_Listener->m_Manager = this;
}

void InputManager::Unsubscribe(InputListener* a_Listener) {
    m_Listeners.erase(a_Listener);
    a_Listener->m_Manager = nullptr;
}
So when the new Scene+Listener is created, and when the old one is destroyed, the list m_Listeners is modified during the loop, and the whole thing breaks.
I've thought about setting a flag when starting and stopping the loop, storing any (un)subscriptions that happen while it is set in a separate list, and handling them afterwards. But it feels a bit hacky.
So, how can I actually redesign this properly to prevent these kinds of situations? Thanks in advance.
EDIT, Solution:
I ended up going with the loop flag and deferred entry list (inetknight's answer below)
for subscriptions only, since those can safely be handled later.
Unsubscriptions have to be dealt with immediately, so instead of storing raw pointers I store a (pointer, mutable bool) pair (mutable because a set only hands out const iterators). I set the bool to false when a listener unsubscribes mid-loop and check for it in the event loop (see dave's comment below). Roughly, in sketch form (the names here are illustrative, not the exact code):
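// Sketch only: a set entry whose 'alive' flag can be flipped even though
// std::set only hands out const references (hence 'mutable').
struct ListenerEntry {
    InputListener* listener;
    mutable bool alive;
    bool operator<(const ListenerEntry& other) const { return listener < other.listener; }
};

// inside InputManager: std::set<ListenerEntry> m_Listeners; bool m_HandlingEvents = false;

void InputManager::Unsubscribe(InputListener* a_Listener) {
    auto it = m_Listeners.find(ListenerEntry{a_Listener, true});
    if (it == m_Listeners.end()) return;
    if (m_HandlingEvents)
        it->alive = false;       // mark dead immediately, erase after the loop
    else
        m_Listeners.erase(it);
    a_Listener->m_Manager = nullptr;
}

void InputManager::HandleEvents(EventQueue& a_Events) {
    m_HandlingEvents = true;
    while (!a_Events.empty()) {
        sf::Event& e = a_Events.front();
        for (auto& entry : m_Listeners) {
            if (!entry.alive) continue;             // listener destroyed during this loop
            if (entry.listener->OnEvent(e)) break;  // swallow event
        }
        a_Events.pop();
    }
    m_HandlingEvents = false;
    // erase dead entries and apply deferred subscriptions here
}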
Not sure it's the cleanest possible solution, but it works like a charm. Thanks a lot, guys.

Side-question: does that matter? I googled it but did not find a definitive yes or no. I do know 100% that no methods are invoked on the destroyed object after it is destroyed. I can store the Replace() return value until the end of the OnEvent() method if I have to.
If you know 100% that no methods are invoked on the destroyed object and none of its member variables are accessed, then it's safe. Whether or not it's intended is up to you.
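If you do want to be explicit about it, the pattern the question already hints at would look roughly like this (assuming Replace() hands back the old scene as a ScenePtr):
bool IntroScene::OnEvent(sf::Event& a_Event) {
    if (a_Event.type == sf::Event::MouseButtonPressed) {
        // Keep the old scene (i.e. *this) alive until OnEvent() returns.
        ScenePtr keepAlive =
            sfg::Game::Get()->SceneMgr()->Replace(ScenePtr(new IntroScene()));
        return true; // keepAlive (and with it *this) is destroyed only after this point
    }
    return false;
}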
You could keep another list of objects which have requested to be subscribed or unsubscribed. Then, after you've delivered an event to everyone in the listener list, you would process the list of un/subscription requests before continuing on to the next event.
/* These should be members of InputManager; however, you did not provide a class definition. */
typedef std::pair<InputListener*, bool> SubscriptionRequest;
bool handleEventsActive = false;
std::vector<SubscriptionRequest> deferredSubscriptionRequests;

void InputManager::HandleEvents(EventQueue& a_Events) {
    // process events
    handleEventsActive = true;
    while (!a_Events.empty()) {
        sf::Event& e = a_Events.front();
        for (auto& listener : m_Listeners) {
            // swallow event
            if (listener->OnEvent(e)) {
                break;
            }
        }
        a_Events.pop();
        // process deferred subscription requests that occurred during this event
        while (!deferredSubscriptionRequests.empty()) {
            SubscriptionRequest request = deferredSubscriptionRequests.back();
            deferredSubscriptionRequests.pop_back();
            DoSubscriptionRequest(request);
        }
    }
    handleEventsActive = false;
}
void InputManager::DoSubscriptionRequest(SubscriptionRequest& request) {
    if (request.second) {
        m_Listeners.insert(request.first);
        request.first->m_Manager = this;
    } else {
        m_Listeners.erase(request.first);
        request.first->m_Manager = nullptr;
    }
}

void InputManager::Subscribe(InputListener* a_Listener)
{
    SubscriptionRequest request{a_Listener, true};
    if (handleEventsActive) {
        deferredSubscriptionRequests.push_back(request);
    } else {
        DoSubscriptionRequest(request);
    }
}

void InputManager::Unsubscribe(InputListener* a_Listener)
{
    SubscriptionRequest request{a_Listener, false};
    if (handleEventsActive) {
        deferredSubscriptionRequests.push_back(request);
    } else {
        DoSubscriptionRequest(request);
    }
}

Related

Is there a way to protect a smart pointer from being deallocated on one thread, when work is being done on another thread?

In our program, we have a class FooLogger which logs specific events (strings). We use the FooLogger as a unique_ptr.
We have two threads which use this unique_ptr instance:
Thread 1 logs the latest event to file in a while loop, first checking if the instance is not nullptr
Thread 2 deallocates the FooLogger unique_ptr instance when the program has reached a certain point (set to nullptr)
However, due to bad interleaving, it is possible that, while logging, the member variables of FooLogger are deallocated, resulting in an EXC_BAD_ACCESS error.
class FooLogger {
public:
FooLogger() {};
void Log(const std::string& event="") {
const float32_t time_step_s = timer_.Elapsed() - runtime_s_; // Can get EXC_BAD_ACCESS on timer_
runtime_s_ += time_step_s;
std::cout << time_step_s << runtime_s_ << event << std::endl;
}
private:
Timer timer_; // Timer is a custom class
float32_t runtime_s_ = 0.0;
};
int main() {
auto foo_logger = std::make_unique<FooLogger>();
std::thread foo_logger_thread([&] {
while(true) {
if (foo_logger)
foo_logger->Log("some event");
else
break;
}
});
SleepMs(50); // pseudo code
foo_logger = nullptr;
foo_logger_thread.join();
}
Is it possible, using some sort of thread synchronisation/locks etc. to ensure that the foo_logger instance is not deallocated while logging? If not, are there any good ways of handling this case?
The purpose of std::unique_ptr is to deallocate the instance once the std::unique_ptr goes out of scope. In your case, multiple threads have access to the object, and the owner may drop it before the other users are done with it.
You either need to ensure that the owner never releases the object before the user threads are finished with it, or change the ownership model from std::unique_ptr to std::shared_ptr. The whole purpose of std::shared_ptr is to ensure that the object stays alive for as long as someone is still using it.
You just need to figure out what the program requires and use the right tools to achieve it.
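A minimal sketch of the std::shared_ptr variant, just to illustrate the ownership point (the fixed loop bound is mine; how the worker decides to stop is a separate question, addressed by the answers below):
#include <memory>
#include <thread>

int main() {
    auto logger = std::make_shared<FooLogger>();

    // The worker captures its own shared_ptr copy, so it co-owns the logger.
    std::thread worker([logger] {
        for (int i = 0; i < 1000; ++i)   // arbitrary bound, just for the sketch
            logger->Log("some event");
    });

    logger.reset();  // main gives up its reference; the object is NOT destroyed
                     // while the worker still holds its copy
    worker.join();   // FooLogger is destroyed when the last shared_ptr goes away
}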
Use a different mechanism than the disappearance of an object for determining when to stop.
(When you use a single thing for two separate purposes, you often get into trouble.)
For instance, an atomic bool:
int main() {
FooLogger foo_logger;
std::atomic<bool> keep_going{true};
std::thread foo_logger_thread([&] {
while(keep_going) {
foo_logger.Log("some event");
}
});
SleepMs(50);
keep_going = false;
foo_logger_thread.join();
}
It sounds like std::weak_ptr can help in this case.
You can make one from a std::shared_ptr and pass it to the logger thread.
For example:
class FooLogger {
public:
void Log(std::string const& event) {
// log the event ...
}
};
int main() {
auto shared_logger = std::make_shared<FooLogger>();
std::thread foo_logger_thread([w_logger = std::weak_ptr(shared_logger)]{
while (true) {
auto logger = w_logger.lock();
if (logger)
logger->Log("some event");
else
break;
}
});
// some work ...
shared_logger.reset();
foo_logger_thread.join();
}
You should use make_shared instead of make_unique. And change:
std::thread foo_logger_thread([&] {
to
std::thread foo_logger_thread([foo_logger] {
This captures the shared_ptr by value, so the lambda holds its own reference to the logger.
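Put together with the question's code, that change would look roughly like this; note that the worker's captured copy never becomes null, so the loop needs its own exit condition (the bound below is just a placeholder):
#include <memory>
#include <thread>

int main() {
    auto foo_logger = std::make_shared<FooLogger>();   // was make_unique

    // Capturing by value copies the shared_ptr: the worker co-owns the logger.
    std::thread foo_logger_thread([foo_logger] {
        for (int i = 0; i < 1000; ++i) {   // placeholder exit condition
            foo_logger->Log("some event");
        }
    });

    SleepMs(50);          // pseudo code, as in the question
    foo_logger = nullptr; // releases only main's reference
    foo_logger_thread.join();
}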

Creating a C++ Event System

I've decided to begin making a game engine lately. I know most people don't finish theirs, and if I'm being honest I may not either. I'm doing this because I'm sick of googling "Cool C++ projects" and doing the 3 answers every single user gives (that'd be an address book or something similar, tic tac toe, and a report card generator or something like that). I like programming, but unfortunately I have no real use for it. Everything I would use it for I can do faster and easier in another way, or a solution already exists. However, in an effort to learn more than the basic level of C++ and do something that would teach me something that's truly in depth, I've revoked this policy and decided to begin a game engine, as it's something I've always been interested in. I've decided to model it loosely after Amazon's Lumberyard engine, as it's almost entirely C++ and gives me a good basis to learn from, as I can always just go there and do something with it to see how it behaves.
Onto the actual problem now:
I've got a working Entity Component System (yay) that, although it's in its early stages and not super great functionality-wise, I'm very proud of. Honestly I never thought I'd get this far. I'm currently working on the Event Bus system. Now, I really love LY's EBus system. It's extremely easy to use and very straightforward, but to a programming newbie-ish's eyes it's black magic and witchcraft. I have no clue how they did certain things, so hopefully you do!
Making an EBus goes something like this:
#include <EBusThingy.h>
class NewEbusDealio
: public EbusThingy
{
public:
//Normally there's some setup work involved here, but I'm excluding it as I don't really feel that it's necessary for now. I can always add it later (see the footnote for details on what these actually are).
//As if by magic, this is all it takes to do it (I'd like to clarify that I'm aware that this is a pure virtual function, I just don't get how they generate so much usage out of this one line):
virtual void OnStuffHappening(arguments can go here if you so choose) = 0;
};
And that's it...
As if by magic, when you go to use it, all you have to do is this:
#include "NewEbusDealio.h"
class ComponentThatUsesTheBus
: public NewEbusDealio::Handler
{
public:
void Activate() override
{
NewEbusDealio::Handler::BusConnect();
}
protected:
void OnStuffHappening(arguments so chosen)
{
//Do whatever you want to happen when the event fires
}
};
class ComponentThatSendsEvents
{
public:
void UpdateOrWhatever()
{
NewEbusDealio::Broadcast(NewEbusDealio::Events::OnStuffHappening, arguments go here)
}
};
I just don't get how you can do this much stuff just by adding a single virtual function to NewEbusDealio. Any help on this is much appreciated. Sorry for so many text walls but I'd really like to get something out of this, and I've hit a massive brick wall on this bit. This may be way overkill for what I'm making, and it also may wind up being so much work that it's just not within the realm of possibility for one person to make in a reasonable amount of time, but if a simple version of this is possible I'd like to give it a go.
I'm putting this down here so people know what the setup work is. All you do is define a static const EBusHandlerPolicy and EBusAddressPolicy, which defines how many handlers can connect to each address on the bus, and whether the bus works on a single address (no address needed in event call), or whether you can use addresses to send events to handlers listening on a certain address. For now, I'd like to have a simple bus where if you send an event, all handlers receive it.
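For what it's worth, my rough guess at how a single-address, broadcast-to-everyone bus could be wired up is a class template whose nested Handler type registers itself in a static list; this is only my sketch, almost certainly not how Lumberyard actually does it:
#include <algorithm>
#include <vector>

// Sketch only: a minimal single-address, broadcast-to-all bus.
template <typename Interface>
class EventBus {
public:
    class Handler : public Interface {
    public:
        virtual ~Handler() { BusDisconnect(); }
    protected:
        void BusConnect()    { Listeners().push_back(this); }
        void BusDisconnect() {
            auto& v = Listeners();
            v.erase(std::remove(v.begin(), v.end(), this), v.end());
        }
    };

    // Calls the chosen member function of Interface on every connected handler.
    template <typename MemberFn, typename... Args>
    static void Broadcast(MemberFn fn, Args&&... args) {
        for (Handler* h : Listeners())
            (h->*fn)(args...);
    }

private:
    static std::vector<Handler*>& Listeners() {
        static std::vector<Handler*> listeners; // one list per bus/interface type
        return listeners;
    }
};

// Usage mirroring the question (names are mine):
struct StuffEvents {
    virtual ~StuffEvents() = default;
    virtual void OnStuffHappening(int value) = 0;
};
using StuffBus = EventBus<StuffEvents>;

class ComponentThatUsesTheBus : public StuffBus::Handler {
public:
    void Activate() { BusConnect(); }
    void OnStuffHappening(int value) override { /* react to the event */ }
};

// Elsewhere: StuffBus::Broadcast(&StuffEvents::OnStuffHappening, 42);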
I'm not familiar with the EBus you've shown, but event buses should all be similar: one side creates an event and puts it into a list, the other side picks events up one by one and reacts to them.
As modern C++ gives us closures, it is much easier to implement an event bus now.
Below is a simple example, where Looper is the event bus.
Be aware that mutexes and condition variables are necessary for this Looper in production.
#include <queue>
#include <list>
#include <thread>
#include <functional>
class ThreadWrapper {
public:
ThreadWrapper() = default;
~ThreadWrapper() { Detach(); }
inline void Attach(std::thread &&th) noexcept {
Detach();
routine = std::forward<std::thread &&>(th);
}
inline void Detach() noexcept {
if (routine.joinable()) {
routine.join();
}
}
private:
std::thread routine{};
};
class Looper {
public:
// a unit of work executed on the looper thread
typedef std::function<void()> Task;
typedef std::list<Task> MsgQueue;
Looper() = default;
~Looper() {
Deactivate();
}
// Post a method
void Post(const Task &tsk) noexcept {
Post(tsk, false);
}
// Post a method
void Post(const Task &tsk, bool flush) noexcept {
if(!running) {
return;
}
if (flush) msg_queue.clear();
msg_queue.push_back(tsk);
}
// Start looping
void Activate() noexcept {
if (running) {
return;
}
msg_queue.clear();
looping = true;
worker.Attach(std::thread{&Looper::Entry, this});
running = true;
}
// stop looping
void Deactivate() noexcept {
{
if(!running) {
return;
}
looping = false;
Post([] { ; }, true);
worker.Detach();
running = false;
}
}
bool IsActive() const noexcept { return running; }
private:
void Entry() noexcept {
Task tsk;
while (looping) {
if (msg_queue.empty()) continue; // busy-wait; a condition variable belongs here in production
tsk = msg_queue.front();
msg_queue.pop_front();
tsk();
}
}
MsgQueue msg_queue{};
ThreadWrapper worker{};
volatile bool running{false};
volatile bool looping{false};
};
An example to use this Looper:
class MySpeaker: public Looper{
public:
// Call SayHi without blocking current thread
void SayHiAsync(const std::string &msg){
Post([this, msg] {
SayHi(msg);
});
}
private:
// SayHi will be called in the working thread
void SayHi(const std::string &msg) {
std::cout << msg << std::endl;
}
};
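A tiny driver for the above, just to show the intended call order (still subject to the synchronization caveat mentioned earlier):
int main() {
    MySpeaker speaker;
    speaker.Activate();            // spins up the worker thread
    speaker.SayHiAsync("hello");   // queued and executed on the worker thread
    speaker.SayHiAsync("world");
    // ... give the worker time to drain the queue before shutting down ...
    speaker.Deactivate();          // clears pending tasks and joins the worker
}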

Event Driven SDL2

I'm in the process of wrapping SDL into C++ objects. Basically, I'm just tired of seeing SDL_ in my code. I'd like at least namespaces... SDL::Window. I've done that; it's going more or less fine.
The issue arises with events. I'd like it to be event-driven (callbacks) rather than having to poll an event queue (the propagation routines you have to write to make SDL_Event fit the abstraction I've designed are painful).
Take for example, a Window class. Its constructor calls
SDL_AddEventWatch(window_events, this);
where window_events is a static member of the Window class. It catches anything of type SDL_WINDOWEVENT.
int Window::window_events(void* data, SDL::Events::Event* ev)
{
if (ev->type == SDL::Events::Window::Any)
{
auto win = static_cast<Window *>(data);
if (ev->window.windowID == SDL_GetWindowID(win->mWindow))
{
std::vector<event_callback> callbacks = win->mWindowCallbacks;
for (const auto cbk : callbacks)
{
cbk(*ev);
}
}
}
return 0;
}
My Window class also contains hook and unhook methods. Each takes a std::function. This is what mWindowCallbacks is a collection of. Any external routine interested in an event gets a copy forwarded to it.
//...
using event_callback = std::function<void(SDL::Events::Event)>;
//...
template<typename T> bool
find_func(const T & object,
const std::vector<T> & list,
int * location=nullptr)
{
int offset = 0;
for (auto single : list)
{
if (single.target<T>() ==
object.target<T>())
{
if (location != nullptr) *location = offset;
return true;
}
offset++;
}
return false;
}
void
Window::hook(event_callback cbk)
{
if (!find_func(cbk, mWindowCallbacks))
{
mWindowCallbacks.push_back(cbk);
}
}
void
Window::unhook(event_callback cbk)
{
int offset = 0;
if (find_func(cbk, mWindowCallbacks, &offset))
{
mWindowCallbacks.erase(mWindowCallbacks.begin() + offset);
}
}
Usage:
///...
void cbk_close(SDL::Events::Event e)
{
if (e.window.event == SDL::Events::Window::Close)
{
window.close();
quit = true;
}
}
///...
std::function<void(SDL::Events::Event)> handler = cbk_close;
SDL::Window window;
window.hook(handler);
Close:
void Window::close()
{
SDL_DelEventWatch(window_events, this);
SDL_DestroyWindow(mWindow);
mWindowCallbacks.clear();
}
To me, this doesn't seem like terrible design.
Once you press close on the window, cbk_close is invoked; it calls close and sets the quit flag... Then it returns to the window_events loop, as expected... However, that function doesn't seem to return control to the program.
This is what I need help with. I don't really understand why. I think it's hijacking the main thread, as the program will exit once that function exits if you have one window, or... crash if you have two.
Am I on the right lines with that? I've been stuck on this for a week. It's really rather infuriating. To anyone willing to have a play about with it, here's the git repo for the full code.
Windows, Visual Studio 2015/VC solution.
https://bitbucket.org/andywm/sdl_oowrapper/
Okay, so I think I more or less understand what's going on here now.
SDL_AddEventWatch(int (*)(void *, SDL_Event*), void *)
If you're using C++, you should set the calling convention: SDLCALL.
In my case;
int SDLCALL
Window::window_events(void* data, SDL::Events::Event* ev)
This seems to stop the sdl events system from nabbing the main thread.
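For reference, the matching declaration in the class header would be along these lines (a sketch; the members are taken from the question's code):
class Window
{
public:
    // Event watch callback registered with SDL_AddEventWatch. SDLCALL pins the
    // calling convention SDL expects for the function pointer.
    static int SDLCALL window_events(void* data, SDL::Events::Event* ev);

    void hook(event_callback cbk);
    void unhook(event_callback cbk);
    void close();

private:
    SDL_Window* mWindow = nullptr;
    std::vector<event_callback> mWindowCallbacks;
};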
As for why it's crashing with multiple windows... Well, if I remove this line
SDL_DelEventWatch(window_events, this);
it doesn't crash... Not really sure why yet, but if I figure it out I'll amend my answer - and if anyone more experienced with SDL could fill me in, that'd be great.

Delete an object after the callback is called C++

I create a new object and set a data and a callback something like this:
class DownloadData
{
std::function<void(int, bool)> m_callback;
int m_data;
public:
void sendHttpRequest()
{
// send request with data
}
private:
void getHttpResponse(int responseCode)
{
if (responseCode == 0)
{
// save data
m_callback(responseCode, true);
delete this;
return;
}
// some processing here
if (responseCode == 1 && some other condition here)
{
m_callback(responseCode, false);
delete this;
return;
}
}
};
Now the usage - I create a new object:
if (isNeededToDownloadTheFile)
{
DownloadData* p = new DownloadData(15, [](int, bool){});
p->sendHttpRequest();
}
But as you can see at https://isocpp.org/wiki/faq/freestore-mgmt#delete-this, it is highly undesirable for an object to delete itself like this. Is there a good design pattern or approach for this?
You could put them in a vector or list, have getHttpResponse() set a flag instead of calling delete this when it completes, and then have another part of the code occasionally traverse the list looking for completed requests.
That would also allow you to implement a timeout: if a request hasn't returned in a day, it's probably not going to, and you should delete that object.
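The sweep could look roughly like this, assuming the flag approach above plus hypothetical completed() and startTime() accessors on DownloadData (startTime() returning a time captured in the constructor):
#include <chrono>
#include <list>
#include <memory>

// Periodic sweep over pending requests, e.g. called once per main-loop tick.
void PruneRequests(std::list<std::unique_ptr<DownloadData>>& requests)
{
    using namespace std::chrono;
    const auto now = steady_clock::now();

    requests.remove_if([&](const std::unique_ptr<DownloadData>& r) {
        // completed() would return the flag set by getHttpResponse(),
        // startTime() the steady_clock time recorded when the request was created.
        return r->completed() || now - r->startTime() > hours(24);
    });
}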
If you want to move the delete out of that function, the only way is to store the object somewhere. However, this raises the ownership question: who owns the asynchronous HTTP request that's supposed to call the callback?
In this scenario, doing the GC's job yourself actually keeps the code pretty clear. However, if you wanted to make it more idiomatic C++, I'd probably settle on a promise-like interface, similar to std::async. That way, a synchronous code path makes it much easier to store the promise objects.
You asked for a code example, so there goes:
Typical approach would look like this:
{
DownloadData* p = new DownloadData(15, [](auto data){
print(data);
});
p->sendHttpRequest();
}
Once the data is available, it can be printed. However, you can look at the problem "from the other end":
{
Future<MyData> f = DownloadData(15).getFuture();
// now you can either
// a) synchronously wait for the future
// b) return it for further processing
return f;
}
f will hold the actual value once the request actually completes. That way you can pass it around as if it were a regular value, all the way up to the place where it is actually needed, and wait for it there. Of course, if you consume it asynchronously, you might as well spawn another asynchronous action for that.
The implementation of the Future is outside the scope of this answer, I think, but numerous resources are available online. The concept of promises and futures isn't specific to C++.
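The standard library already gives you most of the pieces; a rough sketch with std::promise / std::future (startHttpRequest and the names below are illustrative, not part of the question's class):
#include <future>
#include <memory>
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical wrapper: starts the async request and returns a future for its result.
std::future<std::string> downloadData(int id)
{
    auto promise = std::make_shared<std::promise<std::string>>();
    std::future<std::string> result = promise->get_future();

    // startHttpRequest stands in for whatever kicks off the real request;
    // the completion callback fulfils the promise instead of calling delete this.
    startHttpRequest(id, [promise](int responseCode, std::string body) {
        if (responseCode == 0)
            promise->set_value(std::move(body));
        else
            promise->set_exception(
                std::make_exception_ptr(std::runtime_error("download failed")));
    });

    return result;
}

// Caller side: pass the future up, and wait only where the value is actually needed.
// std::string data = downloadData(15).get();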
If the caller keeps a reference to the downloading object then it can erase it when the download signals it has ended:
class DownloadData
{
// true until download stops (atomic to prevent race)
std::atomic_bool m_downloading;
int m_data;
std::function<void(int, bool)> m_callback;
public:
DownloadData(int data, std::function<void(int, bool)> callback)
: m_downloading(true), m_data(data), m_callback(callback) {}
void sendHttpRequest()
{
// send request with data
}
// called asynchronously to detect dead downloads
bool ended() const { return !m_downloading; }
private:
void getHttpResponse(int responseCode)
{
if (responseCode == 0)
{
// save data
m_callback(responseCode, true);
m_downloading = false; // signal end
return;
}
// some processing here
if(responseCode == 1)
{
m_callback(responseCode, false);
m_downloading = false; // signal end
return;
}
}
};
Then from the caller's side:
std::vector<std::unique_ptr<DownloadData>> downloads;
// ... other code ...
if (isNeededToDownloadTheFile)
{
// clean current downloads by deleting all those
// whose download is ended
downloads.erase(std::remove_if(downloads.begin(), downloads.end(),
[](std::unique_ptr<DownloadData> const& d)
{
return d->ended();
}), downloads.end());
// store this away to keep it alive until its download ends
downloads.push_back(std::make_unique<DownloadData>(15, [](int, bool){}));
downloads.back()->sendHttpRequest();
}
// ... etc ...

any simple and fast call back mechanism?

I am implementing event-driven message-processing logic for a speed-sensitive application. I have various pieces of business logic wrapped into a lot of Reactor classes:
class TwitterSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};
class FacebookSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};
class YoutubeSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
    void on_new_plus_one(PlusOneEvent&);
};
Let's say there are 8 such event types, and each Reactor responds to a subset of them.
The core program has 8 'entry points' for the messages, which are hooked up to some low-level socket-processing library, for instance:
void on_new_post(PostEvent& pe) {
    youtube_sentiment_reactor_instance->on_new_post(pe);
    twitter_sentiment_reactor_instance->on_new_post(pe);
    facebook_sentiment_reactor_instance->on_new_post(pe);
}
I am thinking about using std::function and std::bind to build a std::vector<std::function<...>>, then looping through the vector to call each callback.
However, when I tried it, std::function proved not to be fast enough. Is there a fast yet simple solution here? As I mentioned earlier, this is VERY speed-sensitive, so I want to avoid virtual functions and inheritance to cut out the v-table lookup.
Comments are welcome. Thanks.
I think that in your case it is easier to define an interface, as you know you are going to call simple member functions that match the expected parameters exactly:
struct IReactor {
virtual void on_new_post(PostEvent&) =0;
virtual void on_new_comment(CommentEvent&) =0;
virtual void on_new_plus_one(PlusOneEvent&) =0;
};
And then make each of your classes inherit and implement this interface.
You can have a simple std::vector<IReactor*> to manage the callbacks.
And remember that in C++, interfaces are just ordinary classes, so you can even write default implementations for some or all of the functions:
struct IReactor {
virtual void on_new_post(PostEvent&) {}
virtual void on_new_comment(CommentEvent&) {}
virtual void on_new_plus_one(PlusOneEvent&) {}
};
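The dispatch side then stays a plain loop over the registered reactors; a minimal sketch (the reactors vector and how it gets filled are assumed):
std::vector<IReactor*> reactors; // filled once at startup with the reactor instances

void on_new_post(PostEvent& pe) {
    for (IReactor* r : reactors)
        r->on_new_post(pe); // one virtual call per reactor, no std::function overhead
}

void on_new_comment(CommentEvent& ce) {
    for (IReactor* r : reactors)
        r->on_new_comment(ce);
}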
std::function's main performance issue is that whenever it needs to store some context (such as bound arguments, or the state of a lambda), memory is required, which often translates into a memory allocation. Also, current library implementations may not be optimized to avoid this allocation.
That being said:
is it too slow? You will have to measure it for yourself, in your context.
are there alternatives? Yes, plenty!
As an example, why don't you use a base class Reactor which has all the required callbacks defined (doing nothing by default), and then derive from it to implement the required behavior? You could then easily have a std::vector<std::unique_ptr<Reactor>> to iterate over!
Also, depending on whether the reactors need state (or not), you may gain a lot by avoiding allocating objects for them and just using plain functions instead.
It really, really depends on the specific constraints of your project.
If you need fast delegates and an event system, take a look at Offirmo:
It is as fast as the "Fastest Possible Delegates", but it has 2 major advantages:
1) it is a ready and well-tested library (no need to write your own from an article)
2) it does not rely on compiler hacks (it is fully compliant with the C++ standard)
https://github.com/Offirmo/impossibly-fast-delegates
If you need a managed signal/slot system, I have developed my own (C++11 only).
It is not as fast as Offirmo, but it is fast enough for any real scenario; most importantly, it is an order of magnitude faster than Qt or Boost signals, and it is simple to use.
A Signal is responsible for firing events.
Slots are responsible for holding callbacks.
Connect as many Slots as you wish to a Signal.
Don't worry about lifetime (everything auto-disconnects).
Performance considerations:
The overhead of a std::function is quite low (and improving with every compiler release); it is actually just a bit slower than a regular function call. My own signal/slot library, which uses std::function, is capable of 250 million callbacks per second (I measured the pure overhead) on a 2 GHz processor.
Since your code deals with network traffic, keep in mind that your main bottleneck will be the sockets.
The second bottleneck is instruction-cache latency. It does not matter much whether you use Offirmo (a few assembly instructions) or std::function: most of the time is spent fetching instructions from the L1 cache. The best optimization is to keep all callback code compiled in the same translation unit (the same .cpp file), and ideally in (mostly) the same order in which the callbacks are called; after that, you'll see only a very tiny improvement from Offirmo (seriously, you CAN'T BE faster than Offirmo) over std::function.
Keep in mind that any function doing something really useful will be at least a few dozen instructions (especially when dealing with sockets: you'll have to wait for system calls to complete and for processor context switches), so the overhead of the callback system will be negligible.
I can't comment on the actual speed of the method you are using, other than to say:
Premature optimization does not usually give you what you expect.
You should measure the performance contribution before you start slicing and dicing. If you know beforehand that it won't work, then you can search for something better now, or go "suboptimal" for now but encapsulate it so it can be replaced.
If you are looking for a general event system that does not use std::function (but does use virtual methods), you can try this one:
Notifier.h
/*
The Notifier is a singleton implementation of the Subject/Observer design
pattern. Any class/instance which wishes to participate as an observer
of an event can derive from the Notified base class and register itself
with the Notifier for enumerated events.
Notifier derived classes implement variants of the Notify function:
bool Notify(const NOTIFIED_EVENT_TYPE_T& event, variants ....)
There are many variants possible. Register for the message
and create the interface to receive the data you expect from
it (for type safety).
All the variants return true if they process the event, and false
if they do not. Returning false will be considered an exception/
assertion condition in debug builds.
Classes derived from Notified do not need to deregister (though it may
be a good idea to do so) as the base class destructor will attempt to
remove itself from the Notifier system automatically.
The event type is an enumeration and not a string as it is in many
"generic" notification systems. In practical use, this is for a closed
application where the messages will be known at compile time. This allows
us to increase the speed of the delivery by NOT having a
dictionary keyed lookup mechanism. Some loss of generality is implied
by this.
This class/system is NOT thread safe, but could be made so with some
mutex wrappers. It is safe to call Attach/Detach as a consequence
of calling Notify(...).
*/
/* This is the base class for anything that can receive notifications.
*/
typedef enum
{
NE_MIN = 0,
NE_SETTINGS_CHANGED,
NE_UPDATE_COUNTDOWN,
NE_UDPATE_MESSAGE,
NE_RESTORE_FROM_BACKGROUND,
NE_MAX,
} NOTIFIED_EVENT_TYPE_T;
class Notified
{
public:
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const uint32& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const bool& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const string& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const double& value)
{ return false; };
virtual ~Notified();
};
class Notifier : public SingletonDynamic<Notifier>
{
public:
private:
typedef vector<NOTIFIED_EVENT_TYPE_T> NOTIFIED_EVENT_TYPE_VECTOR_T;
typedef map<Notified*,NOTIFIED_EVENT_TYPE_VECTOR_T> NOTIFIED_MAP_T;
typedef map<Notified*,NOTIFIED_EVENT_TYPE_VECTOR_T>::iterator NOTIFIED_MAP_ITER_T;
typedef vector<Notified*> NOTIFIED_VECTOR_T;
typedef vector<NOTIFIED_VECTOR_T> NOTIFIED_VECTOR_VECTOR_T;
NOTIFIED_MAP_T _notifiedMap;
NOTIFIED_VECTOR_VECTOR_T _notifiedVector;
NOTIFIED_MAP_ITER_T _mapIter;
// This vector keeps a temporary list of observers that have completely
// detached since the current "Notify(...)" operation began. This is
// to handle the problem where a Notified instance has called Detach(...)
// because of a Notify(...) call. The removed instance could be a dead
// pointer, so don't try to talk to it.
vector<Notified*> _detached;
int32 _notifyDepth;
void RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& orgEventTypes, NOTIFIED_EVENT_TYPE_T eventType);
void RemoveNotified(NOTIFIED_VECTOR_T& orgNotified, Notified* observer);
public:
virtual void Reset();
virtual bool Init() { Reset(); return true; }
virtual void Shutdown() { Reset(); }
void Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
// Detach for a specific event
void Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
// Detach for ALL events
void Detach(Notified* observer);
// This template function (defined in the header file) allows you to
// add interfaces to Notified easily and call them as needed. Variants
// will be generated at compile time by this template.
template <typename T>
bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const T& value)
{
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
// Keep a copy of the list. If it changes while iterating over it because of a
// deletion, we may miss an object to update. Instead, we keep track of Detach(...)
// calls during the Notify(...) cycle and ignore anything detached because it may
// have been deleted.
NOTIFIED_VECTOR_T notified = _notifiedVector[eventType];
// If a call to Notify leads to a call to Notify, we need to keep track of
// the depth so that we can clear the detached list when we get to the end
// of the chain of Notify calls.
_notifyDepth++;
// Loop over all the observers for this event.
// NOTE that the size of the notified vector may change if
// a call to Notify(...) adds/removes observers. This should not be a
// problem because the list is a simple vector.
bool result = true;
for(int idx = 0; idx < notified.size(); idx++)
{
Notified* observer = notified[idx];
if(_detached.size() > 0)
{ // Instead of doing the search for all cases, let's try to speed it up a little
// by only doing the search if more than one observer dropped off during the call.
// This may be overkill or unnecessary optimization.
switch(_detached.size())
{
case 0:
break;
case 1:
if(_detached[0] == observer)
continue;
break;
default:
if(std::find(_detached.begin(), _detached.end(), observer) != _detached.end())
continue;
break;
}
}
result = result && observer->Notify(eventType,value);
assert(result == true);
}
// Decrement this each time we exit.
_notifyDepth--;
if(_notifyDepth == 0 && _detached.size() > 0)
{ // We reached the end of the Notify call chain. Remove the temporary list
// of anything that detached while we were Notifying.
_detached.clear();
}
assert(_notifyDepth >= 0);
return result;
}
/* Used for CPPUnit. Could create a Mock...maybe...but this seems
* like it will get the job done with minimal fuss. For now.
*/
// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> GetEvents(Notified* observer);
// Return all objects registered for this event.
vector<Notified*> GetNotified(NOTIFIED_EVENT_TYPE_T event);
};
Notifier.cpp
#include "Notifier.h"
void Notifier::Reset()
{
_notifiedMap.clear();
_notifiedVector.clear();
_notifiedVector.resize(NE_MAX);
_detached.clear();
_notifyDepth = 0;
}
void Notifier::Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter == _notifiedMap.end())
{ // Registering for the first time.
NOTIFIED_EVENT_TYPE_VECTOR_T eventTypes;
eventTypes.push_back(eventType);
// Register it with this observer.
_notifiedMap[observer] = eventTypes;
// Register the observer for this type of event.
_notifiedVector[eventType].push_back(observer);
}
else
{
NOTIFIED_EVENT_TYPE_VECTOR_T& events = _mapIter->second;
bool found = false;
for(int idx = 0; idx < events.size() && !found; idx++)
{
if(events[idx] == eventType)
{
found = true;
break;
}
}
if(!found)
{
events.push_back(eventType);
_notifiedVector[eventType].push_back(observer);
}
}
}
void Notifier::RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes, NOTIFIED_EVENT_TYPE_T eventType)
{
int foundAt = -1;
for(int idx = 0; idx < eventTypes.size(); idx++)
{
if(eventTypes[idx] == eventType)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
eventTypes.erase(eventTypes.begin()+foundAt);
}
}
void Notifier::RemoveNotified(NOTIFIED_VECTOR_T& notified, Notified* observer)
{
int foundAt = -1;
for(int idx = 0; idx < notified.size(); idx++)
{
if(notified[idx] == observer)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
notified.erase(notified.begin()+foundAt);
}
}
void Notifier::Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{ // Was registered
// Remove it from the map.
RemoveEvent(_mapIter->second, eventType);
// Remove it from the vector
RemoveNotified(_notifiedVector[eventType], observer);
// If there are no events left, remove this observer completely.
if(_mapIter->second.size() == 0)
{
_notifiedMap.erase(_mapIter);
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
}
}
void Notifier::Detach(Notified* observer)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes = _mapIter->second;
for(int idx = 0; idx < eventTypes.size();idx++)
{
NOTIFIED_EVENT_TYPE_T eventType = eventTypes[idx];
// Remove this observer from the Notified list for this event type.
RemoveNotified(_notifiedVector[eventType], observer);
}
_notifiedMap.erase(_mapIter);
}
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
Notified::~Notified()
{
Notifier::Instance().Detach(this);
}
// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> Notifier::GetEvents(Notified* observer)
{
vector<NOTIFIED_EVENT_TYPE_T> result;
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
result = _mapIter->second;
}
return result;
}
// Return all objects registered for this event.
vector<Notified*> Notifier::GetNotified(NOTIFIED_EVENT_TYPE_T event)
{
return _notifiedVector[event];
}
NOTES:
You must call Init() on the class before using it.
You don't have to use it as a singleton, or use the singleton template I used here. That is just to get a reference/init/shutdown mechanism in place.
This is from a larger code base. You can find some other examples on github here.
There was a topic on SO where virtually all the mechanisms available in C++ were enumerated, but I can't find it.
It had a list something like this:
function pointers
functors: member function pointers wrapped, along with a this pointer, into an object with an overloaded operator()
Fast Delegates
Impossibly Fast Delegates
boost::signals
Qt signal-slots
Fast delegates and boost::function performance comparison article: link
Oh, by the way, premature optimization..., profile first then optimize, 80/20-rule, blah-blah, blah-blah, you know ;)
Happy coding!
Unless you can parameterize your handlers statically and get them inlined, std::function<...> is your best option. When the exact type needs to be erased, or you need to call a function specified at run time, you'll have an indirection and, hence, an actual function call without the ability to get things inlined. std::function<...> does exactly this, and you won't do better.