I'm in the process of wrapping SDL into C++ objects. Basically, I'm just tired of seeing SDL_ in my code. I'd like at least namespaces... SDL::Window. I've done that, and it's going more or less fine.
The issue arises with events. I'd like it to be event-driven (callbacks) rather than having to poll an event queue (the propagation routines you have to write to make SDL_Event fit the abstraction I've designed are painful).
Take, for example, a Window class. Its constructor calls
SDL_AddEventWatch(window_events, this);
where window_events is a static member of the Window class. It catches anything of type SDL_WINDOWEVENT.
int Window::window_events(void* data, SDL::Events::Event* ev)
{
if (ev->type == SDL::Events::Window::Any)
{
auto win = static_cast<Window *>(data);
if (ev->window.windowID == SDL_GetWindowID(win->mWindow))
{
std::vector<event_callback> callbacks = win->mWindowCallbacks;
for (const auto cbk : callbacks)
{
cbk(*ev);
}
}
}
return 0;
}
My Window class also contains hook and unhook methods. Each takes a std::function. This is what mWindowCallbacks is a collection of. Any external routine interested in an event gets a copy forwarded to it.
//...
using event_callback = std::function<void(SDL::Events::Event)>;
//...
template<typename T> bool
find_func(const T & object,
const std::vector<T> & list,
int * location=nullptr)
{
int offset = 0;
for (auto single : list)
{
if (single.target<T>() ==
object.target<T>())
{
if (location != nullptr) *location = offset;
return true;
}
offset++;
}
return false;
}
void
Window::hook(event_callback cbk)
{
if (!find_func(cbk, mWindowCallbacks))
{
mWindowCallbacks.push_back(cbk);
}
}
void
Window::unhook(event_callback cbk)
{
int offset = 0;
if (find_func(cbk, mWindowCallbacks, &offset))
{
mWindowCallbacks.erase(mWindowCallbacks.begin() + offset);
}
}
Usage:
///...
void cbk_close(SDL::Events::Event e)
{
if (e.window.event == SDL::Events::Window::Close)
{
window.close();
quit = true;
}
}
///...
std::function<void(SDL::Events::Event)> handler = cbk_close;
SDL::Window window;
window.hook(handler);
Close:
void Window::close()
{
SDL_DelEventWatch(window_events, this);
SDL_DestroyWindow(mWindow);
mWindowCallbacks.clear();
}
To me, this doesn't seem like terrible design.
Once you press close on the window, cbk_close is invoked: it calls close(), it sets the quit flag... then it returns to the window_events loop, as expected. However, that function doesn't seem to return control to the program.
This is what I need help with. I don't really understand why. I think it's hijacking the main thread, as the program will exit once that function exits if you have one window, or... crash if you have two.
Am I on the right lines with that? I've been stuck on this for a week. It's really rather infuriating. For anyone willing to have a play about with it, here's the git repo for the full code.
Windows, Visual Studio 2015/VC solution.
https://bitbucket.org/andywm/sdl_oowrapper/
Okay, so I think I more or less understand what's going on here now.
SDL_AddEventWatch(int (SDLCALL *)(void *, SDL_Event *), void *)
If you're using C++, you should set the calling convention.
SDLCALL
In my case;
int SDLCALL
Window::window_events(void* data, SDL::Events::Event* ev)
This seems to stop the SDL event system from nabbing the main thread.
As for why it's crashing with multiple windows... Well, if I remove this line
SDL_DelEventWatch(window_events, this);
it doesn't crash... I'm not really sure why yet, but if I figure it out I'll amend my answer - and if anyone more experienced with SDL could fill me in, that'd be great.
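For reference, the declaration in my header now looks roughly like this (just a sketch of the relevant bit):
class Window
{
    //...
    // static member used as the event watch; SDLCALL pins the calling convention SDL expects
    static int SDLCALL window_events(void* data, SDL::Events::Event* ev);
    //...
};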
Related
I've decided to begin making a game engine lately. I know most people don't finish theirs, and if I'm being honest I may not either. I'm doing this because I'm sick of googling "Cool C++ projects" and doing the 3 answers every single user gives (that'd be an address book or something similar, tic tac toe, and a report card generator or something like that). I like programming, but unfortunately I have no real use for it. Everything I would use it for I can do faster and easier in another way, or a solution already exists. However, in an effort to learn more than the basic level of C++ and do something that would teach me something that's truly in depth, I've revoked this policy and decided to begin a game engine, as it's something I've always been interested in. I've decided to model it loosely after Amazon's Lumberyard engine, as it's almost entirely C++ and gives me a good basis to learn from, as I can always just go there and do something with it to see how it behaves.
Onto the actual problem now:
I've got a working Entity Component system (yay) that, although it's in its early stages and not super great functionality-wise, I'm very proud of. Honestly, I never thought I'd get this far. I'm currently working on the Event Bus system. Now, I really love LY's EBus system. It's extremely easy to use and very straightforward, but to a programming newbie-ish's eyes it's black magic and witchcraft. I have no clue how they did certain things, so hopefully you do!
Making an EBus goes something like this:
#include <EBusThingy.h>
class NewEbusDealio
: public EbusThingy
{
public:
//Normally there's some setup work involved here, but I'm excluding it as I don't really feel that it's necessary for now. I can always add it later (see the footnote for details on what these actually are).
//As if by magic, this is all it takes to do it (I'd like to clarify that I'm aware that this is a pure virtual function, I just don't get how they generate so much usage out of this one line):
virtual void OnStuffHappening(arguments can go here if you so choose) = 0;
};
And that's it...
As if by magic, when you go to use it, all you have to do is this:
#include "NewEbusDealio.h"
class ComponentThatUsesTheBus
: public NewEbusDealio::Handler
{
public:
void Activate() override
{
NewEbusDealio::Handler::BusConnect();
}
protected:
void OnStuffHappening(arguments so chosen)
{
//Do whatever you want to happen when the event fires
}
};
class ComponentThatSendsEvents
{
public:
void UpdateOrWhatever()
{
NewEbusDealio::Broadcast(NewEbusDealio::Events::OnStuffHappening, arguments go here)
}
};
I just don't get how you can do this much stuff just by adding a single virtual function to NewEbusDealio. Any help on this is much appreciated. Sorry for so many text walls but I'd really like to get something out of this, and I've hit a massive brick wall on this bit. This may be way overkill for what I'm making, and it also may wind up being so much work that it's just not within the realm of possibility for one person to make in a reasonable amount of time, but if a simple version of this is possible I'd like to give it a go.
I'm putting this down here so people know what the setup work is. All you do is define a static const EBusHandlerPolicy and EBusAddressPolicy, which define how many handlers can connect to each address on the bus, and whether the bus works on a single address (no address needed in the event call), or whether you can use addresses to send events to handlers listening on a certain address. For now, I'd like to have a simple bus where, if you send an event, all handlers receive it.
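In case it helps frame the question, here's a stripped-down sketch of what I imagine a single-address, broadcast-to-all bus might look like (names are made up and this is surely nothing like LY's real implementation; it's just the CRTP-plus-static-handler-list trick I'm guessing at):
#include <vector>
#include <algorithm>

// Hypothetical stand-in for EbusThingy. The bus interface derives from this via CRTP,
// and a static handler list lives in the base, one list per bus type.
template <typename Interface>
class SimpleEBus
{
public:
    class Handler : public Interface
    {
    public:
        void BusConnect() { Handlers().push_back(this); }
        void BusDisconnect()
        {
            auto& h = Handlers();
            h.erase(std::remove(h.begin(), h.end(), this), h.end());
        }
        virtual ~Handler() { BusDisconnect(); }
    };

    // Calls the given member function on every connected handler.
    template <typename MemFn, typename... Args>
    static void Broadcast(MemFn fn, Args... args)
    {
        for (Handler* h : Handlers())
            (h->*fn)(args...);
    }

private:
    static std::vector<Handler*>& Handlers()
    {
        static std::vector<Handler*> handlers;
        return handlers;
    }
};

// The one-virtual-function bus then looks just like the LY version:
class NewEbusDealio : public SimpleEBus<NewEbusDealio>
{
public:
    virtual void OnStuffHappening(int value) = 0;
};

// ...and a sender would call: NewEbusDealio::Broadcast(&NewEbusDealio::OnStuffHappening, 42);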
I'm not familiar with the EBus you've given, but event buses should be similar: one side creates an event and puts it into a list, and the other side picks up events one by one and reacts.
As modern C++ gives us closures, it is much easier to implement an event bus now.
Below, I'm going to give a simple example, where a looper acts as the event bus.
Be aware that mutexes and condition variables are necessary for this looper in production.
#include <queue>
#include <list>
#include <thread>
#include <functional>
class ThreadWrapper {
public:
ThreadWrapper() = default;
~ThreadWrapper() { Detach(); }
inline void Attach(std::thread &&th) noexcept {
Detach();
routine = std::move(th);
}
inline void Detach() noexcept {
if (routine.joinable()) {
routine.join();
}
}
private:
std::thread routine{};
};
class Looper {
public:
// a Task is a unit of work that will be executed on the looper's thread
typedef std::function<void()> Task;
typedef std::list<Task> MsgQueue;
Looper() = default;
~Looper() {
Deactivate();
}
// Post a method
void Post(const Task &tsk) noexcept {
Post(tsk, false);
}
// Post a method
void Post(const Task &tsk, bool flush) noexcept {
if(!running) {
return;
}
if (flush) msg_queue.clear();
msg_queue.push_back(tsk);
}
// Start looping
void Activate() noexcept {
if (running) {
return;
}
msg_queue.clear();
looping = true;
worker.Attach(std::thread{&Looper::Entry, this});
running = true;
}
// stop looping
void Deactivate() noexcept {
{
if(!running) {
return;
}
looping = false;
Post([] { ; }, true);
worker.Detach();
running = false;
}
}
bool IsActive() const noexcept { return running; }
private:
void Entry() noexcept {
Task tsk;
while (looping) {
// guard against popping from an empty queue (in production a condition variable belongs here)
if (msg_queue.empty()) continue;
tsk = msg_queue.front();
msg_queue.pop_front();
tsk();
}
}
MsgQueue msg_queue{};
ThreadWrapper worker{};
volatile bool running{false};
volatile bool looping{false};
};
An example of using this Looper:
class MySpeaker: public Looper{
public:
// Call SayHi without blocking current thread
void SayHiAsync(const std::string &msg){
Post([this, msg] {
SayHi(msg);
});
}
private:
// SayHi will be called in the working thread
void SayHi(const std::string &msg) {
std::cout << msg << std::endl;
}
};
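As noted above, mutexes and condition variables are needed in production. A rough sketch of what a guarded Post/Entry pair could look like (queue_mutex and queue_cv are assumed std::mutex / std::condition_variable members added to Looper):
#include <mutex>
#include <condition_variable>

// Sketch only: same logic as above, but the queue is protected and Entry blocks
// instead of popping from a possibly empty list.
void Looper::Post(const Task &tsk, bool flush) noexcept {
    if (!running) {
        return;
    }
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        if (flush) msg_queue.clear();
        msg_queue.push_back(tsk);
    }
    queue_cv.notify_one();   // wake the worker thread
}

void Looper::Entry() noexcept {
    while (looping) {
        Task tsk;
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            queue_cv.wait(lock, [this] { return !msg_queue.empty(); });
            tsk = msg_queue.front();
            msg_queue.pop_front();
        }
        tsk();   // run the task outside the lock
    }
}
Deactivate() still works with this: it posts a dummy task, which wakes the worker, and the loop then sees looping == false and exits.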
I'm writing an event handler that listens for key presses, then calls a handler on any pressed keys. My goal was to allow something like this:
Entity player(0, 0);
EventHandler eh([&](char c) {
switch (c) {
case 'W': {
player.moveBy(0,-1);
break;
}
case 'S': {
player.moveBy(0, 1);
break;
}
case 'A': {
player.moveBy(-1, 0);
break;
}
case 'D': {
player.moveBy(1, 0);
break;
}
}
});
where an Entity is just a movable point-like object.
I was all set, then I realized that lambdas with referential captures can't be made into a function pointer (the reason makes sense, in retrospect).
The only alternative I could find was to use std::/boost::function, but the syntax is rather ugly, and apparently they come with a decent amount of overhead.
What's a good alternative to this system? I want to be able to pass in some kind of "handler" to EventHandler that accepts a character, and is capable of carrying out side effects on some external scope.
In the below source, LockedQueue is a FIFO queue that's been made thread safe using mutexes.
EventHandler.h:
#ifndef EVENT_HANDLER_H
#define EVENT_HANDLER_H
#include <vector>
#include <atomic>
#include "LockedQueue.h"
class EventHandler {
typedef void(*KeyHandler)(char);
std::atomic<bool> listenOnKeys{ false };
std::vector<char> keysToCheck;
LockedQueue<char> pressedKeys;
KeyHandler keyHandler = nullptr;
void updatePressedKeys();
void actOnPressedKeys();
public:
EventHandler();
EventHandler(KeyHandler);
~EventHandler();
void setKeyHandler(KeyHandler);
void setKeysToListenOn(std::vector<char>);
void listenForPresses(int loopMSDelay = 100);
void stopListening();
};
#endif
EventHandler.cpp:
#include "EventHandler.h"
#include <windows.h>
#include <WinUser.h>
#include <thread>
#include <chrono>
#include <stdexcept>
EventHandler::EventHandler() {
}
EventHandler::EventHandler(KeyHandler handler) {
keyHandler = handler;
}
EventHandler::~EventHandler() {
stopListening();
}
void EventHandler::updatePressedKeys() {
for (char key : keysToCheck) {
if (GetAsyncKeyState(key)) {
pressedKeys.push(key);
}
}
}
void EventHandler::actOnPressedKeys() {
while (!pressedKeys.empty()) {
//Blocking if the queue is empty
//We're making sure ahead of time though that it's not
keyHandler(pressedKeys.waitThenPop());
}
}
void EventHandler::setKeyHandler(KeyHandler handler) {
keyHandler = handler;
}
void EventHandler::setKeysToListenOn(std::vector<char> newListenKeys) {
if (listenOnKeys) {
throw std::runtime_error(
"Cannot change the listened-on keys while listening"
);
//This could be changed to killing the thread by setting
// listenOnKeys to false, changing the keys, then restarting
// the listening thread. I can't see that being necessary though.
}
//To-Do:
//Make sure all the keys are in upper-case so they're
// compatible with GetAsyncKeyState
keysToCheck = newListenKeys;
}
void EventHandler::listenForPresses(int loopMSDelay) {
listenOnKeys = true;
std::thread t([&]{
do {
updatePressedKeys();
actOnPressedKeys();
std::this_thread::sleep_for(std::chrono::milliseconds(loopMSDelay));
} while (listenOnKeys);
});
t.join();
}
void EventHandler::stopListening() {
listenOnKeys = false;
}
EDIT:
Whoops. Note that listenForPresses is "broken" because I'm joining inside the function, so control never leaves it. I'm going to need to figure out a workaround. That doesn't change the question, but the code isn't testable in its current state.
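One workaround I'm considering (untested sketch, assuming a std::thread member named listenThread is added to the class): start the thread in listenForPresses and only join it in stopListening, capturing the delay by value so nothing dangles:
void EventHandler::listenForPresses(int loopMSDelay) {
    listenOnKeys = true;
    // capture by value; the original [&] would dangle once this function returns
    listenThread = std::thread([this, loopMSDelay] {
        do {
            updatePressedKeys();
            actOnPressedKeys();
            std::this_thread::sleep_for(std::chrono::milliseconds(loopMSDelay));
        } while (listenOnKeys);
    });
}
void EventHandler::stopListening() {
    listenOnKeys = false;
    if (listenThread.joinable()) {
        listenThread.join();   // wait for the loop to notice the flag and exit
    }
}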
The only alternative I could find was to use std::/boost::function, but the syntax is rather ugly, and apparently they come with a decent amount of overhead.
The overhead is decent compared to an inlinable function, but it's measured in nanoseconds. If you're only calling the function 60 times a second, the overhead is immeasurable.
That said, if you need to be able to change the event handler at any time, your only alternative is virtual method calls, with similar overhead. The performance impact of these choices are explored thoroughly in this article: Member Function Pointers and the Fastest Possible C++ Delegates.
If you are happy to restrict the EventHandler object to executing a single block of code defined at compile-time, use templates to store an instance of the compiler's generated type for the lambda; this should allow the compiler to perform more optimisations, as it can know for sure what code is being called. In this case, KeyHandler becomes a template type, and the type of a lambda can either be found with the decltype keyword:
template <class KeyHandler>
class EventHandler {
// elided
};
void EventLoopDecltype() {
Entity player(0, 0);
auto myEventHandler = [&](char ch) { /* elided */ };
EventHandler<decltype(myEventHandler)> eh(myEventHandler);
}
or (more conveniently, for the caller) inferred as an argument to a template function:
template <class KeyHandler>
EventHandler<KeyHandler> MakeEventHandler(KeyHandler handler) {
return EventHandler<KeyHandler>(handler);
}
void EventLoopInferred() {
Entity player(0, 0);
auto eh = MakeEventHandler([&](char c) {
// elided
});
}
std::function and boost::function do not come with any overhead that's remotely meaningful considering how light your usage of them would be in this case. You've made a critical error by discarding the solution before determining that the purported downsides actually apply to you.
You could of course also use a template as described in the other answer, but there's really no need to.
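For concreteness, the change to the class in the question is tiny; something like this (sketch, everything else unchanged):
#include <functional>

class EventHandler {
    typedef std::function<void(char)> KeyHandler;   // was: typedef void(*KeyHandler)(char);
    KeyHandler keyHandler;                           // empty by default
    // ... rest of the members and methods as before ...
public:
    explicit EventHandler(KeyHandler handler) : keyHandler(std::move(handler)) {}
    void setKeyHandler(KeyHandler handler) { keyHandler = std::move(handler); }
};

// Capturing lambdas now work directly:
// EventHandler eh([&](char c) { /* move the player */ });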
I've been reading some C++ books (Sutter, Meyers) lately, which motivated me to start using smart pointers (and object destruction in general) more effectively. But now I'm not sure how to fix what I have.
Specifically, I now have a IntroScene class which inherits from both Scene and InputListener.
Scene isn't really relevant, but the InputListener subscribes to an InputManager on construction,
and unsubs again on destruction.
class IntroScene : public sfg::Scene, public sfg::InputListener {
/*structors, inherited methods*/
virtual bool OnEvent(sf::Event&) override; //inputlistener
};
But now, if the InputManager sends events over to a scene, and the scene decides to replace itself because of it, I have a function running on an object that no longer exists.
bool IntroScene::OnEvent(sf::Event& a_Event) {
if (a_Event.type == sf::Event::MouseButtonPressed) {
sfg::Game::Get()->SceneMgr()->Replace(ScenePtr(new IntroScene()));
} //here the returned smartpointer kills the scene/listener
return true; // handled; swallow the event
}
Side-question: Does that matter? I googled it but did not find a definite yes or no. I do know 100%
no methods are invoked on the destroyed object after it is destroyed.
I can store the Replace() return value until the end of the OnEvent() method if I have to.
The real problem is InputListener
InputListener::InputListener() {
Game::Get()->InputMgr()->Subscribe(this);
}
InputListener::~InputListener() {
if (m_Manager) m_Manager->Unsubscribe(this);
}
since it is called during OnEvent(), which is called by InputManager during HandleEvents()
void InputManager::HandleEvents(EventQueue& a_Events) const {
while (!a_Events.empty()) {
sf::Event& e = a_Events.front();
for (auto& listener : m_Listeners) {
if (listener->OnEvent(e)) //swallow event
break;
}
a_Events.pop();
}
}
void InputManager::Subscribe(InputListener* a_Listener) {
m_Listeners.insert(a_Listener);
a_Listener->m_Manager = this;
}
void InputManager::Unsubscribe(InputListener* a_Listener) {
m_Listeners.erase(a_Listener);
a_Listener->m_Manager = nullptr;
}
So when the new Scene+Listener is created, and when the old one is destroyed, the list m_Listeners is modified during the loop. So the thing breaks.
I've thought about setting a flag when starting and stopping the loop, and storing (un)subscriptions that happen while it is set in a separate list, and handle that after. But it feels a bit hacky.
So, how can I actually redesign this properly to prevent these kind of situations? Thanks in advance.
EDIT, Solution:
I ended up going with the loop flags and deferred entry list (inetknight's answer below)
for subscription only, since that can be safely done later.
Unsubscriptions have to be dealt with immediately, so instead of storing raw pointers I store a (pointer-mutable bool) pair (mutable since a set only returns a const_iterator). I set the bool to false when that happens and check for it in the event loop (see dave's comment below).
Not sure it's the cleanest possible solution, but it works like a charm. Thanks a lot, guys.
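Roughly, the listener storage and event loop ended up looking like this (simplified sketch, names made up):
// Each entry pairs the listener with an 'alive' flag; mutable because a std::set
// only ever hands out const references to its elements.
struct ListenerEntry {
    InputListener* listener;
    mutable bool alive = true;
    bool operator<(const ListenerEntry& other) const { return listener < other.listener; }
};

void InputManager::HandleEvents(EventQueue& a_Events) {
    while (!a_Events.empty()) {
        sf::Event& e = a_Events.front();
        for (auto& entry : m_Listeners) {           // m_Listeners: std::set<ListenerEntry>
            if (!entry.alive) continue;             // unsubscribed during this loop; skip it
            if (entry.listener->OnEvent(e)) break;  // swallow event
        }
        a_Events.pop();
        // deferred subscriptions and removal of dead entries are processed here
    }
}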
Side-question: Does that matter? I googled it but did not find a definite yes or no. I do know 100% no methods are invoked on the destroyed object after it is destroyed. I can store the Replace() return value until the end of the OnEvent() method if I have to.
If you know 100% that no methods are invoked on the destroyed object and none of its member variables are accessed, then it's safe. Whether or not it's intended is up to you.
You could have another list of objects which have requested to be un/subscribed. Then after you've told everyone in the list of events, you would then process the list of un/subscription requests before continuing on to the next event.
/* this should be a member of InputManager however you did not provide a class definition */
typedef std::pair<InputListener *, bool> SubscriptionRequest;
bool handleEventsActive = false;
std::vector<SubscriptionRequest> deferredSubscriptionRequests;
void InputManager::HandleEvents(EventQueue& a_Events) const {
// process events
handleEventsActive = true;
while (!a_Events.empty()) {
sf::Event& e = a_Events.front();
for (auto& listener : m_Listeners)
{
//swallow event
if (listener->OnEvent(e)) {
break;
}
}
a_Events.pop();
// process deferred subscription requests that occurred during this event
while ( not deferredSubscriptionRequests.empty() ) {
SubscriptionRequest request = deferredSubscriptionRequests.back();
deferredSubscriptionRequests.pop_back();
DoSubscriptionRequest(request);
}
}
handleEventsActive = false;
}
void InputManager::DoSubscriptionRequest(SubscriptionRequest &request) {
if ( request.second ) {
m_Listeners.insert(request.first);
request.first->m_Manager = this;
} else {
m_Listeners.erase(request.first);
request.first->m_Manager = nullptr;
}
}
void InputManager::Subscribe(InputListener* a_Listener)
{
SubscriptionRequest request{a_Listener, true};
if ( handleEventsActive ) {
deferredSubscriptionRequests.push_back(request);
} else {
DoSubscriptionRequest(request);
}
}
void InputManager::Unsubscribe(InputListener* a_Listener)
{
SubscriptionRequest request{a_Listener, false};
if ( handleEventsActive ) {
deferredSubscriptionRequests.push_back(request);
} else {
DoSubscriptionRequest(request);
}
}
I am implementing event-driven message processing logic for a speed-sensitive application. I have various pieces of business logic wrapped into a lot of Reactor classes:
class TwitterSentimentReactor{
void on_new_post(PostEvent&);
void on_new_comment(CommentEvent&);
};
class FacebookSentimentReactor{
void on_new_post(PostEvent&);
void on_new_comment(CommentEvent&);
};
class YoutubeSentimentReactor{
void on_new_post(PostEvent&);
void on_new_comment(CommentEvent&);
void on_new_plus_one(PlusOneEvent&);
};
Let's say there are 8 such event types; each Reactor responds to a subset of them.
The core program has 8 'entry points' for the messages, which are hooked up with some low-level socket processing library, for instance:
on_new_post(PostEvent& pe){
youtube_sentiment_reactor_instance->on_new_post(pe);
twitter_sentiment_reactor_instance->on_new_post(pe);
facebook_sentiment_reactor_instance->on_new_post(pe);
}
I am thinking about using std::function and std::bind, to build a std::vector<std::function<>>, then I loop through the vector to call each call-back function.
However, when I tried it, std::function proved not to be fast enough. Is there a fast yet simple solution here? As I mentioned earlier, this is VERY speed sensitive, so I want to avoid using virtual functions and inheritance, to cut the v-table lookup.
Comments are welcome. Thanks.
I think that in your case it is easier to use an interface, as you know you are going to call simple member functions that exactly match the expected parameters:
struct IReactor {
virtual void on_new_post(PostEvent&) =0;
virtual void on_new_comment(CommentEvent&) =0;
virtual void on_new_plus_one(PlusOneEvent&) =0;
};
And then make each of your classes inherit and implement this interface.
You can have a simple std::vector<IReactor*> to manage the callbacks.
And remember that in C++, interfaces are just ordinary classes, so you can even write default implementations for some or all of the functions:
struct IReactor {
virtual void on_new_post(PostEvent&) {}
virtual void on_new_comment(CommentEvent&) {}
virtual void on_new_plus_one(PlusOneEvent&) {}
};
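Dispatch then is just a plain loop over the interface pointers, e.g. (sketch using the names from the question):
#include <vector>

// Each reactor implements only the callbacks it cares about; the rest keep the
// empty defaults from the IReactor variant above.
struct TwitterSentimentReactor : IReactor {
    void on_new_post(PostEvent& pe) override { /* sentiment logic */ }
    void on_new_comment(CommentEvent& ce) override { /* sentiment logic */ }
};

std::vector<IReactor*> reactors;   // filled once at start-up

void on_new_post(PostEvent& pe) {
    for (IReactor* r : reactors)
        r->on_new_post(pe);
}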
std::function's main performance issue is that whenever you need to store some context (such as bound arguments, or the state of a lambda), memory is required, which often translates into a memory allocation. Also, the current library implementations may not have been optimized to avoid this allocation.
That being said:
Is it too slow? You will have to measure it for yourself, in your context.
Are there alternatives? Yes, plenty!
As an example, why don't you use a base class Reactor which has all the required callbacks defined (doing nothing by default), and then derive from it to implement the required behavior? You could then easily have a std::vector<std::unique_ptr<Reactor>> to iterate over!
Also, depending on whether the reactors need state (or not), you may gain a lot by not allocating objects for them and just using plain functions instead.
It really, really, depends on the specific constraints of your projects.
If you need fast delegates and an event system, take a look at Offirmo:
It is as fast as the "Fastest possible delegates", but it has 2 major advantages:
1) It is a ready and well-tested library (no need to write your own library from an article)
2) It does not rely on compiler hacks (fully compliant with the C++ standard)
https://github.com/Offirmo/impossibly-fast-delegates
If you need a managed signal/slot system, I have developed my own (C++11 only).
It is not as fast as Offirmo, but it is fast enough for any real scenario; most importantly, it is an order of magnitude faster than Qt or Boost signals and is simple to use.
Signal is responsible for firing events.
Slots are responsible for holding callbacks.
Connect as many Slots as you wish to a Signal.
Don't worry about lifetime (everything auto-disconnects).
Performance considerations:
The overhead of a std::function is quite low (and improving with every compiler release). Actually it is just a bit slower than a regular function call. My own signal/slot library is capable of 250 million callbacks/second (I measured the pure overhead) on a 2 GHz processor, and it uses std::function.
Since your code has to do with network stuff, keep in mind that your main bottleneck will be the sockets.
The second bottleneck is instruction cache latency. It does not matter whether you use Offirmo (a few assembly instructions) or std::function: most of the time is spent fetching instructions from the L1 cache. The best optimization is to keep all the callback code compiled in the same translation unit (same .cpp file), and possibly in the same order in which the callbacks are called (or mostly the same order); after you do that you'll see only a very tiny improvement using Offirmo over std::function (seriously, you CAN'T BE faster than Offirmo).
Keep in mind that any function doing something really useful will be at least a few dozen instructions (especially if dealing with sockets: you'll have to wait for system calls to complete and for processor context switches), so the overhead of the callback system will be negligible.
I can't comment on the actual speed of the method that you are using, other than to say:
Premature optimization does not usually give you what you expect.
You should measure the performance contribution before you start slicing and dicing. If you know it won't work before hand, then you can search now for something better or go "suboptimal" for now but encapsulate it so it can be replaced.
If you are looking for a general event system that does not use std::function (but does use virtual methods), you can try this one:
Notifier.h
/*
The Notifier is a singleton implementation of the Subject/Observer design
pattern. Any class/instance which wishes to participate as an observer
of an event can derive from the Notified base class and register itself
with the Notifier for enumerated events.
Notifier derived classes implement variants of the Notify function:
bool Notify(const NOTIFIED_EVENT_TYPE_T& event, variants ....)
There are many variants possible. Register for the message
and create the interface to receive the data you expect from
it (for type safety).
All the variants return true if they process the event, and false
if they do not. Returning false will be considered an exception/
assertion condition in debug builds.
Classes derived from Notified do not need to deregister (though it may
be a good idea to do so) as the base class destructor will attempt to
remove itself from the Notifier system automatically.
The event type is an enumeration and not a string as it is in many
"generic" notification systems. In practical use, this is for a closed
application where the messages will be known at compile time. This allows
us to increase the speed of the delivery by NOT having a
dictionary keyed lookup mechanism. Some loss of generality is implied
by this.
This class/system is NOT thread safe, but could be made so with some
mutex wrappers. It is safe to call Attach/Detach as a consequence
of calling Notify(...).
*/
/* This is the base class for anything that can receive notifications.
*/
typedef enum
{
NE_MIN = 0,
NE_SETTINGS_CHANGED,
NE_UPDATE_COUNTDOWN,
NE_UDPATE_MESSAGE,
NE_RESTORE_FROM_BACKGROUND,
NE_MAX,
} NOTIFIED_EVENT_TYPE_T;
class Notified
{
public:
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const uint32& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const bool& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const string& value)
{ return false; };
virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const double& value)
{ return false; };
virtual ~Notified();
};
class Notifier : public SingletonDynamic<Notifier>
{
public:
private:
typedef vector<NOTIFIED_EVENT_TYPE_T> NOTIFIED_EVENT_TYPE_VECTOR_T;
typedef map<Notified*,NOTIFIED_EVENT_TYPE_VECTOR_T> NOTIFIED_MAP_T;
typedef map<Notified*,NOTIFIED_EVENT_TYPE_VECTOR_T>::iterator NOTIFIED_MAP_ITER_T;
typedef vector<Notified*> NOTIFIED_VECTOR_T;
typedef vector<NOTIFIED_VECTOR_T> NOTIFIED_VECTOR_VECTOR_T;
NOTIFIED_MAP_T _notifiedMap;
NOTIFIED_VECTOR_VECTOR_T _notifiedVector;
NOTIFIED_MAP_ITER_T _mapIter;
// This vector keeps a temporary list of observers that have completely
// detached since the current "Notify(...)" operation began. This is
// to handle the problem where a Notified instance has called Detach(...)
// because of a Notify(...) call. The removed instance could be a dead
// pointer, so don't try to talk to it.
vector<Notified*> _detached;
int32 _notifyDepth;
void RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& orgEventTypes, NOTIFIED_EVENT_TYPE_T eventType);
void RemoveNotified(NOTIFIED_VECTOR_T& orgNotified, Notified* observer);
public:
virtual void Reset();
virtual bool Init() { Reset(); return true; }
virtual void Shutdown() { Reset(); }
void Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
// Detach for a specific event
void Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
// Detach for ALL events
void Detach(Notified* observer);
// This template function (defined in the header file) allows you to
// add interfaces to Notified easily and call them as needed. Variants
// will be generated at compile time by this template.
template <typename T>
bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const T& value)
{
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
// Keep a copy of the list. If it changes while iterating over it because of a
// deletion, we may miss an object to update. Instead, we keep track of Detach(...)
// calls during the Notify(...) cycle and ignore anything detached because it may
// have been deleted.
NOTIFIED_VECTOR_T notified = _notifiedVector[eventType];
// If a call to Notify leads to a call to Notify, we need to keep track of
// the depth so that we can clear the detached list when we get to the end
// of the chain of Notify calls.
_notifyDepth++;
// Loop over all the observers for this event.
// NOTE that the the size of the notified vector may change if
// a call to Notify(...) adds/removes observers. This should not be a
// problem because the list is a simple vector.
bool result = true;
for(int idx = 0; idx < notified.size(); idx++)
{
Notified* observer = notified[idx];
if(_detached.size() > 0)
{ // Instead of doing the search for all cases, let's try to speed it up a little
// by only doing the search if more than one observer dropped off during the call.
// This may be overkill or unnecessary optimization.
switch(_detached.size())
{
case 0:
break;
case 1:
if(_detached[0] == observer)
continue;
break;
default:
if(std::find(_detached.begin(), _detached.end(), observer) != _detached.end())
continue;
break;
}
}
result = result && observer->Notify(eventType,value);
assert(result == true);
}
// Decrement this each time we exit.
_notifyDepth--;
if(_notifyDepth == 0 && _detached.size() > 0)
{ // We reached the end of the Notify call chain. Remove the temporary list
// of anything that detached while we were Notifying.
_detached.clear();
}
assert(_notifyDepth >= 0);
return result;
}
/* Used for CPPUnit. Could create a Mock...maybe...but this seems
* like it will get the job done with minimal fuss. For now.
*/
// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> GetEvents(Notified* observer);
// Return all objects registered for this event.
vector<Notified*> GetNotified(NOTIFIED_EVENT_TYPE_T event);
};
Notifier.cpp
#include "Notifier.h"
void Notifier::Reset()
{
_notifiedMap.clear();
_notifiedVector.clear();
_notifiedVector.resize(NE_MAX);
_detached.clear();
_notifyDepth = 0;
}
void Notifier::Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter == _notifiedMap.end())
{ // Registering for the first time.
NOTIFIED_EVENT_TYPE_VECTOR_T eventTypes;
eventTypes.push_back(eventType);
// Register it with this observer.
_notifiedMap[observer] = eventTypes;
// Register the observer for this type of event.
_notifiedVector[eventType].push_back(observer);
}
else
{
NOTIFIED_EVENT_TYPE_VECTOR_T& events = _mapIter->second;
bool found = false;
for(int idx = 0; idx < events.size() && !found; idx++)
{
if(events[idx] == eventType)
{
found = true;
break;
}
}
if(!found)
{
events.push_back(eventType);
_notifiedVector[eventType].push_back(observer);
}
}
}
void Notifier::RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes, NOTIFIED_EVENT_TYPE_T eventType)
{
int foundAt = -1;
for(int idx = 0; idx < eventTypes.size(); idx++)
{
if(eventTypes[idx] == eventType)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
eventTypes.erase(eventTypes.begin()+foundAt);
}
}
void Notifier::RemoveNotified(NOTIFIED_VECTOR_T& notified, Notified* observer)
{
int foundAt = -1;
for(int idx = 0; idx < notified.size(); idx++)
{
if(notified[idx] == observer)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
notified.erase(notified.begin()+foundAt);
}
}
void Notifier::Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{ // Was registered
// Remove it from the map.
RemoveEvent(_mapIter->second, eventType);
// Remove it from the vector
RemoveNotified(_notifiedVector[eventType], observer);
// If there are no events left, remove this observer completely.
if(_mapIter->second.size() == 0)
{
_notifiedMap.erase(_mapIter);
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
}
}
void Notifier::Detach(Notified* observer)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes = _mapIter->second;
for(int idx = 0; idx < eventTypes.size();idx++)
{
NOTIFIED_EVENT_TYPE_T eventType = eventTypes[idx];
// Remove this observer from the Notified list for this event type.
RemoveNotified(_notifiedVector[eventType], observer);
}
_notifiedMap.erase(_mapIter);
}
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
Notified::~Notified()
{
Notifier::Instance().Detach(this);
}
// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> Notifier::GetEvents(Notified* observer)
{
vector<NOTIFIED_EVENT_TYPE_T> result;
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
result = _mapIter->second;
}
return result;
}
// Return all objects registered for this event.
vector<Notified*> Notifier::GetNotified(NOTIFIED_EVENT_TYPE_T event)
{
return _notifiedVector[event];
}
NOTES:
You must call Init() on the class before using it.
You don't have to use it as a singleton, or use the singleton template I used here. That is just to get a reference/init/shutdown mechanism in place.
This is from a larger code base. You can find some other examples on github here.
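A usage sketch (the observer class and the value passed here are made-up examples; it assumes Init() has been called as noted above):
// An observer registers for an event and receives it through the matching Notify overload.
class CountdownDisplay : public Notified
{
public:
    CountdownDisplay()
    {
        Notifier::Instance().Attach(this, NE_UPDATE_COUNTDOWN);
    }
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const uint32& value)
    {
        // value carries the payload for NE_UPDATE_COUNTDOWN
        return true;   // handled; returning false asserts in debug builds
    }
};

// Somewhere in the game loop:
// Notifier::Instance().Notify(NE_UPDATE_COUNTDOWN, (uint32)secondsLeft);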
There was a topic on SO where virtually all the mechanisms available in C++ were enumerated, but I can't find it.
It had a list something like this:
function pointers
functors: objects with an overloaded operator(), e.g. a member function pointer wrapped together with a this pointer
Fast Delegates
Impossibly Fast Delegates
boost::signals
Qt signal-slots
Fast delegates and boost::function performance comparison article: link
Oh, by the way, premature optimization..., profile first then optimize, 80/20-rule, blah-blah, blah-blah, you know ;)
Happy coding!
Unless you can parameterize your handlers statically and get them inlined, std::function<...> is your best option. When the exact type needs to be erased or you need to call a run-time-specified function, you'll have an indirection and, hence, an actual function call without the ability to get things inlined. std::function<...> does exactly this, and you won't do better.
I'm writing a multi-threaded game engine, and I'm wondering about best practices around waiting for threads. It occurs to me that there could be much better options out there than what I've implemented, so I'm wondering what you guys think.
Option A) "wait()" method gets called at the top of every other method in the class. This is my current implementation, and I'm realizing it's not ideal.
class Texture {
public:
Texture(const char *filename, bool async = true);
~Texture();
void Render();
private:
SDL_Thread *thread;
const char *filename;
void wait();
static int load(void *data);
};
void Texture::wait() {
if (thread != NULL) {
SDL_WaitThread(thread, NULL);
thread = NULL;
}
}
int Texture::load(void *data) {
Texture *self = static_cast<Texture *>(data);
// Load the Image Data in the Thread Here...
return 0;
}
Texture::Texture(const char *filename, bool async) {
this->filename = filename;
if (async) {
thread = SDL_CreateThread(load, NULL, this);
} else {
thread = NULL;
load(this);
}
}
Texture::~Texture() {
// Unload the Thread and Texture Here
}
void Texture::Render() {
wait();
// Render the Texture Here
}
Option B) Convert the "wait()" method in to a function pointer. This would save my program from a jmp at the top of every other method, and simply check for "thread != NULL" at the top of every method. Still not ideal, but I feel like the less jumps, the better. (I've also considered just using the "inline" keyword on the function... but would this include the entire contents of the wait function when all I really need is the "if (thread != NULL)" check to determine whether the rest of the code should be executed or not?)
Option C) Convert all of the class' methods in to function pointers, and ditch the whole concept of calling "wait()" except while actually loading the texture. I see advantages and disadvantages to this approach... namely, this feels the most difficult to implement and keep track of. Admittedly, my knowledge of the inner workings on GCC's optimizations and assembly and especially memory->cpu->memory communication isn't the best, so using a bunch of function pointers might actually be slower than a properly defined class.
Anyone have any even better ideas?
Best practice is often not reinventing the wheel :D
You might want to take a look at the std::thread library, if you have a compiler that supports C++11. Everything you need is already implemented and made as safe as possible (which is not really safe considering the topic).
In particular, your wait() function can be implemented with a std::condition_variable.
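For instance, the Texture example could be expressed with a std::thread member plus a condition_variable-guarded "loaded" flag; a rough sketch (member names invented, not drop-in code):
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>

class Texture {
public:
    explicit Texture(const std::string& filename) {
        loader = std::thread([this, filename] {   // filename copied into the lambda
            // ... load the image data here ...
            {
                std::lock_guard<std::mutex> lock(m);
                loaded = true;
            }
            cv.notify_all();
        });
    }
    ~Texture() {
        if (loader.joinable()) loader.join();
    }
    void Render() {
        wait();   // blocks only until the initial load has finished
        // ... render the texture here ...
    }
private:
    void wait() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return loaded; });
    }
    std::mutex m;
    std::condition_variable cv;
    bool loaded = false;
    std::thread loader;
};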
Boost thread library offers pretty much the same functionality.
I don't know about the library you're using, sorry :D