I'm trying to design a simple event system that, "basically", looks like this:
The observer entity keeps a list of all objects that need to be notified. It also stores a queue of fired events. Events are then processed by iterating through this object list.
An object keeps a list of observers it sends events to. Each particular object that inherits from the base object can fire its own specialised events (key, mouse, collision, etc.). The object also has a HandleEvent(..) method with different overloads for compile-time type detection, instead of using dynamic_cast.
What would be better when firing events: creating them on the stack and passing them by reference, or allocating them dynamically on the heap, using dynamic_cast, and letting the observer deallocate them once they have been processed by the objects that can handle them? (E.g. isn't dynamic allocation unnecessary when an event can be fired quite often? And isn't dynamic casting avoidable?)
Also, this is not quite a thread-safe scenario.
Do you need dynamic allocation? No. Typically, you want
void fireEvent()
{
    Event ev;
    for (Observer* observer : observers)
        observer->trigger(ev);
}
And the observer's signature
void trigger(const Event& ev);
Note that "passing references to them" isn't true, pedantically speaking. It's actually "passing them by reference".
Related
I am currently creating my own GUI library based on SFML.
At the moment I am working on a button. When creating a button, you also have to specify a callback, which is a function executed on the button click.
Now I'm asking myself what the disadvantages are of using just a pointer to a function as a button callback, because I don't know any popular GUI library that does it this simply either.
If the callback function is a long-running process, I would execute it in a new thread, but I'm not sure about that at the moment.
So, what would be reasons not to use such a simple solution, and especially, what would be a better way?
It's a tricky problem!
Function pointers are simple to implement on the sender side, but they are difficult to use on the receiver side because they don't carry any context.
One issue is that a plain function pointer cannot point to a non-static member function. That's why you often see (C-style) frameworks pass an arbitrary void *userData to their callbacks, so you can cast your this pointer and retrieve it that way. This still requires you to write a static wrapper function that casts the pointer back and calls the member function.
A more modern solution would be to use std::function. This can contain a regular function pointer, a member function pointer, but also a lambda or a functor.
However, when you add context like this (or in some other way), you quickly run into difficulties with lifetimes. If the receiving class is destroyed before the sender, what is supposed to happen? If you don't do anything, this situation results in undefined behaviour. A solution is to track, on the receiver side, which events the receiver is subscribed to, and unbind them before the receiver is destroyed. This needs to be done in both directions: when the sender is destroyed, it also needs to tell the receiver to forget about it; otherwise the receiver would later try to unbind an event that no longer exists.
And I haven't even begun to think about multithreading yet...
There are libraries that solve these problems in various ways, for example eventpp (just found through a web search, this is not an endorsement).
Another one to mention is the Qt toolkit, which went so far as to write its own small signals-and-slots extension to the C++ language (implemented as a code generator and a pile of macros) to solve this problem in a very ergonomic way.
what the disadvantages are of using just a pointer to a function as a button-callback
Passing some context argument to that function would come in handy.
I mean, the UI may have a lot of buttons performing the same action on various objects. Think of a "send message" button next to each nick in a friends list.
So you may want your button to pass some context arguments to the call.
But since we're talking C++, this'd better be abstracted as
struct IButtonAction
{
    virtual ~IButtonAction() {}
    virtual void OnAttached() = 0;
    virtual void OnDetached() = 0;
    virtual void OnClick() = 0;
};
And let the client code implement this interface, storing whatever Arg1, Arg2, etc. it needs in each instance object.
The button class would call OnAttached/OnDetached when it begins/ends using the pointer to an instance of this callback interface. These calls must be paired. Client implementation of these methods may perform lifetime management and synchronization with OnClick, if required.
The OnClick method performs the action.
I don't think the button should bother with threads. It's the responsibility of the client code to decide whether to spawn a thread for a lengthy action.
TL;DR
How do I correctly pass information, wrapped as a QObject to QML in a signal that might be emitted with high frequency, reducing overhead, ensuring the object/reference outlives at least the execution of the connected slots?
I have a C++ QObject registered as QML type. This object has some signal
void someSignal(InformationQObject* someInformation)
in which I don't pass all the information in separate parameters but in one object, similar to the signals found e.g. in MouseArea, with e.g. the signal
void clicked(QQuickMouseEvent *mouse)
Now I am wondering about the right lifetime management of this someInformation.
So far, in my object, I have a member:
InformationQObject* m_lastInformation
and to send the signal I use:
void sendMySignal(/* possible params */)
{
    delete m_lastInformation;
    m_lastInformation = new InformationQObject(/* right params here */);
    emit someSignal(m_lastInformation);
}
Now this seems wrong.
Reasons: If you look at the implementation of QQuickMouseArea, they do it differently. Seemingly they don't create a new object for each event but recycle the existing one. I find it hard to follow all their sources, but I think this comment from one of their files gives a good reason:
QQuickPointerEvent is used as a long-lived object to store data related to
an event from a pointing device, such as a mouse, touch or tablet event,
during event delivery. It also provides properties which may be used later
to expose the event to QML, the same as is done with QQuickMouseEvent,
QQuickTouchPoint, QQuickKeyEvent, etc. Since only one event can be
delivered at a time, this class is effectively a singleton. We don't worry
about the QObject overhead because the instances are long-lived: we don't
dynamically create and destroy objects of this type for each event.
But this is where it gets too complicated for me to see through how they do it. This comment concerns QQuickPointerEvent. There also exists a QQuickPointerMouseEvent, and in their signal they pass a QQuickMouseEvent*.
The latter is a pointer to one of their members, QQuickMouseEvent quickMouseEvent.
At some point, somehow, this pointer becomes invalid in QML
MouseArea {
    anchors.fill: parent
    property var firstEvent
    onClicked: {
        if (firstEvent === undefined) firstEvent = mouse
        console.log(mouse.x, mouse.y)
        console.log(firstEvent.x, firstEvent.y) // -> TypeError on second and consecutive clicks
    }
}
So there must be some magic happening, that I don't understand.
You are opening a can of worms. QML lifetime management is broken in above-trivial scenarios, and the API doesn't really give you a meaningful way to work around that. The solution for me has been to set the ownership to C++ and manually manage the object lifetime. Primitive, I know, but it is the only solution I've found that avoids deletion of objects still in use and actual hard crashes.
If the mouse area recycled the same event object, it wouldn't become invalid on the subsequent click.
If your code reflects your actual usage scenario, I recommend you simply copy the individual event properties rather than attempting to store the actual event, either in dedicated properties, or as a JS object if you want to avoid overhead and don't need notifications. I tend to use arrays, and rely on the faster index access.
Another solution I can recommend is a Q_GADGET with a PIMPL. Gadgets are limited by design: they cannot be passed as pointers and are always copied by value. But the actual object can contain nothing more than a pointer to the heavier data implementation, serving only as an accessor and interface to that data from QML. This way you can reuse the data, while the object value itself stays negligible: it is essentially just a pointer and involves no dynamic memory allocation whatsoever. You can additionally expose the actual data as an opaque object so it can be copied to other gadgets, and use ref counting to manage the data lifetime.
It's often recommended to use deleteLater() instead of a normal delete in Qt. However, it leads to a problem of dangling objects: they are marked for deletion but still appear in the child lists returned by the Qt API. (Since this behaviour is seriously counterintuitive, my rapidly developing Qt quirks sense made me verify it. They do.) So, is there an idiomatic way to track such objects? I could, of course, use an ad-hoc solution like
class DeleteLaterable
{
public:
    void markForDeletion() { mMarked = true; }
    bool isMarked() const { return mMarked; }

private:
    bool mMarked = false;
};
and publicly inherit everything from it, but it opens a whole different can of virtual inheritance worms. Any better ideas?
As of Qt 5.8, there is no way to track objects scheduled for deletion out of the box.
Calling deleteLater() just posts an event (QDeferredDeleteEvent) to the target object. As there is no way to get the list of pending events, you cannot know which objects will receive a QDeferredDeleteEvent.
To achieve what you want there are several solutions:
Use a "DeleteLaterManager": a class with a deleteObject(QObject *) function that calls deleteLater() and keeps track of the object until it is deleted.
Reimplement QAbstractEventDispatcher and track events of type QEvent::DeferredDelete.
Use a custom event class of type QEvent::DeferredDelete and, instead of calling deleteLater(), call QCoreApplication::postEvent().
If you are only concerned with such objects showing up in child lists, you could simply unset their parent when calling deleteLater().
On a side note, why is this behaviour "seriously counterintuitive"? The documentation of deleteLater() simply states that the object will be scheduled for deletion; why would the parent/child relation be affected?
I have a system that receives messages (data chunks with a type) from somewhere (in this case the network). They are stored in a queue once received. These messages should then get dispatched to handler functions, and this has to happen fast (lots of messages).
Currently the system is designed so that each message type is its own class and overrides a virtual function run(Handler&), which basically calls the correct method in the handler. E.g.:
class PingMessage : public Message {
    // Some member variables
public:
    void run(Handler& handler) {
        handler.handlePing(*this);
    }
};

class Handler {
public:
    void handlePing(const PingMessage& msg) { /* ... */ }
};
In this design, the queue deletes the message after it has been dispatched. The problem is: some handler functions need to store the message to execute it at a later time. Copying the message is not only a waste of memory and time (it gets deleted right after dispatch) but is also sometimes impossible (complex deserialized data structures). So it would be best to pass ownership over to the handler.
I have the feeling, there is a design pattern or best practice for this. But I can't find it.
What I could imagine is calling a generic handler function handleMessage(Type, Message*) that switches on the type and dispatches with the Message static_cast to the right type. Then it is clear, by the convention of passing a pointer, that the handler is responsible for deleting the message. Maybe even use a base class that does the switch and implements all handler functions as empty. If a handler function returns true, handleMessage deletes the message; otherwise it assumes the callee stored it somewhere. But I'm not sure this is the right approach or whether it incurs too much overhead. There seems to be too much room for errors.
Especially as I would have to do two checks for the message type: one for choosing the correct class to deserialize, and one for calling the correct function.
Note: No C++11 available.
Sidenote: There is also something else to it: most handlers just handle the message. So creating it on the heap with new and freeing it right after is probably quite slow (mostly very small messages with just a couple of bytes). Using a handler that deserializes the messages into stack-based objects would be better, but then I'd have to copy them again, which I can't. So should I pass the raw message to the specific handler function and let it do the deserialization as it wishes? That means lots of duplicate code for different handlers... What to do here?
Even though you indicate that you do not have C++11, it does not take a lot of code to implement your own C++03-compatible equivalent of std::shared_ptr. If your application is not multi-threaded, you won't even need to worry about updating the object's reference count in a thread-safe manner.
I don't use Boost, so I can't say authoritatively, but it's possible that Boost might already have a C++03-compatible implementation of std::shared_ptr that you can use.
Most modern memory allocators are actually quite efficient, and instantiating a new message object on the heap isn't as big a deal as you might think.
So, your overall approach is:
You receive the message, and instantiate the appropriate subclass of Message on the heap.
The run() method should also receive a reference-counted handle to the message itself, which it passes to the handler.
If the handler does not need to save a copy of the message, it does nothing, and the message is destroyed soon thereafter; otherwise it grabs the reference handle and stashes it away someplace.
I have a situation where objects will add events (a struct containing a function pointer to a function like object::do_something) to a "chain of events" (std::multimap) in their constructor. My interpreter reads the chain of events (sorted by depth) every time the game updates and executes each one sequentially. When an object is destroyed, it will remove all its events from the chain in its destructor automatically (to prevent possible leaks of events).
Because events are sorted by depth, it's possible that an object registers multiple events which are "next" to each other in the chain. When an object destroys itself, it unlinks all its events and immediately stops running its share of code (when something is destroyed, it can't do anything). I've cunningly produced a way of doing this: the particular function which deletes an object, instance_destroy(), throws an exception which my event interpreter can catch and then continue with the next event in the chain.
I've come to realize:
Unpredictable numbers of events can be unlinked from the chain, and the current iterator is likely to be invalidated when an object destroys itself.
Objects can destroy other objects in their lifetime, as well as themselves. I can't simply keep a copy of the next iterator that doesn't belong to the current object in case of destruction, as it could also be removed!
When control is passed back to the interpreter (say, via exception) and heaps of events have been removed, including possibly the current iterator, I have no way of knowing what to execute next. I can't start the map from the beginning -- that would cause undefined behaviour in the game; things would be executed twice. I also can't copy the map -- it's absolutely HUGE -- it would come at an enormous performance penalty. I can't redesign the way the system should work either, as it's not my protocol.
Consider the following data structure;
typedef std::multimap<real_t, event> events_by_depth_t;
How can I iterate it given my requirements above?
I'm using C++11.