Is a templated, polymorphic callback a good idea? - c++

I'm making a GUI API for games. The user can always inherit from a widget and override its methods, but I also want callbacks, via a templated callback system: if they want one for the mouse, they inherit from a version of the templated callback base instantiated with the mouse-event arguments.
So the base would look like this:
template <typename T>
class AguiEventCallback {
public:
    virtual void callback(AguiWidget* sender, T arg) = 0;
};
Is it a good idea to mix templates with polymorphism like this? Would I be better off creating callbacks for each of the types I need (mouse, keyboard, gamepad, etc)?
Thanks

Have a look at boost::function and boost::bind. Accept a function object with a defined parameter list for particular events, and callers can do what they want.
This gives callback implementations lots of flexibility and the object generating the events requires even less knowledge of the callback implementation.
For example:
typedef boost::function<void (AguiWidget* sender)> CallbackFunc;
void register_callback(CallbackFunc const& f);
And the client:
class Caller {
public:
    void do_register() {
        register_callback(boost::bind(&Caller::event, this, 123, _1));
    }
    void event(int arg, AguiWidget* sender) { /* ... */ }
};
This just shows function/bind; many other issues are ignored, e.g. memory management and object lifetime.
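For reference, the modern standard library offers the same facility; a minimal sketch with std::function and a lambda in place of boost::function/boost::bind:
#include <functional>

class AguiWidget;

typedef std::function<void(AguiWidget* sender)> CallbackFunc;

void register_callback(CallbackFunc const& f); // as declared above

class Caller {
public:
    void do_register() {
        // the lambda captures `this` and the extra argument, replacing boost::bind
        register_callback([this](AguiWidget* sender) { event(123, sender); });
    }
    void event(int arg, AguiWidget* sender) { /* handle the event */ }
};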

Using a template the way you have is fine sometimes. There are issues arising from the fact that you must give your template a virtual destructor, and that:
If you inline your virtual destructor (as you do with most template functions), some compilers find it hard to stick to the One Definition Rule, particularly if the template is used across several libraries.
If you do not inline your virtual destructor, you have to explicitly instantiate the template for every type you are going to use with it. This is my own preferred approach.
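A minimal sketch of that non-inline approach (MouseArgs and KeyboardArgs are illustrative argument types, not from the question):
class AguiWidget;
struct MouseArgs;
struct KeyboardArgs;

// header: declare the virtual destructor, but do not define it inline
template <typename T>
class AguiEventCallback {
public:
    virtual ~AguiEventCallback();
    virtual void callback(AguiWidget* sender, T arg) = 0;
};

// in exactly one .cpp file: define the destructor, then explicitly
// instantiate the template for every argument type you intend to support
template <typename T>
AguiEventCallback<T>::~AguiEventCallback() {}

template class AguiEventCallback<MouseArgs>;
template class AguiEventCallback<KeyboardArgs>;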
For a callback, you do have the option of using boost::function. This avoids having to derive classes from your template, create them with new and probably stick them into a shared_ptr somewhere. The downside of boost::function as a callback, I have found, is that it is harder to debug into if something goes wrong. Beware of that issue.

Momentarily accepting your virtual-dispatch solution: what your templated approach guarantees is uniformity in the callback function's name and arguments. Sadly, that will force a lot of other code to disambiguate which callback is being invoked or overridden, probably causing more trouble than good.
That said, as janm said, other options exist. Functors are more powerful (you can change them on an existing object at run time, and you can have lists of observers), but they also have to be initialised at the right time (pure virtual functions effectively remind the programmer to supply them at compile time), and they introduce a bigger variety of run-time states to reason about and understand.
You might also be able to use a template policy or the Curiously Recurring Template Pattern to supply behaviours at compile time, allowing inlining, dead code elimination, type-specific behaviours and other optimisations.
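A rough sketch of the CRTP flavour, where the handler is fixed at compile time (ClickableWidget and OkButton are illustrative names):
// the widget knows its handler type statically, so calls can be inlined
template <typename Handler>
class ClickableWidget {
public:
    void click() {
        static_cast<Handler*>(this)->on_click(); // no virtual dispatch
    }
};

class OkButton : public ClickableWidget<OkButton> {
public:
    void on_click() { /* type-specific behaviour, resolved at compile time */ }
};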

In addition to the other answers here, you might have a look at the Boost.Signals library. It implements a signal/slot mechanism, which is very useful for GUIs. Performance is not as good as you might expect (a signal invocation costs more than a call to a virtual method), but for a GUI it's just fine.
The Boost.Signals library can also be used together with boost::bind, and this combo is very powerful.
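For illustration, a minimal signal/slot sketch (written against boost::signals2, the library's current incarnation; the slot here is just a lambda):
#include <boost/signals2.hpp>
#include <iostream>
#include <string>

int main() {
    // a signal whose slots receive the name of the widget that fired it
    boost::signals2::signal<void(const std::string&)> on_click;

    // any number of independent observers can connect to the same signal
    on_click.connect([](const std::string& sender) {
        std::cout << sender << " was clicked\n";
    });

    on_click("OkButton"); // invokes every connected slot
    return 0;
}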
I don't like using inheritance very much for callbacks. It spawns a lot of classes with just one method most of the time. This is C++, not Java: you have functions, use them :)

Related

Design patterns to construct an event-driven system in c++ without type erasure?

Event-driven systems are used to reduce dependencies in large systems with many-to-many object interactions of various types. As such, they aim to pass arbitrary information between arbitrary objects. This is typically done by registering event handlers with an event_manager, using type erasure in the handler, i.e.:
void handle_some_event(event *e) {/*Downcast e to an assumed appropriate type and work with it*/}
However, suppose we wanted to implement such a system without type erasure. It seems that certain language features, particularly generics, should make it possible. The handler registration in the event_manager could be templated, i.e. (loosely):
template<typename event_type>
void event_manager::register_handler(std::function<void(event_type)> handler) {
    // Implementation
}
The struggle is, what should the //Implementation of this function be? After receiving the handler, it needs to be stored in some container related to event_type. But in order to avoid type erasure, there would have to be a container per event_type.
One possible option is to use static containers in template classes, i.e.:
template<typename event_type>
class handler_container {
public:
    inline static std::vector<std::function<void(event_type)>> handlers;
};
The event_manager could store and execute handlers in the corresponding handler_container<event_type>::handlers container. However, this has the obvious disadvantage that there can only really be one event_manager, given that the containers are static and thus shared across all event_managers. Perhaps this is sufficient for most applications, but it's still an ugly solution.
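A sketch of how registration and dispatch would then look (fire is a hypothetical name, and both are assumed to be member templates of event_manager):
template<typename event_type>
void event_manager::register_handler(std::function<void(event_type)> handler) {
    handler_container<event_type>::handlers.push_back(std::move(handler));
}

template<typename event_type>
void event_manager::fire(const event_type& e) {
    // only handlers registered for exactly this event type are invoked
    for (auto& h : handler_container<event_type>::handlers)
        h(e);
}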
Are there any design patterns that would allow for a cleaner solution to this problem?
Not a design pattern, but in theory it looks like a potential use case for std::variant. You can define your event type as an alias to a std::variant of all possible event options, and your handlers would be C++20 template lambdas that check for the type they want using if constexpr. Finally, the code that generates the event initializes the std::variant with the appropriate event type, and all handlers are invoked with std::visit.
Use of std::variant might look like a potential scalability problem, but if the handlers are written sensibly they shouldn't break when more alternatives are added.
I don't know of any major project that uses this technique, so I wouldn't jump ahead and use it in production code immediately. Also, it requires C++20 to write without (too much) boilerplate, so it might not apply to most use cases out there. But it looks doable and doesn't require type erasure.
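A minimal sketch of that idea, with hypothetical MouseEvent/KeyEvent types:
#include <iostream>
#include <type_traits>
#include <variant>

struct MouseEvent { int x, y; };
struct KeyEvent   { int keycode; };

using Event = std::variant<MouseEvent, KeyEvent>;

int main() {
    // a C++20 template lambda that picks its branch with `if constexpr`
    auto handler = []<typename E>(const E& e) {
        if constexpr (std::is_same_v<E, MouseEvent>)
            std::cout << "mouse at " << e.x << ',' << e.y << '\n';
        else
            std::cout << "key " << e.keycode << '\n';
    };

    Event e = MouseEvent{10, 20};
    std::visit(handler, e); // static dispatch, no type erasure of the event
}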

Static CRTP class without knowing derived type?

Given the following, working code.
#include <iostream>

template<class Detail>
class AbstractLogger
{
public:
    static void log(const char* str) {
        Detail::log_detailled(str);
    }
};

class Logger : public AbstractLogger<Logger>
{
public:
    static void log_detailled(const char* str) {
        std::cerr << str << std::endl;
    }
};

int main(void)
{
    AbstractLogger<Logger>::log("main function running!");
    return 0;
}
Now, I want to put AbstractLogger into a library, and let the library user define his own logger, like the Logger class here. This has one drawback: AbstractLogger<Logger> can not be used inside the library, since the library can not know Logger.
Notes:
Please no virtual functions, or questions why not. Also, I am aware of the similar problem that "static virtual" members are invalid. Maybe there is a workaround in CRTP :)
C++11 would be interesting; however, I need "usual" C++.
If what you mean is that you want to have a library that uses this as a logging mechanism without knowing the exact instantiating type, I would advise against it.
The only way of doing it while meeting your other requirements (i.e. no virtual functions) is to convert every function/type in the library that needs to log into a template taking the logger type. The net result is that most of your interface becomes a template (and although you can probably move a good amount of the implementation to non-templated code, it will make your life much harder than needed, and it will still generate a much larger binary).
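A minimal sketch of what that looks like (library_function is a hypothetical name):
// every library function that logs becomes a template over the logger type;
// the library never names the concrete logger
template <class LoggerT>
void library_function(const char* data) {
    AbstractLogger<LoggerT>::log(data);
}

// user code picks the concrete logger at each call site:
// library_function<Logger>("library code running!");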
If your concern with virtual functions is performance, then you should reconsider your approach and the problems it brings. In particular, logging is expensive. Most logging libraries tackle this by optimizing the non-logging case (by means of macros that avoid calling the logger if the log level/group/... is not enabled), but still leave dynamic dispatch for the actual writing. The cost of the dynamic dispatch is negligible compared with the cost of writing to the console, or a file, or even the cost of generating the message that will be logged (I am assuming that you do not only log literal strings).
The usual approach is to code against a concept, while providing helpers so that users may easily produce types that satisfy one or more of those concepts. As an example, something like boost::iterator_facade is a CRTP helper that makes it easier for a user to write an iterator. Then, that iterator can be used anywhere an iterator is accepted -- for instance in the range constructor of std::vector. Notice how that particular constructor has no foreknowledge of the user-defined type.
In your case, AbstractLogger would be the CRTP helper. The missing piece would be to define e.g. a logger concept. As a result, notice that everything that needs a logger either needs to be implemented as a template or you need a type-erasing container to hold arbitrary loggers.
Concept checks (like those provided by Boost) are convenient for this kind of programming, since they allow you to represent a concept with actual code.
Template classes can't be "put in a library", since the compiler instantiates them as specializations of their template parameters.
You may, however, put the parameter-independent parts of the template implementation into a library.

Why bother with virtual functions in c++?

This is not a question about how virtual functions work or how they are declared; that, I think, is pretty clear to me. The question is: why implement this at all?
I suppose the practical reason is to simplify a bunch of other code, letting it declare variables of the base type while handling objects, and their specific methods, from many different subclasses?
Couldn't this be done with templating and type checking, like I do it in Objective-C? If so, which is more efficient? I find it confusing to declare an object as one class and instantiate it as another, even if the latter is its child.
Sorry for the stupid questions, but I haven't done any real projects in C++ yet, and since I am an active Objective-C developer (a much smaller language that relies heavily on SDK functionality, as on OS X and iOS), I need a clear view of the parallel ways of these two cousins.
Yes, this can be done with templates, but then the caller must know what the actual type of the object is (the concrete class) and this increases coupling.
With virtual functions the caller doesn't need to know the actual class - it operates through a pointer to a base class, so you can compile the client once and the implementor can change the actual implementation as much as it wants and the client doesn't have to know about that as long as the interface is unchanged.
Virtual functions implement polymorphism. I don't know Obj-C, so I cannot compare both, but the motivating use case is that you can use derived objects in place of base objects and the code will work. If you have a compiled and working function foo that operates on a reference to base you need not modify it to have it work with an instance of derived.
You could do that (assuming that you had runtime type information) by obtaining the real type of the argument and then dispatching directly to the appropriate function with a switch of sorts, but that would require either manually modifying the switch for each new type (a high maintenance cost) or having reflection (unavailable in C++) to obtain the method pointer. Even then, after obtaining a method pointer you would have to call it, which is as expensive as the virtual call.
As to the cost associated with a virtual call: basically, in all implementations with a virtual method table, a call to a virtual function foo on an object o, o.foo(), is translated to o.vptr[ 3 ](), where 3 is the position of foo in the virtual table and is a compile-time constant. This is basically a double indirection:
from the object o, obtain the pointer to the vtable; index that table to obtain the function pointer; then call it. The extra cost compared with a direct non-polymorphic call is just the table lookup. (In fact, there can be other hidden costs when using multiple inheritance, as the implicit this pointer might have to be shifted.) But the cost of the virtual dispatch is very small.
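Roughly, and purely as an illustration (real compilers differ in detail), the machinery corresponds to something like this:
struct Base;

// one table per class, one function pointer per virtual function
struct vtable {
    void (*foo)(Base*);
};

// every polymorphic object carries a hidden pointer to its class's table
struct Base {
    const vtable* vptr;
};

void call_foo(Base* o) {
    o->vptr->foo(o); // o->foo(): load vptr, index the table, call indirectly
}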
I don't know the first thing about Objective-C, but here's why you want to "declare an object as one class and instantiate it as another": the Liskov Substitution Principle.
Since a PDF is a document, and an OpenOffice.org document is a document, and a Word Document is a document, it's quite natural to write
Document* d;
if (ends_with(filename, ".pdf"))
    d = new PdfDocument(filename);
else if (ends_with(filename, ".doc"))
    d = new WordDocument(filename);
// ... you get the point
d->print();
Now, for this to work, print would have to be virtual, or be implemented using virtual functions, or be implemented using a crude hack that reinvents the virtual wheel. The program needs to know at runtime which of the various print methods to apply.
Templating solves a different problem, where you determine at compile time which of the various containers you're going to use (for example) when you want to store a bunch of elements. If you operate on those containers with template functions, then you don't need to rewrite them when you switch containers, or add another container to your program.
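For instance, a small sketch (modern C++ for brevity; print_all is a hypothetical name): one template function serves any container, with the choice resolved at compile time.
#include <iostream>
#include <list>
#include <vector>

// works with any container providing begin()/end()
template <typename Container>
void print_all(const Container& c) {
    for (const auto& e : c)
        std::cout << e << '\n';
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::list<int>   l{4, 5, 6};
    print_all(v); // same source, two distinct instantiations
    print_all(l);
}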
A virtual function is important in inheritance. Think of an example where you have a CMonster class and then a CRaidBoss and CBoss class that inherit from CMonster.
Both need to be drawn. A CMonster has a Draw() function, but the way a CRaidBoss and a CBoss are drawn is different. Thus, the implementation is left to them by utilizing the virtual function Draw.
Well, the idea is simply to allow the compiler to perform checks for you.
It's like a lot of language features: a way to hide what you don't want to have to do yourself. That's abstraction.
Inheritance, interfaces, etc. allow you to describe an interface to the compiler for the implementation code to match.
If you didn't have the virtual function mechanism, you would have to write:
class A
{
public:
    void do_something();
};

class B : public A
{
public:
    void do_something(); // this one "hides" A::do_something(), it replaces it
};

void DoSomething( A* object )
{
    // calling object->do_something() would ALWAYS call A::do_something()
    // that's not what you want if object is really a B...
    // so we have to check manually:
    B* b_object = dynamic_cast<B*>( object );
    if( b_object != NULL ) // ok, it's a B object: call B::do_something()
    {
        b_object->do_something();
    }
    else
    {
        object->do_something(); // it's an A: call A::do_something()
    }
}
Here there are several problems:
you have to write this for each function redefined in the class hierarchy;
you have one additional if for each child class;
you have to touch this function again each time you add a definition to the whole hierarchy;
it's visible code, so you can easily get it wrong, every time.
So, marking functions virtual does all this correctly and implicitly, automatically and dynamically rerouting the function call to the correct implementation depending on the final type of the object.
You don't have to write any dispatch logic, so you can't get errors in it, and you have one less thing to worry about.
It's the kind of thing you don't want to bother with, as it can be done by the compiler/runtime.
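For contrast, a sketch of what the same example collapses to once do_something is virtual:
class A
{
public:
    virtual ~A() {}
    virtual void do_something() { /* A's behaviour */ }
};

class B : public A
{
public:
    void do_something() { /* B's behaviour, overrides A's */ }
};

void DoSomething( A* object )
{
    object->do_something(); // always calls the right version, no casts needed
}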
The use of templates is also technically a kind of polymorphism, as theorists call it. Yes, both are valid approaches to the problem, and the implementation techniques employed explain their better or worse performance.
For example, Java implements generics through type erasure. This means it only appears to be using templates; under the surface it is plain old runtime polymorphism.
C++ has very powerful templates. The use of templates makes code quicker, though each use of a template instantiates it for the given type. So if you use a std::vector for ints, doubles and strings, you'll have three different vector classes, and the size of the executable will suffer.

Which is better: Function overriding or passing a function pointer for event handling

So, I'm writing code for a class that will go into a library that will be used by others. This class will intercept and process incoming messages (details are not important but it's using the activemq-cpp library). The outline of this consumer class is
class MessageConsumer {
    ...
public:
    void runConsumer();
    virtual void onMessage(const Message* message);
};
where runConsumer() sets up the connection and starts listening and onMessage() is called when a message is received.
My question is this: people who'll use this code will each have their own way of processing the different messages. How can I keep MessageConsumer generic but offer this flexibility, while keeping their code simple?
Two options:
Should they inherit a new class from MessageConsumer and write their own onMessage()?
Should they pass a pointer to a message handling function to MessageConsumer?
What do you think, which option is better and why?
Thanks!
In one approach, clients are allowed to register a callback and then the MessageConsumer invokes the registered callback. This is something like an observer/broadcast design pattern.
The second approach, where clients have to inherit and override MessageConsumer would be something like Strategy design pattern.
Basic design goals suggest using the weakest relationship to promote loose coupling. Since inheritance is a stronger relationship than a simple association, everything else being the same, Approach 1 is preferred.
From Herb's article:
"Inheritance is often overused, even by experienced developers. Always minimize coupling: If a class relationship can be expressed in more than one way, use the weakest relationship that's practical. Given that inheritance is nearly the strongest relationship you can express in C++ (second only to friendship), it's only really appropriate when there is no equivalent weaker alternative."
But as James points out, it is tough to comment unless the overall design constraints are known clearly.
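To make the callback approach concrete, a minimal sketch (setHandler is an illustrative name, and std::function stands in for whatever callable wrapper you prefer):
#include <functional>
#include <utility>

class Message;

class MessageConsumer {
public:
    typedef std::function<void(const Message*)> Handler;

    void setHandler(Handler h) { handler_ = std::move(h); }

    void onMessage(const Message* message) {
        if (handler_) handler_(message); // forward to the registered callback
    }

private:
    Handler handler_;
};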
Inheritance will make your library more OO-friendly and may improve readability. But really, the choices are about the same since the compiler will check that the user has supplied the function (assuming you declare a pure virtual handler in the base class), and the underlying mechanism will be accomplished via a pointer anyway (virtual table in the case of inheritance).
Pure virtual functions allow the compiler to check that the client code implements the handler. Virtual dispatch is active immediately after an object is constructed, and someone looking at the derived class can reason accurately about its handling. Data needed for the handling can be conveniently and clearly grouped into the derived class. Factories can still select a particular derived class to instantiate.
Function pointers are run-time state, so there's a little more care needed to initialise them in a timely fashion, for optional run-time checks on their being set and error handling, and to reason about which set is in effect during program execution. With that comes more freedom to vary them within the lifetime of the object.
A third alternative is a template policy class, or the Curiously Recurring Template Pattern, to lock in the behaviours at compile time. This potentially allows inlining of callbacks, dead-code elimination and other optimisations.
Virtual functions or a templated functor are the way to go. These approaches give greater flexibility and looser coupling than the function-pointer one.
To illustrate that: the function-pointer approach can be wrapped by the first two, but not vice versa.
void cbFunction();

class Interface {
public:
    virtual ~Interface() {}
    virtual void act() = 0;
};

class CbFunctionWrapper : public Interface {
public:
    virtual void act() { cbFunction(); }
};

class AnotherImpl : public Interface {
    Context _c; // you can't pass additional context with a plain function
                // without downcasting, but OO is all about that
public:
    virtual void act() { /* ... */ }
};

A more generic visitor pattern

I'm sorry if my question is long and technical, but I think it's important and other people will be interested in it.
I was looking for a way to cleanly separate some software internals from their representation in C++.
I have a generic parameter class (to be stored later in a container) that can contain any kind of value via the boost::any class.
I have a base class (roughly) of this kind (of course there is more stuff):
class Parameter
{
public:
    Parameter();
    template <typename T> T GetValue() const { return any_cast<T>( _value ); }
    template <typename T> void SetValue(const T& value) { _value = value; }
    virtual string GetValueAsString() const = 0;
    virtual void SetValueFromString(const string& str) = 0;
private:
    boost::any _value;
};
There are two levels of derived classes:
The first level defines the type and the conversion to/from string (for example ParameterInt or ParameterString)
The second level defines the behaviour and the real creators (for example deriving ParameterAnyInt and ParameterLimitedInt from ParameterInt or ParameterFilename from GenericString)
Depending on the real type, I would like to add external functions or classes that operate according to the specific parameter type, without adding virtual methods to the base class and without doing strange casts.
For example I would like to create the proper gui controls depending on parameter types:
Widget* CreateWidget(const Parameter& p)
Of course I cannot recover the real Parameter type from this unless I use RTTI or implement it myself (with an enum and a switch-case), but that is not the right OOP design solution, you know.
The classical solution is the Visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern
The problem with this pattern is that I have to know in advance which derived types will be implemented, so (putting together what is written in wikipedia and my code) we'll have sort of:
struct Visitor
{
    virtual void visit(ParameterLimitedInt& wheel) = 0;
    virtual void visit(ParameterAnyInt& engine) = 0;
    virtual void visit(ParameterFilename& body) = 0;
};
Is there any solution to obtain this behaviour in any other way without need to know in advance all the concrete types and without deriving the original visitor?
Edit: Dr. Pizza's solution seems the closest to what I was thinking, but the problem is still the same: the method actually relies on dynamic_cast, which I was trying to avoid as a kind of (even if weak) RTTI method.
Maybe it is better to think of a solution without even citing the Visitor pattern, and to clear our minds. The purpose is just to have a function such as:
Widget* CreateWidget(const Parameter& p)
behave differently for each "concrete" parameter without losing info on its type
For a generic implementation of Visitor, I'd suggest the Loki Visitor, part of the Loki library.
I've used this ("acyclic visitor") to good effect; it makes adding new classes to the hierarchy possible without changing existing ones, to some extent.
If I understand this correctly...
We had an object that could use different hardware options. To facilitate this we used an abstract interface, Device. Device had a bunch of functions that would be fired on certain events. The usage would be the same, but the various implementations of Device would either have fully fleshed-out functions or just return immediately. To make life even easier, the functions were void and threw exceptions when something went wrong.
For completeness's sake:
it's of course completely possible to write your own implementation of a multimethod pointer table for your objects and calculate the method addresses manually at run time. There's a paper by Stroustrup on the topic of implementing multimethods (albeit in the compiler).
I wouldn't really advise anyone to do this. Getting the implementation to perform well is quite complicated and the syntax for using it will probably be very awkward and error-prone. If everything else fails, this might still be the way to go, though.
I am having trouble understanding your requirements. But I'll state, in my own words as it were, what I understand the situation to be:
You have an abstract Parameter class, which is eventually subclassed into some concrete classes (e.g. ParameterLimitedInt).
You have a separate GUI system which will be passed these parameters in a generic fashion, but the catch is that it needs to present the GUI component specific to the concrete type of the parameter class.
The restrictions are that you don't want to use RTTI, and don't want to write code to handle every possible type of concrete parameter.
You are open to using the visitor pattern.
With those being your requirements, here is how I would handle such a situation:
I would implement the visitor pattern where the accept() returns a boolean value. The base Parameter class would implement a virtual accept() function and return false.
Concrete implementations of the Parameter class would then contain accept() functions which will call the visitor's visit(). They would return true.
The visitor class would make use of a templated visit() function, so you only provide overloads for the concrete Parameter types you care to support:
class Visitor
{
public:
    template< class T > void visit( const T& param ) const
    {
        assert( false && "this parameter type not specialised in the visitor" );
    }
    void visit( const ParameterLimitedInt& ) const; // specialised implementations...
};
Thus if accept() returns false, you know the concrete Parameter type has not implemented the visitor pattern yet (in case there is additional logic you would prefer to handle on a case-by-case basis). If the assert() in the visitor triggers, it's because it is visiting a Parameter type for which you haven't implemented a specialisation.
One downside to all of this is that unsupported visits are only caught at runtime.
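A sketch of the accept() side described above, following the question's class names (and assuming the Visitor class as defined earlier):
class Parameter
{
public:
    virtual ~Parameter() {}
    virtual bool accept(const Visitor&) { return false; } // base: not visitable yet
};

class ParameterLimitedInt : public Parameter
{
public:
    virtual bool accept(const Visitor& v)
    {
        v.visit(*this); // resolves to the specialised overload when one exists
        return true;
    }
};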