C++ polymorphism with template interface

Timer.h:
template<class T>
class Timer {
public:
typedef T Units;
virtual ~Timer() {} // instances are deleted through Timer<T>*, so the dtor must be virtual
virtual Units get() = 0;
};
TimerImpl.h:
class TimerImpl: public Timer<long> {
public:
TimerImpl() {
}
~TimerImpl() {
}
long get();
};
FpsMeter.h(version 1):
template <class T>
class FpsMeter {
private:
Timer<T>* timer;
public:
FpsMeter (Timer<T>* timer) {
this->timer = timer;
}
...
};
This example works. But it does not look pretty.
Timer<long>* t = new TimerImpl();
FpsMeter<long>* f1 = new FpsMeter<long> (t);
Here there are a lot of extra template uses. How can I realize this idea of a multi-type interface, where the type is defined by the implementation and the user class does not have to name the type again; it should simply use the type of the implementation?

If you don't mind a helper template function which always creates FpsMeter on the heap, you could do something like the following:
template < class T >
FpsMeter<T> *make_FpsMeter( Timer<T> *timer ) {
return new FpsMeter<T>( timer );
}
Then creating a FpsMeter of the appropriate type is like so
FpsMeter<long> *f1 = make_FpsMeter( new TimerImpl() );
Or if you can use C++11 auto you'd get
auto f1 = make_FpsMeter( new TimerImpl() );

This is the best you can do in C++, as far as I know. FpsMeter needs to know the type T so that it knows which Timer<T> implementations it can accept. Your sample code can be made somewhat simpler:
FpsMeter<long>* f1 = new FpsMeter<long> (new TimerImpl());
...which at least gets you out of repeating the template type, but of course in that case FpsMeter must take responsibility for deleting the TimerImpl, ideally through a smart pointer such as std::unique_ptr (or auto_ptr in pre-C++11 code).
I'd question, too, whether you really need to vary the return value of get(). What sorts of values do you expect it to return, besides long?

Maybe you can take inspiration from the <chrono> library from C++11 (also available in boost). Or better yet, save yourself some time and just use it directly. It's efficient, safe, flexible, and easy to use.
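For instance (a minimal sketch, not from the original answer, with a hypothetical ChronoFpsMeter class), a frame timer built directly on std::chrono::steady_clock needs no Timer<T> hierarchy at all; the tick type lives inside the clock's duration and never leaks into user code:

```cpp
#include <cassert>
#include <chrono>

// Hedged sketch: an FPS meter built directly on std::chrono::steady_clock.
class ChronoFpsMeter {
    typedef std::chrono::steady_clock clock;
    clock::time_point last_;
public:
    ChronoFpsMeter() : last_(clock::now()) {}

    // Returns the instantaneous FPS implied by the time since the last frame.
    double newFrame() {
        clock::time_point now = clock::now();
        std::chrono::duration<double> dt = now - last_;  // seconds, as double
        last_ = now;
        return dt.count() > 0.0 ? 1.0 / dt.count() : 0.0;
    }
};
```

Note that the user never mentions the underlying representation (long, double, nanoseconds, ...); the duration type carries it along.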

If you want to use only timers based on the machine timer implementation (which I assume is fixed at compile time for the entire program), I would simply use a typedef and perhaps some preprocessor magic to get it right:
[...]
#if TIMER_LONG // Here you should somehow check what type is used on target platform.
typedef Timer<long> implTimer;
typedef FpsMeter<long> implFpsMeter;
#else // If eg. using double?
typedef Timer<double> implTimer;
typedef FpsMeter<double> implFpsMeter;
#endif
This should keep user code unaware of the actual type used, as long as it uses implTimer and implFpsMeter.
If you mean that some parts of the code will use different TimerImpl types, then you should make your FpsMeter class polymorphic:
class FpsMeter{
public:
virtual double fps()=0;
virtual void newFrame()=0;
[...]
//Class counts new frames and using internal timer calculates fps.
};
template <typename T>
class FpsMeterImpl: public FpsMeter{
Timer<T>* timer;
public:
FpsMeterImpl(Timer<T>* timer);
virtual double fps();
virtual void newFrame();
};

Related

Testing template to memory location to replace defines in embedded systems

In embedded systems, you often have a memory location which is not within the program memory itself but which points to some hardware registers. Most C SDKs provide these as #define statements. According to the following article, https://arne-mertz.de/2017/06/stepping-away-from-define/ one method of transitioning from #define statements (as used by C SDKs) to something more C++ friendly is to create a class which forces the reinterpret_cast to occur at runtime.
I am trying to go about this in a slightly different way because I want to be able to create "type traits" for the different pointers. Let me illustrate with an example.
#define USART1_ADDR 0x1234
#define USART2_ADDR 0x5678
template <typename T_, std::intptr_t ADDR_>
class MemPointer {
public:
static T_& ref() { return *reinterpret_cast<T_*>(ADDR_); }
};
class USART {
public:
void foo() { _registerA = 0x10; }
private:
uint32_t _registerA;
uint32_t _registerB;
};
using USART1 = MemPointer<USART, USART1_ADDR>;
using USART2 = MemPointer<USART, USART2_ADDR>;
template <typename USART_>
class usart_name;
template <>
class usart_name<USART1> {
public:
static constexpr const char* name() { return "USART1"; }
};
template <>
class usart_name<USART2> {
public:
static constexpr const char* name() { return "USART2"; }
};
Each USART "instance" in this example is its own, unique type so that I am able to create traits which allow compile-time "lookup" of information about the USART instance.
This actually seems to work, however, I wanted to create some test code as follows
static USART testUsart;
#define TEST_USART_ADDR (std::intptr_t)(&testUsart)
using TEST_USART = MemPointer<USART, TEST_USART_ADDR>;
Which fails with the following error:
conversion from pointer type 'USART*' to arithmetic type
'intptr_t' {aka 'long long int'} in a constant expression
I believe I understand the source of the problem based upon Why is reinterpret_cast not constexpr?
My question is, is there a way to make my MemPointer template work for test code like above as well?
EDIT
One solution is to have a separate class for each "instance", as follows:
class USART1 : public USART {
public:
static USART& ref() { return *reinterpret_cast<USART*>(USART1_ADDR); }
};
class USART2 : public USART {
public:
static USART& ref() { return *reinterpret_cast<USART*>(USART2_ADDR); }
};
I would prefer some sort of template + using combination though so that I don't need to write a bunch of classes. But perhaps this is the only option.
is there a way to make my MemPointer template work for test code like above as well?
You could just stop insisting that the address be an intptr_t. You're going to cast it to a pointer anyway, so why not just allow any type for which that conversion exists?
template <typename T_, typename P, P ADDR_>
class MemPointer {
public:
static T_& ref() { return *reinterpret_cast<T_*>(ADDR_); }
};
using USART1 = MemPointer<USART, std::intptr_t, USART1_ADDR>;
using USART2 = MemPointer<USART, std::intptr_t, USART2_ADDR>;
static USART testUsart;
using TEST_USART = MemPointer<USART, USART*, &testUsart>;
Follow-up notes:
if this were for a library to be used by others, I'd consider adding a static_assert(std::is_trivial_v<T_>) inside MemPointer to catch annoying errors
there are a few potential issues around things like padding & alignment, but I assume you know what your particular embedded platform is doing
you should volatile-qualify your register members, or the whole object (e.g. you can return std::add_volatile_t<T_>& from MemPointer::ref)
This is so the compiler knows that every write is an observable side effect (i.e., observable by the hardware even if your program never reads it back), and that every read may produce a different value (because the hardware can update it even if your program doesn't).
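The volatile suggestion can be sketched on top of the generalized MemPointer from this answer; the Uart layout and fakeUart test object below are hypothetical stand-ins for real hardware:

```cpp
#include <cassert>
#include <cstdint>

template <typename T_, typename P, P ADDR_>
struct MemPointer {
    // Volatile-qualified so the compiler treats every access as observable.
    static volatile T_& ref() { return *reinterpret_cast<volatile T_*>(ADDR_); }
};

struct Uart {
    std::uint32_t registerA;
    std::uint32_t registerB;
};

static Uart fakeUart;  // stands in for a memory-mapped peripheral in tests
using TestUart = MemPointer<Uart, Uart*, &fakeUart>;
```

Every access through TestUart::ref() is now a volatile access, which the compiler may neither elide nor reorder relative to other volatile accesses.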

Is this an example where the abstract factory pattern is used?

I am developing a class for recognizing an object in a photo, and this class is composed of several components (classes). For example,
class PhotoRecognizer
{
public:
int perform_recognition()
{
pPreProcessing->do_preprocessing();
pFeatureExtractor->do_feature_extraction();
pClassifier->do_classification();
}
boost::shared_ptr<PreProcessing> pPreProcessing;
boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
boost::shared_ptr<Classifier> pClassifier;
};
In this example, when we use this class to perform recognition, we invoke the other classes PreProcessing, FeatureExtractor and Classifier. As you can imagine, there are many different methods to implement each class. For example, for the Classifier class, we can use SVMClassfier or NeuralNetworkClassifer, each a derived class of the basic Classifier class.
class SVMClassifier: public Classifier
{
public:
void do_classification();
};
Therefore, by using different elements within the PhotoRecognizer class, we can create different kinds of PhotoRecognizer. Now, I am building a benchmark to find out how to combine these elements to create an optimal PhotoRecognizer. One solution I can think of is to use an abstract factory:
class MethodFactory
{
public:
MethodFactory(){};
boost::shared_ptr<PreProcessing> pPreProcessing;
boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
boost::shared_ptr<Classifier> pClassifier;
};
class Method1:public MethodFactory
{
public:
Method1():MethodFactory()
{
pPreProcessing.reset(new GaussianFiltering);
pFeatureExtractor.reset(new FFTStatictis);
pClassifier.reset(new SVMClassifier);
}
};
class Method2:public MethodFactory
{
public:
Method2():MethodFactory()
{
pPreProcessing.reset(new MedianFiltering);
pFeatureExtractor.reset(new WaveletStatictis);
pClassifier.reset(new NearestNeighborClassifier);
}
};
class PhotoRecognizer
{
public:
PhotoRecognizer(MethodFactory *p):pFactory(p)
{
}
int perform_recognition()
{
pFactory->pPreProcessing->do_preprocessing();
pFactory->pFeatureExtractor->do_feature_extraction();
pFactory->pClassifier->do_classification();
}
MethodFactory *pFactory;
};
So when I use Method1 to perform photo recognition, I can simply do the following:
Method1 med;
PhotoRecognizer recogMethod1(&med);
recogMethod1.perform_recognition();
Further more, I can even make the class PhotoRecognizer more compact:
enum RecMethod
{
Method1, Method2
};
class PhotoRecognizer
{
public:
PhotoRecognizer(RecMethod method)
{
switch(method)
{
case Method1:
pFactory.reset(new Method1());
break;
...
}
}
boost::shared_ptr<MethodFactory> pFactory;
};
So here is my question: is the abstract factory design pattern well justified in the situation described above? Are there alternative solutions? Thanks.
As so often, there is no ultimate "right" way to do it, and the answer depends a lot on how the project will be used. So if it is only for quick tests, done once and never revisited, go on and use enums if it is your heart's desire; nobody should stop you.
However, if you plan to extend the possible methods over time, I would discourage the usage of your second approach with enums. The reason is: every time you want to add a new method you have to change PhotoRecognizer class, so you have to read the code, to remember what it is doing and if somebody else should do it - it would take even more time.
The design with enums violates the first two rules of SOLID (https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)):
Open-Closed-Principle (OCP): PhotoRecognizer class cannot be extended (adding a new method) without modification of its code.
Single-Responsibility-Principle (SRP): PhotoRecognizer class does not only recognize the photo, but also serves as a factory for methods.
Your first approach is better, because if you defined another Method3 you could pass it to your PhotoRecognizer and use it without changing the code of the class:
//define Method3 somewhere
Method3 med;
PhotoRecognizer recogMethod3(&med);
recogMethod3.perform_recognition();
What I don't like about your approach, is that for every possible combination you have to write a class (MethodX), which might result in a lot of joyless work. I would do the following:
struct Method
{
boost::shared_ptr<PreProcessing> pPreProcessing;
boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
boost::shared_ptr<Classifier> pClassifier;
};
See Method as a collection of slots for the different algorithms; it exists because it is convenient to pass the Preprocessing/Extractor/Classifier around in this way.
And one could use a factory function:
enum PreprocessingType {pType1, pType2, ...};
enum FeatureExtractorType {feType1, feType2, ..};
enum ClassifierType {cType1, cType2, ... };
Method createMethod(PreprocessingType p, FeatureExtractorType fe, ClassifierType ct){
Method result;
switch(p){
case pType1: result.pPreProcessing.reset(new Type1Preprocessing());
break;
....
}
//the same for the other two: fe and ct
....
return result;
}
You might ask: "But what about OCP?" - and you would be right! One has to change createMethod to add other (new) classes. And it might be little comfort that you still have the possibility to create a Method object by hand, initialize its fields with the new classes, and pass it to the PhotoRecognizer constructor.
But with C++, you have a mighty tool at your disposal - the templates:
template < typename P, typename FE, typename C>
Method createMethod(){
Method result;
result.pPreProcessing.reset(new P());
result.pFeatureExtractor.reset(new FE());
result.pClassifier.reset(new C());
return result;
}
And you are free to choose any combination you want without changing the code:
//define P1, FE22, C2 somewhere
Method medX=createMethod<P1, FE22, C2>();
PhotoRecognizer recogMethodX(&medX);
recogMethodX.perform_recognition();
There is yet another issue: what if the class PreProcessingA cannot be used with the class ClassifierB? Earlier, if there was no class MethodAB, nobody could use that combination; now this mistake is possible.
To handle this problem, traits can be used:
template <class A, class B>
struct Together{
static const bool can_be_used=false;
};
template <>
struct Together<PreprocessingA, ClassifierA>{
static const bool can_be_used=true;
};
template < typename P, typename FE, typename C>
Method createMethod(){
static_assert(Together<P,C>::can_be_used, "classes cannot be used together");
Method result;
....
}
Conclusion
This approach has the following advantages:
SRP, i.e. PhotoRecognizer - only recognizes, Method - only bundles the algorithm parts and createMethod - only creates a method.
OCP, i.e. we can add new algorithms without changing the code of other classes/functions
Thanks to traits, we can detect a wrong combination of part-algorithms at compile time.
No boilerplate code / no code duplication.
PS:
You could say: why not scrap the whole Method class? One could just as well use:
template < typename P, typename FE, typename C>
class PhotoRecognizer{
P preprocessing;
FE featureExtractor;
C classifier;
...
};
PhotoRecognizer<P1, FE22, C2> recog;
recog.perform_recognition();
Yeah, it's true. This alternative has some advantages and disadvantages; one must know more about the project to make the right trade-off. But by default I would go with the more SRP-compliant approach of encapsulating the part-algorithms in the Method class.
I've implemented an abstract factory pattern here and there, and I've always regretted the decision after revisiting the code for maintenance. There is no case I can think of where one or more factory methods wouldn't have been a better idea. Therefore, I like your second approach best. Consider ditching the Method class as ead suggested. Once your testing is complete you'll have one or more factory methods that construct exactly what you want, and best of all, you and others will be able to follow the code later. For example:
std::shared_ptr<PhotoRecognizer> CreateOptimizedPhotoRecognizer()
{
auto result = std::make_shared<PhotoRecognizer>(
CreatePreProcessing(PreProcessingMethod::MedianFiltering),
CreateFeatureExtractor(FeatureExtractionMethod::WaveletStatictis),
CreateClassifier(ClassificationMethod::NearestNeighborClassifier)
);
return result;
}
Use your factory method in code like this:
auto pPhotoRecognizer = CreateOptimizedPhotoRecognizer();
Create the enumerations as you suggested. I know, I know, open/closed principle... If you keep these enumerations in one spot you won't have a problem keeping them in sync with your factory methods. First the enumerations:
enum class PreProcessingMethod { MedianFiltering, FilteringTypeB };
enum class FeatureExtractionMethod { WaveletStatictis, FeatureExtractionTypeB };
enum class ClassificationMethod { NearestNeighborClassifier, SVMClassfier, NeuralNetworkClassifer };
Here's an example of a component factory method:
std::shared_ptr<PreProcessing> CreatePreProcessing(PreProcessingMethod method)
{
std::shared_ptr<PreProcessing> result;
switch (method)
{
case PreProcessingMethod::MedianFiltering:
result = std::make_shared<MedianFiltering>();
break;
case PreProcessingMethod::FilteringTypeB:
result = std::make_shared<FilteringTypeB>();
break;
default:
break;
}
return result;
}
In order to determine the best combinations of algorithms you'll probably want to create some automated tests that run through all the possible permutations of components. One way to do this could be as straight forward as:
for (auto preProc = static_cast<PreProcessingMethod>(0); ;
preProc = static_cast<PreProcessingMethod>(static_cast<int>(preProc) + 1))
{
auto pPreProcessing = CreatePreProcessing(preProc);
if (!pPreProcessing)
break;
for (auto feature = static_cast<FeatureExtractionMethod>(0); ;
feature = static_cast<FeatureExtractionMethod>(static_cast<int>(feature) + 1))
{
auto pFeatureExtractor = CreateFeatureExtractor(feature);
if (!pFeatureExtractor)
break;
for (auto classifier = static_cast<ClassificationMethod>(0); ;
classifier = static_cast<ClassificationMethod>(static_cast<int>(classifier) + 1))
{
auto pClassifier = CreateClassifier(classifier);
if (!pClassifier)
break;
{
auto pPhotoRecognizer = std::make_shared<PhotoRecognizer>(
pPreProcessing,
pFeatureExtractor,
pClassifier
);
auto testResults = TestRecognizer(pPhotoRecognizer);
PrintConfigurationAndResults(pPhotoRecognizer, testResults);
}
}
}
}
Unless you are reusing MethodFactory, I'd recommend the following:
struct Method1 {
using PreProcessing_t = GaussianFiltering;
using FeatureExtractor_t = FFTStatictis;
using Classifier_t = SVMClassifier;
};
class PhotoRecognizer
{
public:
template<typename Method>
PhotoRecognizer(Method tag) {
pPreProcessing.reset(new typename Method::PreProcessing_t());
pFeatureExtractor.reset(new typename Method::FeatureExtractor_t());
pClassifier.reset(new typename Method::Classifier_t());
}
private:
boost::shared_ptr<PreProcessing> pPreProcessing;
boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
boost::shared_ptr<Classifier> pClassifier;
};
Usage:
PhotoRecognizer recognizer((Method1())); // extra parentheses avoid the most vexing parse

Templates and lazy initialization

I have a UIManager that manages a series of classes that inherit from a single UI class. Currently, it works something like this, where the individual UIs are initialized lazily and are stored statically:
class UIManager
{
public:
UIManager(); // Constructor
virtual ~UIManager(); // Destructor
template <typename T>
T *getUI()
{
static T ui; // Constructs T, stores it in ui when getUI<T>()
// is first called (note: "static T ui();" would declare a function)
return &ui;
}
};
Called with:
getUI<NameEntryUI>()->activate();
or
getUI<MenuUI>()->render();
I am considering a design change that would allow me to have more than one player, hence more than one game window, hence more than one UIManager. I want all my constructed ui objects to be cleaned up when the UIManager is deleted (currently, because the ui objects are static, they stick around until the program exits).
How can I rewrite the above to remove the ui objects when UIManager is killed?
======================================
Here is the solution I've implemented. Early results are that it is working well.
Basically, I started with the idea suggested by Potatoswatter, which I liked because it was similar to an approach I had started then aborted because I didn't know about typeid(T). I backported the code to use only C++98 features. The key to the whole thing is typeid(T), which lets you map instantiated interfaces to their type in a consistent manner.
class UIManager
{
typedef map<const char *, UserInterface *> UiMapType;
typedef UiMapType::iterator UiIterator;
UiMapType mUis;
public:
UIManager(); // Constructor
virtual ~UIManager() // Destructor
{
// Clear out mUis
for(UiIterator it = mUis.begin(); it != mUis.end(); it++)
delete it->second;
mUis.clear();
}
template <typename T>
T *getUI()
{
static const char *type = typeid(T).name();
T *ui = static_cast<T *>(mUis[type]);
if(!ui) {
ui = new T();
mUis[type] = ui;
}
return ui;
}
};
Currently, you only have storage space allocated for one UI element of each type. It's fundamentally impossible to keep that principle yet have any number of windows.
The quick and dirty solution would be to add a template argument for the window number. If it's a game, and you only have a limited number of players, you can have static storage for some predetermined number of windows.
template <typename T, int N>
T *getUI()
The approach of tying UI identity to the type system is fundamentally flawed, though, and I would recommend a more conventional approach using polymorphism and containers.
One way to identify the objects by type, yet store them dynamically, could look like
class UIManager {
std::map< std::type_index, std::unique_ptr< UIBase > > elements;
template< typename T >
T & GetUI() { // Return reference because null is not an option.
auto & p = elements[ typeid( T ) ];
if ( ! p ) p.reset( new T );
return dynamic_cast< T & >( * p );
}
};
Note that this requires UIBase to have a virtual destructor, or the objects won't be terminated properly when you quit.
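The virtual-destructor requirement can be illustrated with a tiny sketch (UIBase/MenuUI here are hypothetical; the counter exists only for demonstration). Without the virtual destructor, deleting through unique_ptr<UIBase> would never run ~MenuUI:

```cpp
#include <cassert>
#include <memory>

struct UIBase {
    virtual ~UIBase() {}   // virtual: deletion happens through UIBase*
};

struct MenuUI : UIBase {
    static int alive;      // counts live instances, for demonstration only
    MenuUI()  { ++alive; }
    ~MenuUI() { --alive; }
};
int MenuUI::alive = 0;
```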
Since you clearly need multiple objects per type, let's simply store the objects in a std::map<UIManager const*, T>. To pull out a specific managed object, it is looked up in the map for the corresponding type. The tricky bit is later getting rid of the objects which is handled using a list of function objects:
class UIManager
{
std::vector<std::function<void()>> d_cleaners;
UIManager(UIManager const&) = delete;
void operator=(UIManager const&) = delete;
public:
UIManager();
~UIManager();
template <typename T>
T *getUI() {
static std::map<UIManager const*, T> uis;
typename std::map<UIManager const*, T>::iterator it = uis.find(this);
if (it == uis.end()) {
it = uis.insert(std::make_pair(this, T())).first;
this->d_cleaners.push_back([it](){ uis.erase(it); });
}
return &(it->second);
}
};
The respective getUI() function stores a map from the address of the UIManager, i.e., this, to the corresponding object. If there is no such mapping, a new mapping is inserted. In addition, to make sure objects are cleaned up, a cleaner function is registered with this, which simply erase()s the iterator just obtained from the corresponding map. The code is untested but something along those lines should work.
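A stripped-down, self-contained version of this idea (hypothetical Manager/Widget names, C++11) shows that each manager gets its own instance per type, cleaned up when the manager dies:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <vector>

// Condensed sketch of the answer's approach: per-type static maps keyed by
// the owning manager, plus cleanup callbacks run from the destructor.
class Manager {
    std::vector<std::function<void()> > cleaners;
public:
    ~Manager() {
        for (std::size_t i = 0; i < cleaners.size(); ++i) cleaners[i]();
    }
    template <typename T>
    T* get() {
        static std::map<Manager const*, T> objs;  // one map per type T
        typename std::map<Manager const*, T>::iterator it = objs.find(this);
        if (it == objs.end()) {
            it = objs.insert(std::make_pair(this, T())).first;
            // objs has static storage duration, so the lambda may name it
            // without capturing it; the iterator stays valid until erased.
            cleaners.push_back([it]() { objs.erase(it); });
        }
        return &it->second;
    }
};

struct Widget { int value; Widget() : value(0) {} };
```

Map iterators are not invalidated by other insertions or erasures, which is what makes storing `it` in the cleanup closure safe.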

Automatically Instantiating over a bunch of types in C++

In our library we have a number of "plugins", which are implemented in their own cpp files. Each plugin defines a template function and should instantiate this function over a whole bunch of types. The number of types can be quite large, 30-100 of them, and can change depending on some compile-time options. Each instance really has to be compiled and optimized individually; the performance improves by 10-100 times. The question is what is the best way to instantiate all of these functions.
Each plugin is written by a scientist who does not really know C++, so the code inside each plugin must be hidden inside macros or some simple construct. I have a half-baked solution based on a "database" of instances:
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype fp;
};
// By default we don't have any instances
template<int plugin_id, class T>
typename S<plugin_id, T>::ftype S<plugin_id, T>::fp = 0;
Now a user that wants to use a plugin can check the value of
S<SOME_PLUGIN,double>::fp
to see if there is a version of this plugin for the double type. The template instantiation of fp will generate a weak symbol, so the linker will use the "real" instance if we define it in a plugin implementation file. Inside the implementation of SOME_PLUGIN we will have an instantiation
template<> S<SOME_PLUGIN,double>::ftype S<SOME_PLUGIN,double>::fp =
some_plugin_implementation;
This seems to work. The question is whether there is some way to automatically repeat this last statement for all types of interest. The types can be stored in a template class or generated by a template loop. I would prefer something that can be hidden by a macro. Of course this could be solved by an external code generator, but it's hard to do that portably, and it interferes with the build systems of the people who use the library. Putting all the plugins in header files solves the problem, but makes the compiler explode (needing many gigabytes of memory and a very long compilation time).
I've used http://www.boost.org/doc/libs/1_44_0/libs/preprocessor/doc/index.html for such magic, in particular SEQ_FOR_EACH.
You could use a type list from Boost.MPL and then create a class template that recursively eats that list and instantiates every type. This would however make them all nested structs of that class template.
Hmm, I don't think I understand your problem correctly, so apologies if this answer is way off the mark, but could you not have a static member of S, which has a static instance of ftype, and return a reference to that, this way, you don't need to explicitly have an instance defined in your implementation files... i.e.
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype& instance()
{
static ftype _fp = T::create();
return _fp;
}
};
and instead of accessing S<SOME_PLUGIN,double>::fp, you'd do S<SOME_PLUGIN,double>::instance(). To instantiate, at some point you have to call S<>::instance(). Do you need this to happen automagically as well?
EDIT: just noticed that you have a copy constructor, for ftype, changed the above code.. now you have to define a factory method in T called create() to really create the instance.
EDIT: Okay, I can't think of a clean way of doing this automatically, i.e. I don't believe there is a way to (at compile time) build a list of types, and then instantiate. However you could do it using a mix... Hopefully the example below will give you some ideas...
#include <iostream>
#include <typeinfo>
#include <boost/fusion/include/vector.hpp>
#include <boost/fusion/algorithm.hpp>
using namespace std;
// This simply calls the static instantiate function
struct instantiate
{
template <typename T>
void operator()(T const& x) const
{
T::instance();
}
};
// Shared header, presumably all plugin developers will use this header?
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype& instance()
{
cout << "S: " << typeid(S<plugin_id, T>).name() << endl;
static ftype _fp; // = T::create();
return _fp;
}
};
// This is an additional struct, each plugin developer will have to implement
// one of these...
template <int plugin_id>
struct S_Types
{
// All they have to do is add the types that they will support to this vector
static void instance()
{
boost::fusion::vector<
S<plugin_id, double>,
S<plugin_id, int>,
S<plugin_id, char>
> supported_types;
boost::fusion::for_each(supported_types, instantiate());
}
};
// This is a global register, so once a plugin has been developed,
// add it to this list.
struct S_Register
{
S_Register()
{
// Add each plugin here, you'll only have to do this when a new plugin
// is created, unfortunately you have to do it manually, can't
// think of a way of adding a type at compile time...
boost::fusion::vector<
S_Types<0>,
S_Types<1>,
S_Types<2>
> plugins;
boost::fusion::for_each(plugins, instantiate());
}
};
int main(void)
{
// single instance of the register; defining this here effectively
// triggers calls to instance() of all the plugins and supported types...
S_Register reg;
return 0;
}
Basically this uses a fusion vector to define all the possible instances that could exist. It will take a little bit of work from you and the developers, as I've outlined in the code... hopefully it'll give you an idea...

Handling pointer-to-member functions within a hierarchy in C++

I'm trying to code the following situation:
I have a base class providing a framework for handling events. I'm trying to use an array of pointer-to-member functions for that. It goes as follows:
class EH { // EventHandler
virtual void something(); // just to make sure we get RTTI
public:
typedef void (EH::*func_t)();
protected:
func_t funcs_d[10];
protected:
void register_handler(int event_num, func_t f) {
funcs_d[event_num] = f;
}
public:
void handle_event(int event_num) {
(this->*(funcs_d[event_num]))();
}
};
Then the users are supposed to derive other classes from this one and provide handlers:
class DEH : public EH {
public:
typedef void (DEH::*func_t)();
void handle_event_5();
DEH() {
func_t f5 = &DEH::handle_event_5;
register_handler(5, f5); // doesn't compile
........
}
};
This code wouldn't compile, since DEH::func_t cannot be converted to EH::func_t. That makes perfect sense to me. In my case the conversion is safe, since the object under this really is a DEH. So I'd like to have something like this:
void EH::DEH_handle_event_5_wrapper() {
DEH *p = dynamic_cast<DEH *>(this);
assert(p != NULL);
p->handle_event_5();
}
and then instead of
func_t f5 = &DEH::handle_event_5;
register_handler(5, f5); // doesn't compile
in DEH::DEH()
put
register_handler(5, &EH::DEH_handle_event_5_wrapper);
So, finally the question (took me long enough...):
Is there a way to create those wrappers (like EH::DEH_handle_event_5_wrapper) automatically?
Or to do something similar?
What other solutions to this situation are out there?
Thanks.
Instead of creating a wrapper for each handler in all derived classes (not even remotely a viable approach, of course), you can simply use static_cast to convert DEH::func_t to EH::func_t. Member pointers are contravariant: they convert naturally down the hierarchy and they can be manually converted up the hierarchy using static_cast (opposite of ordinary object pointers, which are covariant).
The situation you are dealing with is exactly the reason the static_cast functionality was extended to allow member pointer upcasts. Moreover, the non-trivial internal structure of a member function pointer is also implemented that way specifically to handle such situations properly.
So, you can simply do
DEH() {
func_t f5 = &DEH::handle_event_5;
register_handler(5, static_cast<EH::func_t>(f5));
........
}
I would say that in this case there's no point in defining a typedef name DEH::func_t - it is pretty useless. If you remove the definition of DEH::func_t the typical registration code will look as follows
DEH() {
func_t f5 = static_cast<func_t>(&DEH::handle_event_5);
// ... where `func_t` is the inherited `EH::func_t`
register_handler(5, f5);
........
}
To make it look more elegant you can provide a wrapper for register_handler in DEH or use some other means (a macro? a template?) to hide the cast.
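The template option can be sketched as a member template on EH (an illustrative sketch, not from the answer): register_handler performs the static_cast in one place, and the cast itself only compiles when D actually derives from EH:

```cpp
#include <cassert>

class EH {
    virtual void something() {}        // keep the class polymorphic, as before
public:
    typedef void (EH::*func_t)();
    void handle_event(int event_num) { (this->*(funcs_d[event_num]))(); }
protected:
    // Accepts a pointer to member of any derived class D and upcasts it here,
    // so callers never see the static_cast.
    template <class D>
    void register_handler(int event_num, void (D::*f)()) {
        funcs_d[event_num] = static_cast<func_t>(f);
    }
private:
    func_t funcs_d[10];
};

class DEH : public EH {
public:
    int hits;
    DEH() : hits(0) { register_handler(5, &DEH::handle_event_5); }
    void handle_event_5() { ++hits; }
};
```

Dispatch through the stored pointer is well-defined because the object invoking handle_event really is a DEH.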
This method does not provide you with any means to verify the validity of the handler pointer at the moment of the call (as you could do with dynamic_cast in the wrapper-based version). I don't know though how much you care to have this check in place. I would say that in this context it is actually unnecessary and excessive.
Why not just use virtual functions? Something like
class EH {
public:
void handle_event(int event_num) {
// Do any pre-processing...
// Invoke subclass hook
subclass_handle_event( event_num );
// Do any post-processing...
}
private:
virtual void subclass_handle_event( int event_num ) {}
};
class DEH : public EH {
public:
DEH() { }
private:
virtual void subclass_handle_event( int event_num ) {
if ( event_num == 5 ) {
// ...
}
}
};
You really shouldn't be doing it this way. Check out boost::bind
http://www.boost.org/doc/libs/1_43_0/libs/bind/bind.html
Elaboration:
First, I urge you to reconsider your design. Most event handler systems I've seen involve an external registrar object that maintains mappings of events to handler objects. You have the registration embedded in the EventHandler class and are doing the mapping based on function pointers, which is much less desirable. You're running into problems because you're making an end run around the built-in virtual function behavior.
The point of boost::bind and the like is to create objects out of function pointers, allowing you to leverage object-oriented language features. So an implementation based on boost::bind with your design as a starting point would look something like this:
struct EventCallback
{
virtual ~EventCallback() { }
virtual void handleEvent() = 0;
};
template <class FuncObj>
struct EventCallbackFuncObj : public EventCallback
{
EventCallbackFuncObj(FuncObj funcObj) :
m_funcObj(funcObj) { }
virtual ~EventCallbackFuncObj() { }
virtual void handleEvent()
{
m_funcObj();
}
private:
FuncObj m_funcObj;
};
Then your register_handler function looks something like this:
void register_handler(int event_num, EventCallback* pCallback)
{
m_callbacks[event_num] = pCallback;
}
And your register call would like like:
register_handler(event,
new EventCallbackFuncObj<boost::function<void()> >(boost::bind(&DEH::handle_event_5, this)));
Now you can create a callback object from an (object, member function) of any type and save that as the event handler for a given event without writing customized function wrapper objects.