Virtual Methods or Function Pointers - C++

When implementing polymorphic behavior in C++, one can either use a pure virtual method or function pointers (or functors). For example, an asynchronous callback can be implemented in any of the following ways:
Approach 1
class Callback
{
public:
    Callback();
    virtual ~Callback();   // virtual: Callback is meant to be used as a base class
    void go();
protected:
    virtual void doGo() = 0;
};

// Constructor and destructor omitted here.

void Callback::go()
{
    doGo();
}
To use the callback here, you override the doGo() method in a subclass to call whatever function you want.
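For instance, a minimal sketch of such a subclass (PrintCallback is just an illustrative name, and it assumes Callback's constructor and destructor are defined elsewhere, as the placeholder above suggests):
#include <iostream>

// Hypothetical subclass: doGo() supplies the actual work.
class PrintCallback : public Callback
{
protected:
    void doGo() override { std::cout << "callback fired\n"; }
};

// Usage sketch:
// PrintCallback cb;
// cb.go();   // go() dispatches to the overridden doGo()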
Approach 2
typedef void CallbackFunction(void*);   // function type; CallbackFunction* is a function pointer

class Callback
{
public:
    Callback(CallbackFunction* func, void* param);
    ~Callback();
    void go();
private:
    CallbackFunction* iFunc;
    void* iParam;
};

Callback::Callback(CallbackFunction* func, void* param) :
    iFunc(func),
    iParam(param)
{}

// Destructor omitted here.

void Callback::go()
{
    (*iFunc)(iParam);
}
To use the callback here, you create a function matching the CallbackFunction signature and pass its address to the Callback object.
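For instance (myCallbackFunction and the int payload are made up for illustration):
// A free function matching the CallbackFunction signature.
void myCallbackFunction(void* param)
{
    int* value = static_cast<int*>(param);
    (void)value;   // ... use *value here ...
}

// Usage sketch:
// int data = 42;
// Callback cb(&myCallbackFunction, &data);
// cb.go();   // invokes myCallbackFunction(&data)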
Approach 3
[This was added to the question by me (Andreas); it wasn't written by the original poster]
#include <iostream>

template <typename T>
class Callback
{
public:
    Callback() {}
    ~Callback() {}
    void go()
    {
        T t;
        t();
    }
};

class CallbackTest
{
public:
    void operator()() { std::cout << "Test"; }
};

int main()
{
    Callback<CallbackTest> test;
    test.go();
}
What are the advantages and disadvantages of each implementation?

Approach 1 (Virtual Function)
"+" The "correct way to do it in C++
"-" A new class must be created per callback
"-" Performance-wise an additional dereference through VF-Table compared to Function Pointer. Two indirect references compared to Functor solution.
Approach 2 (Class with Function Pointer)
"+" Can wrap a C-style function for C++ Callback Class
"+" Callback function can be changed after callback object is created
"-" Requires an indirect call. May be slower than functor method for callbacks that can be statically computed at compile-time.
Approach 3 (Class calling T functor)
"+" Possibly the fastest way to do it. No indirect call overhead and may be inlined completely.
"-" Requires an additional Functor class to be defined.
"-" Requires that callback is statically declared at compile-time.
FWIW, Function Pointers are not the same as Functors. Functors (in C++) are classes that provide a function call, typically via operator().
Here is an example functor as well as a template function which utilizes a functor argument:
#include <cstdio>

class TFunctor
{
public:
    void operator()(const char* charstring)
    {
        std::printf("%s", charstring);
    }
};

template<class T>
void CallFunctor(T& functor_arg, const char* charstring)
{
    functor_arg(charstring);
}
int main()
{
TFunctor foo;
CallFunctor(foo,"hello world\n");
}
From a performance perspective, virtual functions and function pointers both result in an indirect function call (i.e. through a register), although virtual functions require an additional load of the vtable pointer prior to loading the function pointer. Using functors (with a non-virtual call) as a parameter to template functions is the highest-performing way to do callbacks, because they can be inlined and, even if not inlined, do not generate an indirect call.

Approach 1
Easier to read and understand
Less possibility of errors (iFunc cannot be NULL, you're not passing around a void* iParam, etc.)
C++ programmers will tell you that this is the "right" way to do it in C++
Approach 2
Slightly less typing to do
VERY slightly faster (calling a virtual method has some overhead, usually about the same as two simple arithmetic operations, so it most likely won't matter)
That's how you would do it in C
Approach 3
Probably the best way to do it when possible. It will have the best performance, it will be type safe, and it's easy to understand (it's the method used by the STL).
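For example, this is essentially how the standard algorithms take comparison functors (a sketch; Descending and sort_descending are illustrative names):
#include <algorithm>
#include <vector>

// The comparison functor is a template parameter of std::sort,
// so the call can be inlined; this is the usual STL style.
struct Descending
{
    bool operator()(int a, int b) const { return a > b; }
};

void sort_descending(std::vector<int>& v)
{
    std::sort(v.begin(), v.end(), Descending());
}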

The primary problem with Approach 2 is that it simply doesn't scale. Consider the equivalent for 100 functions:
class MahClass {
    // 100 pointers of various types
public:
    MahClass() {
        // set all 100 pointers
    }
    MahClass(const MahClass& other) {
        // copy all 100 function pointers
    }
};
The size of MahClass has ballooned, and the time to construct it has also significantly increased. Virtual functions, by contrast, are an O(1) increase in the size of the class and in the time to construct it. Not to mention that with function pointers you, the user, must write all the callbacks for all the derived classes manually, adjusting each pointer to point to the derived class, and must spell out all the function pointer types; what a mess. Not to mention that you might forget one, or set it to NULL, or something equally stupid but totally going to happen, because you're writing 30 classes this way and violating DRY like a parasitic wasp violates a caterpillar.
Approach 3 is only usable when the desired callback is statically knowable.
This leaves Approach 1 as the only usable approach when dynamic method invocation is required.
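A sketch of that run-time selection with Approach 1 (SaveCallback, LoadCallback and make_callback are made-up names; this relies on Callback having a virtual destructor, as in the Approach 1 listing above):
#include <memory>

// Hypothetical derived callbacks, with doGo() overridden as in Approach 1.
class SaveCallback : public Callback
{
protected:
    void doGo() override { /* save something */ }
};

class LoadCallback : public Callback
{
protected:
    void doGo() override { /* load something */ }
};

// The concrete callback is chosen at run-time.
std::unique_ptr<Callback> make_callback(bool save)
{
    if (save)
        return std::make_unique<SaveCallback>();
    return std::make_unique<LoadCallback>();
}

// Usage sketch:
// auto cb = make_callback(userWantsSave);
// cb->go();   // dispatches through the vtable to the chosen doGo()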

It's not clear from your example whether you're creating a utility class or not. Is your Callback class intended to implement a closure, or a more substantial object that you just didn't flesh out?
The first form:
Is easier to read and understand,
Is far easier to extend: try adding methods pause, resume and stop.
Is better at handling encapsulation (presuming doGo is defined in the class).
Is probably a better abstraction, so easier to maintain.
The second form:
Can be used with different methods for doGo, so it's more than just polymorphic.
Could allow (with additional methods) changing the doGo method at run-time, allowing the instances of the object to mutate their functionality after creation.
Ultimately, IMO, the first form is better for all normal cases. The second has some interesting capabilities, though -- but not ones you'll need often.
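A sketch of what that run-time mutation might look like (setFunction is a hypothetical addition to the Approach 2 class, not something from the question; it would also need to be declared in the class definition):
// Hypothetical extra method on the Approach 2 Callback.
void Callback::setFunction(CallbackFunction* func, void* param)
{
    iFunc = func;
    iParam = param;
}

// Usage sketch:
// Callback cb(&firstHandler, &firstData);
// cb.go();                                 // calls firstHandler
// cb.setFunction(&otherHandler, &otherData);
// cb.go();                                 // now calls otherHandler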

One major advantage of the first method is it has more type safety. The second method uses a void * for iParam so the compiler will not be able to diagnose type problems.
A minor advantage of the second method is that it would be less work to integrate with C. But if your code base is only C++, this advantage is moot.

Function pointers are more C-style I would say. Mainly because in order to use them you usually must define a flat function with the same exact signature as your pointer definition.
When I write C++ the only flat function I write is int main(). Everything else is a class object. Out of the two choices I would choose to define a class and override your virtual, but if all you want is to notify some code that some action happened in your class, neither of these choices would be the best solution.
I am unaware of your exact situation, but you might want to peruse design patterns.
I would suggest the observer pattern. It is what I use when I need to monitor a class or wait for some sort of notification.
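A minimal observer-pattern sketch (the names here are illustrative, not from any particular library):
#include <vector>

// Observers register with a subject and get notified of events.
class Observer
{
public:
    virtual ~Observer() = default;
    virtual void notify() = 0;
};

class Subject
{
    std::vector<Observer*> observers;   // non-owning
public:
    void attach(Observer* o) { observers.push_back(o); }
    void somethingHappened()            // call when the monitored event occurs
    {
        for (Observer* o : observers)
            o->notify();
    }
};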

For example, let us look at an interface for adding read functionality to a class:
struct Read_Via_Inheritance
{
virtual void read_members(void) = 0;
};
Any time I want to add another source of reading, I have to inherit from the class and add a specific method:
struct Read_Inherited_From_Cin
: public Read_Via_Inheritance
{
void read_members(void)
{
cin >> member;
}
};
If I want to read from a file, database, or USB, this requires 3 more separate classes. The combinations start to become very ugly with multiple objects and multiple sources.
If I use a functor, which happens to resemble the Visitor design pattern:
struct Reader_Visitor_Interface
{
virtual void read(unsigned int& member) = 0;
virtual void read(std::string& member) = 0;
};
struct Read_Client
{
    void read_members(Reader_Visitor_Interface& reader)
    {
        reader.read(x);
        reader.read(text);
    }
    unsigned int x;
    std::string text;
};
With the above foundation, objects can read from different sources just by supplying different readers to the read_members method:
struct Read_From_Cin
: Reader_Visitor_Interface
{
void read(unsigned int& value)
{
cin>>value;
}
void read(std::string& value)
{
getline(cin, value);
}
};
I don't have to change any of the object's code (a good thing because it is already working). I can also apply the reader to other objects.
Generally, I use inheritance when I am performing generic programming. For example, if I have a Field class, then I can create Field_Boolean, Field_Text and Field_Integer. I can put pointers to their instances into a vector<Field *> and call it a record. The record can perform generic operations on the fields, and doesn't care or know what kind of field is processed.
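A rough sketch of that Field/record idea (the print operation is just an illustrative generic operation, and the class names follow the ones mentioned above):
#include <iostream>
#include <string>
#include <vector>

class Field
{
public:
    virtual ~Field() = default;
    virtual void print(std::ostream& os) const = 0;
};

class Field_Integer : public Field
{
    int value = 0;
public:
    void print(std::ostream& os) const override { os << value; }
};

class Field_Text : public Field
{
    std::string value;
public:
    void print(std::ostream& os) const override { os << value; }
};

// A record is just a vector of field pointers; it can print each field
// without knowing the concrete field types.
void print_record(const std::vector<Field*>& record, std::ostream& os)
{
    for (const Field* f : record)
        f->print(os);
}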

Change to pure virtual, first off. Then inline it. That should remove any method call overhead at all, so long as inlining doesn't fail (and it won't if you force it).
Otherwise you may as well use C, because this is the only really useful major feature of C++ compared to C. You will always call the method indirectly and it can't be inlined, so it will be less efficient.

Related

Is there a way to overload classes in a way similar to function overloading?

We can overload functions by giving them a different number of parameters. For example, functions someFunc() and someFunc(int i) can do completely different things.
Is it possible to achieve the same effect on classes? For example, having one class name but creating one class if a function is not called and a different class if that function is called. For example, if I have a dataStorage class, I want the internal implementation to be a list if only add is called, but want it to be a heap if both add and pop are called.
I am trying to implement this in C++, but I am curious if this is even possible. Examples in other languages would also help. Thanks!
The type of an object must be completely known at the point of definition. The type cannot depend on what is done with the object later.
For the dataStorage example, you could define dataStorage as an abstract class. For example:
struct dataStorage {
virtual ~dataStorage() = default;
virtual void add(dataType data) = 0;
// And anything else necessarily common to all implementations.
};
There could be a "default" implementation that uses a list.
struct dataList : public dataStorage {
void add(dataType data) override;
// And whatever else is needed.
};
There could be another implementation that uses a heap.
struct dataHeap : public dataStorage {
void add(dataType data) override;
void pop(); // Maybe return `dataType`, if desired
// And whatever else is needed.
};
Functions that need only to add data would work on references to dataStorage. Functions that need to pop data would work on references to dataHeap. When you define an object, you would choose dataList if the compiler allows it, dataHeap otherwise. (The compiler would not allow passing a dataList object to a function that requires a dataHeap&.) This is similar to what you asked for, except it does require manual intervention. On the bright side, you can use the compiler to tell you which decision to make.
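A usage sketch of that split (assuming dataType and the classes above):
// Needs only to add: accepts either implementation.
void fill(dataStorage& storage, dataType data)
{
    storage.add(data);            // works for dataList and dataHeap alike
}

// Needs to pop: the compiler only accepts a heap-backed storage.
void drain(dataHeap& heap)
{
    heap.pop();
}

// At the point of definition you pick the concrete type:
// dataList list;   // fine for fill(list, x); drain(list) will not compile
// dataHeap heap;   // works with both fill and drain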
A downside of this approach is that changes can get messy. There is additional maintenance and runtime overhead compared to simply always using a heap (one class, no inheritance). You should do some performance measurements to ensure that the cost is worth it. Sometimes simplicity is the best design, even if it is not optimal in all cases.

How to write java like argument-level implementation of interface in C++?

One of the nice things in Java is implementing interface. For example consider the following snippet:
interface SimpleInterface
{
    void doThis();
}
...
SimpleInterface simple = new SimpleInterface()
{
    @Override public void doThis() { /* Do something here */ }
};
The only way I could see this being done in C++ is through a lambda, or by passing an instance of function<> to a class. But I am actually asking whether something like this is possible at all. I have classes which implement a particular interface, and these interfaces contain just 1-2 methods. I can't write a new file for each one, or add a method to a class which accepts a function<> or lambda so that it can decide what to do. Is this strictly a C++ limitation? Will it ever be supported?
Somehow, I wanted to write something like this:
thisClass.setAction(int i , new SimpleInterface()
{
protected:
virtual void doThis(){}
});
One thing though is that I haven't check the latest spec for C++14 and I wanted to know if this is possible somehow.
Thank you!
Will it ever be supported?
You mean, will the language designers ever add a dirty hack where the only reason it ever existed in one language was because those designers were too stupid to add the feature they actually needed?
Not in this specific instance.
You can create a derived class that derives from it and then uses a lambda, and then use that at your various call sites. But you'd still need to create one converter for each interface.
struct FunctionalInterfaceImpl : SimpleInterface {
FunctionalInterfaceImpl(std::function<void()> f)
: func(f) {}
std::function<void()> func;
void doThis() { func(); }
};
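A usage sketch (this assumes SimpleInterface is the C++ abstract class being adapted, with a virtual doThis() and, ideally, a virtual destructor):
#include <functional>
#include <iostream>
#include <memory>

// Wrap a lambda in the adapter and use it through the interface.
int main()
{
    auto action = std::make_unique<FunctionalInterfaceImpl>(
        []{ std::cout << "doing this\n"; });

    SimpleInterface& iface = *action;   // used polymorphically, as setAction would
    iface.doThis();                     // runs the wrapped lambda
}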
You seem to think each class needs a separate .h and .cpp file. C++ allows you to define a class at any scope, including local to a function:
void foo() {
struct SimpleInterfaceImpl : SimpleInterface
{
protected:
void doThis() override {}
};
thisClass.setAction(i, new SimpleInterfaceImpl());
}
Of course, you have a naked new in there which is probably a bad idea. In real code, you'd want to allocate the instance locally, or use a smart pointer.
This is indeed a "limitation" of C++ (and C#, as I found while doing some research some time ago). Anonymous Java classes are one of its unique features.
The closest way you can emulate this is with function objects and/or local types. C++11 and later offers lambdas, which are syntactic sugar for those two things, precisely for this reason, and save us a lot of writing. Thank goodness for that; before C++11 one had to define a type for every little thing.
Please note that for interfaces that are made up of a single method, then function objects/lambdas/delegates(C#) are actually a cleaner approach. Java uses interfaces for this case as a "limitation" of its own. It would be considered a Java-ism to use single-method interfaces as callbacks in C++.
Local types are actually a pretty good approximation, the only drawback being that you are forced to name the types (see edit) (a tiresome obligation, which one takes over when using static languages of the C family).
You don't need to allocate an object with new to use it polymorphically. It can be a stack object, which you pass by reference (or pointer, for extra anachronism). For instance:
#include <utility>   // for std::forward

struct This {};
struct That {};
class Handler {
public:
    virtual ~Handler () = default;
    virtual void handle (This) = 0;
    virtual void handle (That) = 0;
};
class Dispatcher {
Handler& handler;
public:
Dispatcher (Handler& handler): handler(handler) { }
template <typename T>
void dispatch (T&& obj) { handler.handle(std::forward<T>(obj)); }
};
void f ()
{
struct: public Handler {
void handle (This) override { }
void handle (That) override { }
} handler;
Dispatcher dispatcher { handler };
dispatcher.dispatch(This {});
dispatcher.dispatch(That {});
}
Also note the override specifier offered by C++11, which has more or less the same purpose as the @Override annotation (it generates a compile error if the member function (method) does not actually override anything).
I have never heard about this feature being supported or even discussed, and I personally don't see it even being considered as a feature in C++ community.
EDIT right after finishing this post, I realised that there is no need to name local types (naturally), so the example becomes even more java-friendly. The only difference being that you cannot define a new type within an expression. I have updated the example accordingly.
In C++, interfaces are classes which have pure virtual functions in them, e.g.:
class Foo {
public:
    virtual void Function() = 0;
};
Every single class that inherits this class must implement this function.
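For example (Bar is just an illustrative name):
// Illustrative concrete class: it must implement Function() to be instantiable.
class Bar : public Foo {
public:
    void Function() override { /* implementation */ }
};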

virtual overloading vs `std::function` member?

I'm in a situation where I have a class, let's call it Generic. This class has members and attributes, and I plan to use it in a std::vector<Generic> or similar, processing several instances of this class.
Also, I want to specialize this class, the only difference between the generic and specialized objects would be a private method, which does not access any member of the class (but is called by other methods). My first idea was to simply declare it virtual and overload it in specialized classes like this:
class Generic
{
// all other members and attributes
private:
virtual float specialFunc(float x) const =0;
};
class Specialized_one : public Generic
{
private:
virtual float specialFunc(float x) const{ return x;}
};
class Specialized_two : public Generic
{
private:
virtual float specialFunc(float x) const{ return 2*x; }
};
And thus I guess I would have to use a std::vector<Generic*>, and create and destroy the objects dynamically.
A friend suggested using a std::function<> attribute for my Generic class and passing specialFunc as an argument to the constructor, but I am not sure how to do it properly.
What would be the advantages and drawbacks of these two approaches, and are there other (better ?) ways to do the same thing ? I'm quite curious about it.
For the details, the specialization of each object I instantiate would be determined at runtime, depending on user input. And I might end up with a lot of these objects (not yet sure how many), so I would like to avoid any unnecessary overhead.
virtual functions and overloading model an is-a relationship while std::function models a has-a relationship.
Which one to use depends on your specific use case.
Using std::function is perhaps more flexible as you can easily modify the functionality without introducing new types.
Performance should not be the main decision point here unless this code is provably (i.e. you measured it) the tight loop bottleneck in your program.
First of all, let's throw performance out the window.
If you use virtual functions, as you stated, you may end up with a lot of classes with the same interface:
class generic {
    virtual void f(float x);
};
class spec1 : public generic {
    virtual void f(float x);
};
class spec2 : public generic {
    virtual void f(float x);
};
Using std::function<void(float)> as a member would allow you to avoid all the specializations:
class meaningful_class_name {
std::function<void(float)> f;
public:
meaningful_class_name(std::function<void(float)> const& p_f) : f(p_f) {}
};
In fact, if this is the ONLY thing you're using the class for, you might as well just remove it, and use a std::function<void(float)> at the level of the caller.
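For instance, a sketch of that caller-level use of std::function, with no wrapper class at all (the lambdas are arbitrary stand-ins for the specializations):
#include <functional>
#include <iostream>
#include <vector>

int main()
{
    // The callers just hold the std::function objects directly.
    std::vector<std::function<void(float)>> actions;
    actions.push_back([](float x) { std::cout << x << '\n'; });       // "spec one"
    actions.push_back([](float x) { std::cout << 2 * x << '\n'; });   // "spec two"

    for (auto& f : actions)
        f(1.5f);
}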
Advantages of std::function:
1) Less code (1 class for N functions, whereas the virtual method requires N classes for N functions. I'm making the assumption that this function is the only thing that's going to differ between classes).
2) Much more flexibility (You can pass in capturing lambdas that hold state if you want to).
3) If you write the class as a template, you could use it for all kinds of function signatures if needed.
Using std::function solves whatever problem you're attempting to tackle with virtual functions, and it seems to do it better. However, I'm not going to assert that std::function will always be better than a bunch of virtual functions in several classes. Sometimes, these functions have to be private and virtual because their implementation has nothing to do with any outside callers, so flexibility is NOT an advantage.
Disadvantages of std::function:
1) I was about to write that you can't access the private members of the generic class, but then I realized that you can modify the std::function in the class itself with a capturing lambda that holds this. Given the way you outlined the class however, this shouldn't be a problem since it seems to be oblivious to any sort of internal state.
What would be the advantages and drawbacks of these two approaches, and are there other (better ?) ways to do the same thing ?
The issue I can see is "how do you want your class defined?" (as in, what is the public interface?)
Consider creating an API like this:
class Generic
{
// all other members and attributes
explicit Generic(std::function<float(float)> specialFunc);
};
Now, you can create any instance of Generic, without care. If you have no idea what you will place in specialFunc, this is the best alternative ("you have no idea" means that clients of your code may decide in one month to place a function from another library there, an identical function ("receive x, return x"), accessing some database for the value, passing a stateful functor into your function, or whatever else).
Also, if the specialFunc can change for an existing instance (i.e. create instance with specialFunc, use it, change specialFunc, use it again, etc) you should use this variant.
This variant may be imposed on your code base by other constraints. (for example, if want to avoid making Generic virtual, or if you need it to be final for other reasons).
If (on the other hand) your specialFunc can only be a choice from a limited number of implementations, and client code cannot decide later that it wants something else, i.e. you only have the identity function and doubling the value, like in your example, then you should rely on specializations, like in the code in your question.
TLDR: Decide based on the usage scenarios of your class.
Edit: regarding better (or at least alternative) ways to do this ... You could inject the specialFunc into your class on an as-needed basis:
That is, instead of this:
class Generic
{
public:
Generic(std::function<float(float)> f) : specialFunc{f} {}
void fancy_computation2() { 2 * specialFunc(2.); }
void fancy_computation4() { 4 * specialFunc(4.); }
private:
std::function<float(float)> specialFunc;
};
You could write this:
class Generic
{
public:
Generic() {}
void fancy_computation2(std::function<float(float)> f) { 2 * f(2.); }
void fancy_computation4(std::function<float(float)> f) { 4 * f(4.); }
private:
};
This offers you more flexibility (you can use different special functions with single instance), at the cost of more complicated client code. This may also be a level of flexibility that you do not want (too much).

c++ switch vs. member function pointer vs. virtual inheritance

I am trying to analyze the trade offs between various methods of achieving polymorphism. I need a list of objects with some similarities and some differences in member functions. The options I see are as follows:
1. Have a flag in each object, and a switch statement in each function. The value of the flag directs each object to its specific section of each function.
2. Have an array of member function pointers in the object, which are assigned upon construction. Then, I call that function pointer to get the correct member function.
3. Have a virtual base class with several derived classes. One drawback to this is that my list will now have to contain pointers, and not the objects themselves.
My understanding is that the pointer lookups from the list in option 3 will take longer than the member function lookups of option 2 because of the guaranteed proximity of member functions.
What are some of the benefits/drawbacks of these options? My priority is performance over readability.
Is there any other method for polymorphism?
have a flag in each object, and a switch statement in each function. The value of the flag directs each object to its specific section of each function
OK, so this could make sense if very little code varies based on the flag.
This minimises the amount of (duplicated) code which has to fit in cache, and avoids any function call indirection. Under some circumstances these benefits could outweigh the extra cost of the switch statement.
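A sketch of what option 1 looks like (the Shape type and its members are made up for illustration):
// Option 1 sketch: one type, a flag, and a switch in each member function.
struct Shape
{
    enum Kind { Circle, Square };
    Kind kind;
    float size;

    float area() const
    {
        switch (kind) {
        case Circle: return 3.14159f * size * size;
        case Square: return size * size;
        }
        return 0.0f;   // unreachable; keeps the compiler happy
    }
};

// Usage sketch:
// Shape s{Shape::Circle, 2.0f};
// float a = s.area();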
have an array of member function pointers in the object, which are assigned upon construction. Then, I call that function pointer to get the correct member function
You save one indirection (to the vtable), but also make your objects bigger so fewer fit in cache. It's impossible to say which will dominate, so you'll just have to profile, but it isn't an obvious win.
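And a sketch of option 2, with a pointer-to-member-function stored per object (again, the names are made up):
// Option 2 sketch: each object carries a pointer to the member function to use.
struct ShapeFP
{
    float size;
    float (ShapeFP::*areaFunc)() const;   // set at construction, called via area()

    float circleArea() const { return 3.14159f * size * size; }
    float squareArea() const { return size * size; }

    float area() const { return (this->*areaFunc)(); }
};

// Usage sketch:
// ShapeFP s{2.0f, &ShapeFP::circleArea};
// float a = s.area();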
have a virtual base class with several derived classes. One drawback to this is that my list will now have to contain pointers, and not the objects themselves
If your code paths are different enough that separating them completely is reasonable, this is the cleanest solution. If you need to optimise it, you can either use a specialised allocator to ensure they're sequential (even if not sequential in your container), or move the objects directly into your container using a clever wrapper similar to Boost.Any. You'll still get the vtable indirection, but I'd prefer this to #2 unless profiling shows it's really a problem.
So, there are several questions you should answer before you can decide:
how much code is shared, and how much varies?
how big are the objects, and will a table of inline function pointers materially affect your cache miss stats?
and, after you've answered those, you should just profile anyway.
One way to achieve faster polymorphism is through the CRTP idiom and static polymorphism:
#include <iostream>

template<typename T>
struct base
{
void f()
{
static_cast<T*>( this )->f_impl();
}
};
struct foo : public base<foo>
{
void f_impl()
{
std::cout << "foo!" << std::endl;
}
};
struct bar : public base<bar>
{
void f_impl()
{
std::cout << "bar!" << std::endl;
}
};
struct quux : public base<quux>
{
void f_impl()
{
std::cout << "quux!" << std::endl;
}
};
template<typename T>
void call_f( base<T>& something )
{
something.f();
}
int main()
{
foo my_foo;
bar my_bar;
quux my_quux;
call_f( my_foo );
call_f( my_bar );
call_f( my_quux );
}
This outputs:
foo!
bar!
quux!
Static-polymorphism performs far better than virtual dispatch, because the compiler knows which function will be called at compile-time, and it could inline everything.
While it provides static binding, it cannot perform polymorphism in the common heterogeneous-container way, because every instantiation of the base class template is a different type.
However, that could be achieved with something like boost::any.
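For a closed set of types, std::variant (C++17) is another way to get such a heterogeneous container while keeping static dispatch per element; this is a sketch building on the foo/bar/quux classes above, not part of the original answer:
#include <variant>
#include <vector>

// std::visit dispatches to whichever alternative is currently stored.
void container_demo()
{
    std::vector<std::variant<foo, bar, quux>> objects;
    objects.push_back(foo{});
    objects.push_back(bar{});
    objects.push_back(quux{});

    for (auto& obj : objects)
        std::visit([](auto& x) { x.f(); }, obj);
}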
With a switch statement, if you want to add a new class then you need to modify everywhere where the class is switched on, which may be in various places in your code base. There may also be places outside your code base that need to be modified, but perhaps you know this isn't the case in this scenario.
With an array of member function pointers within each object, the only downside is that you duplicate that memory for every object. If you know there are only one or two "virtual" functions, though, then it's a good option.
As for virtual functions, you are right in that you have to heap-allocate the objects (or manually manage the memory), but it is the most extensible option.
If you aren't after extensibility, then (1) or (2) may be your best option. As always, the only way to tell is to measure. I know that many compilers will implement a switch statement in some cases as a jump table, which essentially comes out the same as a virtual function table. For small numbers of case statements they may just use binary-search branching.
Measure!

Dynamically construct function

I fear something like this is answered somewhere on this site, but I can't find it because I don't even know how to formulate the question. So here's the problem:
I have a voxel drawing function. First I calculate offsets, angles and stuff, and then I do the drawing. But I make a few versions of every function, because sometimes I want to copy a pixel, sometimes blit, sometimes blit a 3*3 square for every pixel for a smoothing effect, sometimes just copy a pixel to n*n pixels on the screen if the object is resized. And there are tons of versions of that small part in the center of the function.
What can I do instead of writing 10 versions of the same function which differ only by a central part of the code? For performance reasons, passing a function pointer as an argument is not an option. I'm not sure making them inline will do the trick, because the arguments I send differ: sometimes I calculate volume (the Z value), sometimes I know the pixels are drawn from bottom to top.
I assume there's some way of doing this stuff in C++ everybody knows about.
Please tell me what I need to learn to do this. Thanks.
The traditional OO approaches to this are the template method pattern and the strategy pattern.
Template Method
The first is an extension of the technique described in Vincenzo's answer: instead of writing a simple non-virtual wrapper, you write a non-virtual function containing the whole algorithm. Those parts that might vary, are virtual function calls.
The specific arguments needed for a given implementation, are stored in the derived class object that provides that implementation.
eg.
class VoxelDrawer {
protected:
virtual void copy(Coord from, Coord to) = 0;
// any other functions you might want to change
public:
virtual ~VoxelDrawer() {}
void draw(arg) {
for (;;) {
// implement full algorithm
copy(a,b);
}
}
};
class SmoothedVoxelDrawer: public VoxelDrawer {
int radius; // algorithm-specific argument
void copy(Coord from, Coord to) {
blit(from.dx(-radius).dy(-radius),
to.dx(-radius).dy(-radius),
2*radius, 2*radius);
}
public:
SmoothedVoxelDrawer(int r) : radius(r) {}
};
Strategy
This is similar, but instead of using inheritance, you pass a polymorphic Copier object as an argument to your function. It's more flexible in that it decouples your various copying strategies from the specific function, and you can re-use your copying strategies in other functions.
struct VoxelCopier {
virtual void operator()(Coord from, Coord to) = 0;
};
struct SmoothedVoxelCopier: public VoxelCopier {
// etc. as for SmoothedVoxelDrawer
};
void draw_voxels(arguments, VoxelCopier &copy) {
for (;;) {
// implement full algorithm
copy(a,b);
}
}
Although tidier than passing in a function pointer, neither the template method nor the strategy are likely to have better performance than just passing a function pointer: runtime polymorphism is still an indirect function call.
Policy
The modern C++ equivalent of the strategy pattern is the policy pattern. This simply replaces run-time polymorphism with compile-time polymorphism to avoid the indirect function call and enable inlining.
// you don't need a common base class for policies,
// since templates use duck typing
struct SmoothedVoxelCopier {
int radius;
void copy(Coord from, Coord to) { ... }
};
template <typename CopyPolicy>
void draw_voxels(arguments, CopyPolicy cp) {
for (;;) {
// implement full algorithm
cp.copy(a,b);
}
}
Because of type deduction, you can simply call
draw_voxels(arguments, SmoothedVoxelCopier(radius));
draw_voxels(arguments, OtherVoxelCopier(whatever));
NB. I've been slightly inconsistent here: I used operator() to make my strategy call look like a regular function, but a normal method for my policy. So long as you choose one and stick with it, this is just a matter of taste.
CRTP Template Method
There's one final mechanism, which is the compile-time polymorphism version of the template method, and uses the Curiously Recurring Template Pattern.
template <typename Impl>
class VoxelDrawerBase {
protected:
Impl& impl() { return *static_cast<Impl*>(this); }
void copy(Coord from, Coord to) {...}
// *optional* default implementation, is *not* virtual
public:
void draw(arg) {
for (;;) {
// implement full algorithm
impl().copy(a,b);
}
}
};
class SmoothedVoxelDrawer: public VoxelDrawerBase<SmoothedVoxelDrawer> {
int radius; // algorithm-specific argument
void copy(Coord from, Coord to) {
blit(from.dx(-radius).dy(-radius),
to.dx(-radius).dy(-radius),
2*radius, 2*radius);
}
public:
SmoothedVoxelDrawer(int r) : radius(r) {}
};
Summary
In general I'd prefer the strategy/policy patterns for their lower coupling and better reuse, and choose the template method pattern only where the top-level algorithm you're parameterizing is genuinely set in stone (ie, when you're either refactoring existing code or are really sure of your analysis of the points of variation) and reuse is genuinely not an issue.
It's also really painful to use the template method if there is more than one axis of variation (that is, you have multiple methods like copy, and want to vary their implementations independently). You either end up with code duplication or mixin inheritance.
I suggest using the NVI idiom.
You have your public method which calls a private function that implements the logic that must differ from case to case.
Derived classes will have to provide an implementation of that private function that specializes them for their particular task.
Example:
class A {
public:
void do_base() {
// [pre]
specialized_do();
// [post]
}
private:
virtual void specialized_do() = 0;
};
class B : public A {
private:
void specialized_do() {
// [implementation]
}
};
The advantage is that you can keep a common implementation in the base class and detail it as required for any subclass (which just need to reimplement the specialized_do method).
The disadvantage is that you need a different type for each implementation, but if your use case is drawing different UI elements, this is the way to go.
You could simply use the strategy pattern
So, instead of something like
void do_something_one_way(...)
{
//blah
//blah
//blah
one_way();
//blah
//blah
}
void do_something_another_way(...)
{
//blah
//blah
//blah
another_way();
//blah
//blah
}
You will have
void do_something(...)
{
//blah
//blah
//blah
any_which_way();
//blah
//blah
}
any_which_way could be a lambda, a functor, a virtual member function of a strategy class passed in. There are many options.
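For instance, a sketch of the template-parameter (compile-time) variant, which lets the compiler inline the varying step:
#include <iostream>

// The varying step is a template parameter, so the compiler can inline it;
// any lambda or functor with a matching call signature works.
template <typename AnyWhichWay>
void do_something(AnyWhichWay any_which_way)
{
    //blah
    //blah
    any_which_way();
    //blah
    //blah
}

int main()
{
    do_something([]{ std::cout << "one way\n"; });
    do_something([]{ std::cout << "another way\n"; });
}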
Are you sure that "passing a function pointer as an argument is not an option"?
Does it really slow it down?
You could use higher order functions, if your 'central part' can be parameterized nicely.
Here is a simple example of a function that returns a function which adds n to its argument:
#include <iostream>
#include<functional>
std::function<int(int)> n_adder(int n)
{
return [=](int x){return x+n;};
}
int main()
{
auto add_one = n_adder(1);
std::cout<<add_one(5);
}
You can use either Template Method pattern or Strategy pattern.
Usually Template method pattern is used in white-box frameworks, when you need to know about the internal structure of a framework to correctly subclass a class.
Strategy pattern is usually used in black-box frameworks, when you should not know about the implementation of the framework, since you only need to understand the contract of the methods you should implement.
For performance reasons, passing a function pointer as an argument is not an option.
Are you sure that passing one additional parameter will cause performance problems? In that case you may see similar performance penalties if you use OOP techniques like Template method or Strategy. It is usually necessary to use a profiler to determine the source of the performance degradation. Virtual calls, passing additional parameters, and calling a function through a pointer are usually very cheap compared to complex algorithms. You may find that these techniques consume an insignificant percentage of CPU resources compared to other code.
I'm not sure making them inline will do the trick, because arguments I send differ: sometimes I calculate volume(Z value), sometimes I know pixels are drawn from bottom to top.
You could pass all the parameters required for drawing in all cases. Alternatively, if you use the Template method pattern, a base class could provide methods that return the data required for drawing in different cases. With the Strategy pattern, you could pass an object that provides this kind of data to the Strategy implementation.