C++ with static template methods

Template methods as in the design pattern, NOT C++ templates.
So, say that you would like to do some searching with different algorithms - Linear and Binary for instance. And you would also like to run those searches through some common routines so that you could, for instance, automatically record the time that a given search took and so on.
The template method pattern fits the bill beautifully. The only problem is that, as far as I've managed to dig around, you can't actually implement this behaviour via static methods in C++, 'cause you would also need to make the methods virtual(?), which is of course a bit of a bummer because I don't have any need to alter the state of the search object. I would just like to pin all the searching-thingies to their own namespace.
So the question is: Would one want to use something like function/method pointers instead? Or would one just use namespaces to do the job?
It's pretty hard to live with this kind of (dare I say) limitation in C++, as something like this would be a breeze in Java.
Edit:
Oh yeah, and since this is a school assignment, the use of external libraries (other than STL) isn't really an option. Sorry for the hassle.

I don't see why you'd need the template method pattern.
Why not just define those algorithms as functors that can be passed to your benchmarking function?
struct BinarySearch { // functor implementing a specific search algorithm
template <typename iter_type>
void operator()(iter_type first, iter_type last){ ....}
};
template <typename data_type, typename search_type>
void BenchmarkSearch(data_type& data, search_type search){ // general benchmarking/bookkeeping function
    // init timer
    search(data.begin(), data.end()); // call matches the iterator-pair operator() above
    // compute elapsed time
}
and then call it like this:
int main(){
std::vector<int> vec;
vec.push_back(43);
vec.push_back(2);
vec.push_back(8);
vec.push_back(13);
BenchmarkSearch(vec, BinarySearch());
BenchmarkSearch(vec, LinearSearch()); // assuming more search algorithms are defined
BenchmarkSearch(vec, SomeOtherSearch());
}
Of course, another approach, which is a bit closer to what you initially wanted, could be to use CRTP (A pretty clever pattern for emulating virtual functions at compile-time - and it works with static methods too):
template <typename T>
struct SearchBase { // base class implementing general bookkeeping
    static void Search() {
        // do general bookkeeping, initialize timers, whatever you need
        T::Search(); // call the derived search function
        // wrap up, do whatever bookkeeping is left
    }
};
struct LinearSearch : SearchBase<LinearSearch> // derived class implementing the specific search algorithms
{
static void Search(){
// Perform the actual search
}
};
Then you can call the static functions:
SearchBase<LinearSearch>::Search();
SearchBase<BinarySearch>::Search();
SearchBase<SomeOtherSearch>::Search();
As a final note, it might be worth mentioning that both of these approaches should carry zero overhead. Unlike anything involving virtual functions, the compiler is fully aware of which functions are called here, and can and will inline them, resulting in code that is just as efficient as if you'd hand-coded each case.

Here's a simple templatized version that would work
#include <algorithm>
#include <ctime>
#include <iostream>
#include <vector>

template <typename F, typename C>
clock_t timer(F f, C c)
{
    clock_t begin = clock();
    f(c);
    return clock() - begin;
}

void mySort(std::vector<int> vec) // takes a copy; we only care about timing the sort
{ std::sort(vec.begin(), vec.end()); }

int main()
{
    std::vector<int> vec;
    std::cout << timer(mySort, vec) << std::endl;
    return 0;
}

static doesn't say "I don't need to alter an object's state", it says "I don't need an object". If you need virtual dispatch then you need an object on which to perform virtual dispatch, as virtual dispatch is polymorphism based on the runtime type of an object. const would be "I don't need to alter an object's state", and you can have methods which are both virtual and const.
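Purely for illustration (the names here are invented, not from the question), such a declaration looks like this:
#include <vector>

struct Searcher {
    virtual ~Searcher() {}
    // virtual AND const: overridable via an object, yet promises not to modify that object
    virtual bool find(const std::vector<int>& data, int value) const = 0;
};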

You can always implement it the same way you would in Java - pass in an abstract class ISearchable that has a search() method, and override that in LinearSearch and BinarySearch...
You can also use a function pointer (which would be my preferred solution) or a boost::function, or templatize your function and pass in a functor.
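A minimal sketch of the function-pointer route, with made-up names (a free search function handed to a small timing wrapper):
#include <ctime>
#include <vector>

typedef bool (*SearchFn)(const std::vector<int>&, int); // pointer to a free search function

bool linearSearch(const std::vector<int>& data, int value) {
    for (std::vector<int>::const_iterator it = data.begin(); it != data.end(); ++it)
        if (*it == value) return true;
    return false;
}

clock_t timedSearch(SearchFn search, const std::vector<int>& data, int value) {
    clock_t begin = clock();
    search(data, value);       // indirect call through the pointer
    return clock() - begin;
}
// Usage: clock_t t = timedSearch(&linearSearch, vec, 42);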

Genericity VS Polymorphic data structures [duplicate]

I am trying to get my head around applying template programming (and at some future point, template metaprogramming) to real-world scenarios. One problem I am finding is that C++ Templates and Polymorphism don't always play together the way I want.
My question is if the way I'm trying to apply template programming is improper (and I should use plain old OOP) or if I'm still stuck in the OOP mindset.
In this particular case, I am trying to solve a problem using the strategy-pattern. I keep running into the problem where I end up wanting something to behave polymorphically which templates don't seem to support.
OOP Code using composition:
class Interpolator {
public:
    Interpolator(ICacheStrategy* const c, IDataSource* const d);
    Value GetValue(const double);
};
void main(...) {
Interpolator* i;
if (param == 1)
i = new Interpolator(new InMemoryStrategy(...), new TextFileDataSource(...));
else if (param == 2)
i = new Interpolator(new InMemoryStrategy(...), new OdbcDataSource(...));
else if (param == 3)
i = new Interpolator(new NoCachingStrategy(...), new RestDataSource(...));
while (run) {
double input = WaitForRequest();
SendRequest(i->GetValue(input));
}
}
Potential Template Version:
template<class TCacheStrategy, class TDataSource>
class Interpolator {
public:
    Interpolator();
    Value GetValue(const double);               // may not be the best way but
    void ConfigCache(const ConfigObject&);      // just to illustrate Cache/DS
    void ConfigDataSource(const ConfigObject&); // need to be configured
};
//Possible way of doing main?
void main(...) {
if(param == 1)
DoIt(Interpolator<InMemoryStrategy, TextFileDataSource>(), c, d);
else if(param == 2)
DoIt(Interpolator<InMemoryStrategy, OdbcDataSource>(), c, d);
else if(param == 3)
DoIt(Interpolator<NoCachingStrategy, RestDataSource>(), c, d);
}
template<class T>
void DoIt(T& t, ConfigObject c, ConfigObject d) {
t.ConfigCache(c);
t.ConfigDataSource(d);
while(run) {
double input = WaitForRequest();
SendRequest(t.GetValue(input));
}
}
When I try to convert the OOP implementation to a template-based implementation, the Interpolator code can be translated without a lot of pain. Basically, replace the "interfaces" with Template type parameters, and add a mechanism to either pass in an instance of Strategy/DataSource or configuration parameters.
But when I get down to the "main", it's not clear to me how that should be written to take advantage of templates in the style of template meta programming. I often want to use polymorphism, but it doesn't seem to play well with templates (at times, it feels like I need Java's type-erasure generics... ugh).
What I often find I want to do is have something like TemplateType<?, ?> x = new TemplateType<X, Y>() where x doesn't care what X, Y is.
In fact, this is often my problem when using templates.
Do I need to apply one more level of templates?
Am I trying to use my shiny new power template wrench to install an OOP nail into a PCI slot?
Or am I just thinking of this all wrong when it comes to template programming?
[Edit] A few folks have pointed out this is not actually template metaprogramming, so I've reworded the question slightly. Perhaps that's part of the problem--I have yet to grok what TMP really is.
Templates provide static polymorphism: you specify a template parameter at compile time implementing the strategy. They don't provide dynamic polymorphism, where you supply an object at runtime with virtual member functions that implement the strategy.
Your example template code will create three different classes, each of which contains all the Interpolator code, compiled using different template parameters and possibly inlining code from them. That probably isn't what you want from the POV of code size, although there's nothing categorically wrong with it. Supposing that you were optimising to avoid function call overhead, then it might be an improvement on dynamic polymorphism. More likely it's overkill. If you want to use the strategy pattern dynamically, then you don't need templates, just make virtual calls where relevant.
You can't have a variable of type MyTemplate<?> (except appearing in another template before it's instantiated). MyTemplate<X> and MyTemplate<Y> are completely unrelated classes (even if X and Y are related), which perhaps just so happen to have similar functions if they're instantiated from the same template (which they needn't be - one might be a specialisation). Even if they are, if the template parameter is involved in the signatures of any of the member functions, then those functions aren't the same, they just have the same names. So from the POV of dynamic polymorphism, instances of the same template are in the same position as any two classes - they can only play if you give them a common base class with some virtual member functions.
So, you could define a common base class:
class InterpolatorInterface {
public:
    virtual Value GetValue(const double) = 0;
    virtual void ConfigCache(const ConfigObject&) = 0;
    virtual void ConfigDataSource(const ConfigObject&) = 0;
    virtual ~InterpolatorInterface() {}
};
Then:
template <typename TCacheStrategy, typename TDataSource>
class Interpolator: public InterpolatorInterface {
...
};
Now you're using templates to create your different kinds of Interpolator according to what's known at compile time (so calls from the interpolator to the strategies are non-virtual), and you're using dynamic polymorphism to treat them the same even though you don't know until runtime which one you want (so calls from the client to the interpolator are virtual). You just have to remember that the two are pretty much completely independent techniques, and the decisions where to use each are pretty much unrelated.
Btw, this isn't template meta-programming, it's just using templates.
Edit. As for what TMP is, here's the canonical introductory example:
#include <iostream>
template<int N>
struct Factorial {
static const int value = N*Factorial<N-1>::value;
};
template<>
struct Factorial<0> {
static const int value = 1;
};
int main() {
std::cout << "12! = " << Factorial<12>::value << "\n";
}
Observe that 12! has been calculated by the compiler, and is a compile-time constant. This is exciting because it turns out that the C++ template system is a Turing-complete programming language, which the C preprocessor is not. Subject to resource limits, you can do arbitrary computations at compile time, avoiding runtime overhead in situations where you know the inputs at compile time. Templates can manipulate their template parameters like a functional language, and template parameters can be integers or types. Or functions, although those can't be "called" at compile time. Or other templates, although those can't be "returned" as static members of a struct.
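And here is a small sketch of a metafunction that manipulates types rather than integers (essentially a hand-rolled version of what std::conditional later standardized):
// Select one of two types at compile time.
template<bool Condition, typename IfTrue, typename IfFalse>
struct Select { typedef IfTrue type; };

template<typename IfTrue, typename IfFalse>
struct Select<false, IfTrue, IfFalse> { typedef IfFalse type; };

// Computed entirely by the compiler:
typedef Select<(sizeof(void*) > sizeof(int)), long, int>::type IndexType;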
I find templates and polymorphism work well together. In your example, if the client code doesn't care what template parameters Interpolator is using, then introduce an abstract base class which the template subclasses. E.g.:
class Interpolator
{
public:
virtual Value GetValue (const double) = 0;
};
template<class TCacheStrategy, class TDataSource>
class InterpolatorImpl : public Interpolator
{
public:
InterpolatorImpl ();
Value GetValue(const double);
};
void main()
{
int param = 1;
Interpolator* interpolator = 0;
if (param==1)
interpolator = new InterpolatorImpl<InMemoryStrategy,TextFileDataSource> ();
else if (param==2)
interpolator = new InterpolatorImpl<InMemoryStrategy,OdbcDataSource> ();
else if (param==3)
interpolator = new InterpolatorImpl<NoCachingStrategy,RestDataSource> ();
while (true)
{
double input = WaitForRequest();
SendRequest( interpolator->GetValue (input));
}
}
I use this idiom quite a lot. It quite nicely hides the templatey stuff from client code.
Note, I'm not sure this use of templates really counts as "meta-programming" though. I usually reserve that grandiose term for the use of more sophisticated compile-time template tricks, especially the use of conditionals, recursive definitions etc. to effectively compute stuff at compile time.
Templates are sometimes called static (or compile-time) polymorphism, so yes, they can sometimes be used instead of OOP (dynamic) polymorphism. Of course, it requires the types to be determined at compile-time, rather than runtime, so it can't completely replace dynamic polymorphism.
What I often find I want to do is have something like TemplateType<?, ?> x = new TemplateType<X, Y>() where x doesn't care what X, Y is.
Yeah, that's not possible. You have to do something similar to what you have with the DoIt() function. Often, I think that ends up a cleaner solution anyway (you end up with smaller functions that do just one thing each -- usually a good thing). But if the types are only determined at runtime (as with i in the OOP version of your main function), then templates won't work.
But in this case, I think your template version solves the problem well, and is a nice solution in its own right. (Although as onebyone mentions, it does mean code gets instantiated for all three templates, which might in some cases be a problem.)

What is a good design to use external class on member functions?

I have the following design problem and am seeking the most elegant and, even more important, most efficient solution, as this problem comes from a context where performance is an issue.
Simply put, I have a class "Function_processor" that does some calculations for real functions (e.g. calculates the roots of a real function) and I have another class "A" that has different such functions and needs to use the Function_processor to perform calculations on them.
The Function_processor should be as generic as possible (e.g. do not provide interfaces for all sorts of different objects), but merely stick to its own task (do calculations for any functions).
#include "function_processor.h"
class A {
double a;
public:
A(double a) : a(a) {}
double function1(double x) {
return a*x;
}
double function2(double x){
return a*x*x;
}
double calculate_sth() {
Function_processor function_processor(3*a+1, 7);
return function_processor.do_sth(&function1);
}
};
class Function_processor {
double p1, p2;
public:
Function_processor(double parameter1, double parameter2);
double do_sth(double (*function)(double));
double do_sth_else(double (*function)(double));
};
Clearly I can not pass the member functions A::function1/2 as in the example above (I know that, but this is roughly what I would consider readable code).
Also I can not make function1/2 static because they use the non-static member a.
I am sure I could use sth like std::bind or templates (even though I have hardly any experience with these things) but then I am mostly concerned about the performance I would get.
What is the best (nice code and fast performance) solution to my problem ?
Thanks for your help !
This is not really the best way to do this, either from a pure OO point of view or a functional or procedural POV. First of all, your class A is really nothing more than a namespace that has to be instantiated. Personally, I'd just put its functions as free floating C-style ones - maybe in a namespace somewhere so that you get some kind of classification.
Here's how you'd do it in pure OO:
class Function
{
public:
    virtual double Execute(double value) = 0;
};

class Function1 : public Function
{
public:
    virtual double Execute(double value) { ... }
};

class FunctionProcessor
{
public:
    void Process(Function & f)
    {
        ...
    }
};
This way, you could instantiate Function1 and FunctionProcessor and send the Function1 object to the Process method. You could derive anything from Function and pass it to Process.
A similar, but more generic way to do it is to use templates:
template <class T>
class FunctionProcessor
{
public:
    void Process(T & function)
    {
        ...
    }
};
You can pass anything at all as T, but in this case, T becomes a compile-time dependency, so you have to pass it in code. No dynamic stuff allowed here!
Here's another templated mechanism, this time using simple functions instead of classes:
template <class T>
void Process(T & function)
{
...
double v1 = function(x1);
double v2 = function(x2);
...
}
You can call this thing like this:
double function1(double val)
{
return blah;
}
struct function2
{
double operator()(double val) { return blah; }
};
// somewhere else
Process(function1);
Process(function2());
You can use this approach with anything that can be called with the right signature; simple functions, static methods in classes, functors (like struct function2 above), std::mem_fun objects, new-fangled c++11 lambdas,... And if you use functors, you can pass them parameters in the constructor, just like any object.
That last is probably what I'd do; it's the fastest, if you know what you're calling at compile time, and the simplest while reading the client code. If it has to be extremely loosely coupled for some reason, I'd go with the first class-based approach. I personally think that circumstance is quite rare, especially as you describe the problem.
If you still want to use your class A, make all the functions static if they don't need member access. Otherwise, look at std::mem_fun. I still discourage this approach.
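To make that last point concrete, here is a sketch (hypothetical names) of a stateful functor: it carries the data that the question's member functions need, so nothing has to be static:
struct ScaledLine {                 // plays the role of A::function1, but carries its own a
    explicit ScaledLine(double a) : a_(a) {}
    double operator()(double x) const { return a_ * x; }
private:
    double a_;
};

// It can then be handed to the templated Process() above:
//   ScaledLine f(2.0);
//   Process(f);   // Process calls f(x1), f(x2), ... with no virtual dispatch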
If I understood correctly, what you're searching for seems to be pointers to member functions:
double do_sth(double (A::*function)(double));
For calling, you would however also need an object of class A. You could also pass that into function_processor in the constructor.
Not sure about the performance of this, though.
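A rough sketch of how that might look, reusing the names from the question (illustrative only, not necessarily how the asker would write it):
class Function_processor {
    double p1, p2;
public:
    Function_processor(double parameter1, double parameter2) : p1(parameter1), p2(parameter2) {}
    double do_sth(A& obj, double (A::*function)(double)) {
        return (obj.*function)(p1);   // invoke the member function on the supplied object
    }
};

// Inside A::calculate_sth():
//   Function_processor function_processor(3*a+1, 7);
//   return function_processor.do_sth(*this, &A::function1);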

What is the motivation behind static polymorphism in C++?

I understand the mechanics of static polymorphism using the Curiously Recurring Template Pattern. I just do not understand what is it good for.
The declared motivation is:
We sacrifice some flexibility of dynamic polymorphism for speed.
But why bother with something as complicated as:
template <class Derived>
class Base
{
public:
void interface()
{
// ...
static_cast<Derived*>(this)->implementation();
// ...
}
};
class Derived : Base<Derived>
{
private:
void implementation();
};
When you can just do:
class Base
{
public:
void interface();
}
class Derived : public Base
{
public:
void interface();
}
My best guess is that there is no semantic difference in the code and that it is just a matter of good C++ style.
Herb Sutter wrote in Exceptional C++ style: Chapter 18 that:
Prefer to make virtual functions private.
Accompanied of course with a thorough explanation why this is good style.
In the context of this guideline the first example is good, because:
The void implementation() function in the example can pretend to be virtual, since it is here to perform customization of the class. It therefore should be private.
And the second example is bad, since:
We should not meddle with the public interface to perform customization.
My question is:
What am I missing about static polymorphism? Is it all about good C++ style?
When should it be used? What are some guidelines?
What am I missing about static polymorphism? Is it all about good C++ style?
Static polymorphism and runtime polymorphism are different things and accomplish different goals. They are both technically polymorphism, in that they decide which piece of code to execute based on the type of something. Runtime polymorphism defers binding the type of something (and thus the code that runs) until runtime, while static polymorphism is completely resolved at compile time.
This results in pros and cons for each. For instance, static polymorphism can check assumptions at compile time, or select among options which would not compile otherwise. It also provides tons of information to the compiler and optimizer, which can inline knowing fully the target of calls and other information. But static polymorphism requires that implementations be available for the compiler to inspect in each translation unit, can result in binary code size bloat (templates are fancy pants copy paste), and doesn't allow these determinations to occur at runtime.
For instance, consider something like std::advance:
template<typename Iterator>
void advance(Iterator& it, ptrdiff_t offset)
{
// If it is a random access iterator:
// it += offset;
// If it is a bidirectional iterator:
// for (; offset < 0; ++offset) --it;
// for (; offset > 0; --offset) ++it;
// Otherwise:
// for (; offset > 0; --offset) ++it;
}
There's no way to get this to compile using runtime polymorphism. You have to make the decision at compile time. (Typically you would do this with tag dispatch e.g.)
template<typename Iterator>
void advance_impl(Iterator& it, ptrdiff_t offset, random_access_iterator_tag)
{
// Won't compile for bidirectional iterators!
it += offset;
}
template<typename Iterator>
void advance_impl(Iterator& it, ptrdiff_t offset, bidirectional_iterator_tag)
{
// Works for random access, but slow
for (; offset < 0; ++offset) --it; // Won't compile for forward iterators
for (; offset > 0; --offset) ++it;
}
template<typename Iterator>
void advance_impl(Iterator& it, ptrdiff_t offset, forward_iterator_tag)
{
// Doesn't allow negative indices! But works for forward iterators...
for (; offset > 0; --offset) ++it;
}
template<typename Iterator>
void advance(Iterator& it, ptrdiff_t offset)
{
// Use overloading to select the right one!
advance_impl(it, offset, typename iterator_traits<Iterator>::iterator_category());
}
Similarly, there are cases where you really don't know the type at compile time. Consider:
void DoAndLog(std::ostream& out, int parameter)
{
out << "Logging!";
}
Here, DoAndLog doesn't know anything about the actual ostream implementation it gets -- and it may be impossible to statically determine what type will be passed in. Sure, this can be turned into a template:
template<typename StreamT>
void DoAndLog(StreamT& out, int parameter)
{
out << "Logging!";
}
But this forces DoAndLog to be implemented in a header file, which may be impractical. It also requires that all possible implementations of StreamT are visible at compile time, which may not be true -- runtime polymorphism can work (although this is not recommended) across DLL or SO boundaries.
When should it be used? What are some guidelines?
This is like someone coming to you and saying "when I'm writing a sentence, should I use compound sentences or simple sentences"? Or perhaps a painter saying "should I always use red paint or blue paint?" There is no right answer, and there is no set of rules that can be blindly followed here. You have to look at the pros and cons of each approach, and decide which best maps to your particular problem domain.
As for the CRTP, most use cases for that are to allow the base class to provide something in terms of the derived class; e.g. Boost's iterator_facade. The base class needs to have things like DerivedClass operator++() { /* Increment and return *this */ } inside -- specified in terms of the derived class in the member function signatures.
It can be used for polymorphic purposes, but I haven't seen too many of those.
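A minimal sketch of that iterator_facade-style idea, with invented names; the base class hands back the derived type from an operation it implements:
template <typename Derived>
class IncrementFacade {
public:
    Derived next() const {                             // note: returns Derived, not the base
        Derived tmp = static_cast<const Derived&>(*this);
        tmp.increment();                               // Derived must provide increment()
        return tmp;
    }
};

class Counter : public IncrementFacade<Counter> {
public:
    Counter() : value_(0) {}
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_;
};
// Counter c;  Counter d = c.next();   // d.value() == 1, and d really is a Counter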
The link you provide mentions boost iterators as an example of static polymorphism. STL iterators also exhibit this pattern. Let's take a look at an example and consider why the authors of those types decided this pattern was appropriate:
#include <vector>
#include <iostream>
using namespace std;
void print_ints( vector<int> const& some_ints )
{
for( vector<int>::const_iterator i = some_ints.begin(), end = some_ints.end(); i != end; ++i )
{
cout << *i;
}
}
Now, how would we implement int vector<int>::const_iterator::operator*() const? Can we use polymorphism for this? Well, no. What would the signature of our virtual function be? void const* operator*() const? That's useless! The type has been erased (degraded from int to void*). Instead, the curiously recurring template pattern steps in to help us generate the iterator type. Here is a rough approximation of the iterator class we would need to implement the above:
template<typename T>
class const_iterator_base
{
public:
    const_iterator_base() {}
    typename T::contained_type const& operator*() const { return *Ptr(); }
    typename T::contained_type const* operator->() const { return Ptr(); }
    // increment, decrement, etc, can be implemented and forwarded to T
    // ....
private:
    typename T::contained_type const* Ptr() const { return static_cast<T const*>(this)->Ptr(); }
};
Traditional dynamic polymorphism could not provide the above implementation!
A related and important term is parametric polymorphism. This allows you to implement similar APIs in, say, Python to the ones you can build using the curiously recurring template pattern in C++. Hope this is helpful!
I think it's worth taking a stab at the source of all this complexity, and why languages like Java and C# mostly try to avoid it: type erasure! In C++ there is no all-containing Object type with useful information. Instead we have void*, and once you have void* you truly have nothing! If you have an interface that decays to void*, the only way to recover is by making dangerous assumptions or keeping extra type information around.
While there may be cases where static polymorphism is useful (the other answers have listed a few), I would generally see it as a bad thing. Why? Because you cannot actually use a pointer to the base class anymore, you always have to provide a template argument providing the exact derived type. And in that case, you could just as well use the derived type directly. And, to put it bluntly, static polymorphism is not what object orientation is about.
The runtime difference between static and dynamic polymorphism is exactly two pointer dereferences (if the compiler really inlines the dispatch method in the base class; if it doesn't for some reason, static polymorphism is slower). That's not really expensive, especially since the second lookup should virtually always hit the cache. All in all, those lookups are usually cheaper than the function call itself, and are certainly worth it to get the real flexibility provided by dynamic polymorphism.

Virtual Methods or Function Pointers

When implementing polymorphic behavior in C++ one can either use a pure virtual method or one can use function pointers (or functors). For example an asynchronous callback can be implemented by:
Approach 1
class Callback
{
public:
Callback();
~Callback();
void go();
protected:
virtual void doGo() = 0;
};
//Constructor and Destructor
void Callback::go()
{
doGo();
}
So to use the callback here, you would need to override the doGo() method to call whatever function you want
Approach 2
typedef void (*CallbackFunction)(void*);
class Callback
{
public:
    Callback(CallbackFunction func, void* param);
    ~Callback();
    void go();
private:
    CallbackFunction iFunc;
    void* iParam;
};
Callback::Callback(CallbackFunction func, void* param) :
    iFunc(func),
    iParam(param)
{}
//Destructor
void Callback::go()
{
    (*iFunc)(iParam);
}
To use the callback method here you will need to create a function pointer to be called by the Callback object.
Approach 3
[This was added to the question by me (Andreas); it wasn't written by the original poster]
#include <iostream>

template <typename T>
class Callback
{
public:
    Callback() {}
    ~Callback() {}
    void go() {
        T t; t();
    }
};

class CallbackTest
{
public:
    void operator()() { std::cout << "Test"; }
};

int main()
{
    Callback<CallbackTest> test;
    test.go();
}
What are the advantages and disadvantages of each implementation?
Approach 1 (Virtual Function)
"+" The "correct way to do it in C++
"-" A new class must be created per callback
"-" Performance-wise an additional dereference through VF-Table compared to Function Pointer. Two indirect references compared to Functor solution.
Approach 2 (Class with Function Pointer)
"+" Can wrap a C-style function for C++ Callback Class
"+" Callback function can be changed after callback object is created
"-" Requires an indirect call. May be slower than functor method for callbacks that can be statically computed at compile-time.
Approach 3 (Class calling T functor)
"+" Possibly the fastest way to do it. No indirect call overhead and may be inlined completely.
"-" Requires an additional Functor class to be defined.
"-" Requires that callback is statically declared at compile-time.
FWIW, Function Pointers are not the same as Functors. Functors (in C++) are classes that provide a function call operator, operator(), so their instances can be called like functions.
Here is an example functor as well as a template function which utilizes a functor argument:
#include <cstdio>

class TFunctor
{
public:
    void operator()(const char *charstring)
    {
        printf(charstring);
    }
};

template<class T> void CallFunctor(T& functor_arg, const char *charstring)
{
    functor_arg(charstring);
}

int main()
{
    TFunctor foo;
    CallFunctor(foo, "hello world\n");
}
From a performance perspective, virtual functions and function pointers both result in an indirect function call (i.e. through a register), although virtual functions require an additional load of the VFTABLE pointer prior to loading the function pointer. Using functors (with a non-virtual call) as callbacks passed as a parameter to template functions is the highest-performing method, because they can be inlined and, even if not inlined, do not generate an indirect call.
Approach 1
Easier to read and understand
Less possibility of errors (iFunc cannot be NULL, you're not using a void *iParam, etc.)
C++ programmers will tell you that this is the "right" way to do it in C++
Approach 2
Slightly less typing to do
VERY slightly faster (calling a virtual method has some overhead, usually the same as two simple arithmetic operations, so it most likely won't matter)
That's how you would do it in C
Approach 3
Probably the best way to do it when possible. It will have the best performance, it will be type safe, and it's easy to understand (it's the method used by the STL).
The primary problem with Approach 2 is that it simply doesn't scale. Consider the equivalent for 100 functions:
class MahClass {
    // 100 pointers of various types
public:
    MahClass() { /* set all 100 pointers */ }
    MahClass(const MahClass& other) {
        // copy all 100 function pointers
    }
};
The size of MahClass has ballooned, and the time to construct it has also significantly increased. Virtual functions, however, are O(1) increase in the size of the class and the time to construct it- not to mention that you, the user, must write all the callbacks for all the derived classes manually which adjust the pointer to become a pointer to derived, and must specify function pointer types and what a mess. Not to mention the idea that you might forget one, or set it to NULL or something equally stupid but totally going to happen because you're writing 30 classes this way and violating DRY like a parasitic wasp violates a caterpillar.
Approach 3 is only usable when the desired callback is statically knowable.
This leaves Approach 1 as the only usable approach when dynamic method invocation is required.
It's not clear from your example if you're creating a utility class or not. Is your Callback class intended to implement a closure or a more substantial object that you just didn't flesh out?
The first form:
Is easier to read and understand,
Is far easier to extend: try adding methods pause, resume and stop.
Is better at handling encapsulation (presuming doGo is defined in the class).
Is probably a better abstraction, so easier to maintain.
The second form:
Can be used with different methods for doGo, so it's more than just polymorphic.
Could allow (with additional methods) changing the doGo method at run-time, allowing the instances of the object to mutate their functionality after creation.
Ultimately, IMO, the first form is better for all normal cases. The second has some interesting capabilities, though -- but not ones you'll need often.
One major advantage of the first method is it has more type safety. The second method uses a void * for iParam so the compiler will not be able to diagnose type problems.
A minor advantage of the second method is that it would be less work to integrate with C. But if your code base is only C++, this advantage is moot.
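To make the type-safety point above concrete (names invented): with a void* parameter the compiler cannot check what iParam really points to.
void onTimer(void* param) {
    int* count = static_cast<int*>(param);   // nothing stops param from actually being a double*
    ++*count;
}

// double interval = 0.5;
// Callback cb(&onTimer, &interval);   // compiles cleanly, silently corrupts 'interval' at runtime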
Function pointers are more C-style I would say. Mainly because in order to use them you usually must define a flat function with the same exact signature as your pointer definition.
When I write C++ the only flat function I write is int main(). Everything else is a class object. Out of the two choices I would choose to define a class and override your virtual, but if all you want is to notify some code that some action happened in your class, neither of these choices would be the best solution.
I am unaware of your exact situation, but you might want to peruse design patterns.
I would suggest the observer pattern. It is what I use when I need to monitor a class or wait for some sort of notification.
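A bare-bones sketch of that observer idea (all names invented):
#include <cstddef>
#include <vector>

struct Observer {
    virtual ~Observer() {}
    virtual void notify() = 0;            // called when the watched object changes
};

class Subject {
public:
    void attach(Observer* o) { observers_.push_back(o); }
    void somethingHappened() {            // the event the callers want to hear about
        for (std::size_t i = 0; i < observers_.size(); ++i)
            observers_[i]->notify();
    }
private:
    std::vector<Observer*> observers_;
};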
For example, let us look at an interface for adding read functionality to a class:
struct Read_Via_Inheritance
{
virtual void read_members(void) = 0;
};
Any time I want to add another source of reading, I have to inherit from the class and add a specific method:
struct Read_Inherited_From_Cin
: public Read_Via_Inheritance
{
void read_members(void)
{
cin >> member;
}
};
If I want to read from a file, database, or USB, this requires 3 more separate classes. The combinations start to become very ugly with multiple objects and multiple sources.
If I use a functor, which happens to resemble the Visitor design pattern:
struct Reader_Visitor_Interface
{
virtual void read(unsigned int& member) = 0;
virtual void read(std::string& member) = 0;
};
struct Read_Client
{
    void read_members(Reader_Visitor_Interface & reader)
    {
        reader.read(x);
        reader.read(text);
        return;
    }
    unsigned int x;
    std::string text;
};
With the above foundation, objects can read from different sources just by supplying different readers to the read_members method:
struct Read_From_Cin
: Reader_Visitor_Interface
{
void read(unsigned int& value)
{
cin>>value;
}
void read(std::string& value)
{
getline(cin, value);
}
};
I don't have to change any of the object's code (a good thing because it is already working). I can also apply the reader to other objects.
Generally, I use inheritance when I am performing generic programming. For example, if I have a Field class, then I can create Field_Boolean, Field_Text and Field_Integer. I can put pointers to their instances into a vector<Field *> and call it a record. The record can perform generic operations on the fields, and doesn't care or know what kind of a field is processed.
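A quick sketch of that Field/record idea (hypothetical names, with print as the one generic operation):
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Field {
    virtual ~Field() {}
    virtual void print() const = 0;        // one generic operation a record can run on any field
};

struct Field_Integer : Field {
    explicit Field_Integer(int v) : value(v) {}
    void print() const { std::cout << value << '\n'; }
    int value;
};

struct Field_Text : Field {
    explicit Field_Text(const std::string& v) : value(v) {}
    void print() const { std::cout << value << '\n'; }
    std::string value;
};

typedef std::vector<Field*> Record;        // the record neither knows nor cares which kinds it holds

void print_record(const Record& r) {
    for (std::size_t i = 0; i < r.size(); ++i) r[i]->print();
}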
Change to pure virtual, first off. Then inline it. That should negate any method call overhead at all, so long as inlining doesn't fail (and it won't if you force it).
Otherwise you may as well use C, because this is about the only really useful major feature of C++ compared to C. With a plain function pointer you will always call the method indirectly and it can't be inlined, so it will be less efficient.

Nesting C++ Template Definitions

I'm abusing C++ templates a little and I'm having trouble figuring something out. Let's say I have two types that really should be inherited from a base type, but for speed reasons, I can't afford to have the virtual function overhead (I've benchmarked it, and virtual calls ruin things for me!).
First, here are the two classes I have
template<class DataType> class Class1
{
    //Lots of stuff here
};
template<class DataType> class Class2
{
    //The same stuff as in Class1, but implemented differently
};
In a typical oo design, Class1 and Class2 would inherit from IInterface and I could have a function that looks like this
void DoStuff(IInterface& MyInterface)
{
}
But I can't do that, so I've done this
template <class C>
void DoStuff(C& c)
{
}
I know it's not pretty, as there's nothing (at the compiler level) to enforce that Class1 and Class2 implement the same interface, but for speed reasons, I'm breaking some of the rules.
What I'd love to do is create a callback function on DoStuff, but I can't figure out how to make it work with the templates (especially since there's the hidden <class DataType> in there).
For example, this works right now:
void DoStuff(char* filename)
{
    switch (/*figure out the type I need to make*/)
    {
    case 1: return DoStuff(Class1<int>(filename));
    case 2: return DoStuff(Class1<double>(filename));
    }
}
template<class DataType>
void DoStuff(DataType* pdata)
{
    return DoStuff(Class2<DataType>(pdata));
}
template<class C>
void DoStuff(C c)
{
    c.Print();
}
Now I know you're asking, why use Class1 and Class2? Well the underlying difference between dealing with a file and dealing with memory is so big, that it makes sense to have different classes for the different type of input (rather than just overloading the constructor and having it behave differently for the different inputs). Again, I did benchmark this and it's much faster to have the special cases handled in their own classes rather than having cases/ifs in every function.
So what I'd like to do is hide a lot of this implementation from the junior developers, I don't want them to have to create three different overloaded DoStuffs to handle the different inputs. Ideally, I'd just set up some type of callback with #defines and all they'd need to do is something like create a class called DoStuff and overload the () operator and have the functor do the work.
The trouble I'm having is that the DoStuff function that does the work is only templatized by <class C>, but C itself is templatized by <class DataType>, and I can't figure out how to pass everything around in a generic way. E.g., I cannot use template <class C<DataType>> or template<template< class DataType> class C>. It just won't compile.
Does anyone have a good trick to have a generic call back, either a function or a functor (I don't care), with this nested templated class? Basically I want something where I can write a generic function that doesn't care about the class that's storing the data and have that called by a mostly common function that figures out which class to use.
BigSwitch(CallBack,Inputs)
{
switch(//something)
{
case 1: return CallBack(Class1<Type>(Inputs))
case 2: return CallBack(Class2<Type>(Inputs))
}
}
This way I can write one BigSwitch function and have other people write the CallBack functions.
Any Ideas?
EDIT for clarification for Jalf:
I have two very similar classes, Class1 and Class2, which represent basically the same type of data; however, the data store is vastly different. To make it more concrete, I'll use a simple example: Class1 is a simple array and Class2 looks like an array, however rather than storing in memory it stores in a file (because it's too big to fit in memory). So I'll call them MemArray and FileArray right now. So let's say I wanted the Sum of the arrays. I can do something like this
template <class ArrayType, class ReturnType>
ReturnType Sum(ArrayType A)
{
ReturnType S=0;
for (int i=A.begin();i<A.end();++i)
{
S+=A[i];
}
return S;
}
But now, I need a way to load real data into the array. If it's a memory-based array, I'd do this
MemArray<DataType> M(pData);
and if it's file-based, I'd do this
FileArray<DataType> F(filename);
and both of these calls are valid (because the compiler generates both code paths at compile time)
double MS=Sum<MemArray<DataType>,double>(M);
double FS=Sum<FileArray<DataType>,double>(F);
All of this assumes that I know what the DataType is, but for a file based array, I may not know the data type until I open the file and query the header to know what kind of data is in the array.
double GetSum(char* filename)
{
int DataTypeCode=GetDataTypeCode(filename);
switch (DataTypeCode)
{
case 1: return Sum<FileArray<int>,double>(FileArray<int>(filename));
case 2: return Sum<FileArray<double>,double>(FileArray<double>(filename));
}
}
template <class DataType>
double GetSum(DataType* pData)
{
return Sum<MemArray<DataType>,double>(MemArray<DataType>(pData));
}
All of this works, but it requires writing two overloaded GetX functions and an X function for everything that I'd want to do. The GetX functions are basically the same code every time except for the X that it calls. So I'd love to be able to write something like
double GetX(CallBackType X, char* filename)
{
int DataTypeCode=GetDataTypeCode(filename);
switch (DataTypeCode)
{
case 1: return X<FileArray<int>,double>(FileArray<int>(filename));
case 2: return X<FileArray<double>,double>(FileArray<double>(filename));
}
}
template <class DataType>
double GetX(CallBackType, DataType* pData)
{
return X<MemArray<DataType>,double>(MemArray<DataType>(pData));
}
so that I could call
GetX(Sum,filename)
then later when someone else wants to add a new function, all they need to do is write the function and call
GetX(NewFunction,filename)
I'm just looking for a way to write my overloaded GetX functions and my X functions so that I can abstract away the input/storage from the actual algorithms. Normally, this isn't a hard problem, it's just that I'm having trouble because the X function contains a template argument that itself is templated. The template<class ArrayType> also has an implicit ArrayType<DataType> hidden in there. The compiler is unhappy about that.
Focusing on the initial part of your question (why you're not just using inheritance):
A common way to do compile-time polymorphism and give access to the derived class' members through the base class is through the CRTP pattern.
template <typename T>
class IInterface {
public:
    void DoStuff() {
        static_cast<T*>(this)->DoStuff();
    }
};
class Class1 : public IInterface<Class1> {
public:
    void DoStuff(){...}
};
Would that solve your problem?
Edit:
By the way, I'm glad I could help, but next time please try to structure your question a bit more.
I really had no clue what you were asking, so this was just a stab in the dark, based on the first 3 lines of your question. ;)
You never really explain what you're trying to achieve, only what your non-functioning workaround looks like. Start out stating the problem, since that's what we really need to know. Then you can provide details about your current workarounds. And when posting code, add some context. Where are DoStuff() called from, and why would junior developers need to define them? (You've already done that, haven't you?)
What would said junior developers be doing with this code in the first place?
And it's confusing that you provide the specific cases (1 and 2), but not the switch statement itself (//something)
You'll get a lot more (and better and faster) answers next time if you try to make it easy for the person answering. :)
As to your question about a "generalized callback": you can use a boost::function, but that essentially uses virtual functions under the covers (it may not literally, but at least a similar concept), so the performance difference you are looking for won't be there (in fact boost::function will probably be slower because of heap allocation).
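One alternative sketch that avoids that type erasure entirely, assuming the asker's FileArray template and GetDataTypeCode helper exist as described in the question: make the callback a functor whose operator() is itself a template, so a single GetX can instantiate it with whatever array type the switch picks.
struct SumX {                                  // the "X" algorithm, written once
    template <class ArrayType>
    double operator()(ArrayType A) const {
        double S = 0;
        for (int i = A.begin(); i < A.end(); ++i) S += A[i];
        return S;
    }
};

template <class CallbackType>
double GetX(CallbackType X, char* filename) {
    int DataTypeCode = GetDataTypeCode(filename);
    switch (DataTypeCode) {
        case 1: return X(FileArray<int>(filename));      // operator()<FileArray<int>> deduced here
        case 2: return X(FileArray<double>(filename));   // operator()<FileArray<double>> deduced here
    }
    return 0;
}
// Usage: double s = GetX(SumX(), filename);
// Adding a new algorithm is just another functor: GetX(NewFunctionX(), filename);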