I am trying to get my head around applying template programming (and at some future point, template metaprogramming) to real-world scenarios. One problem I am finding is that C++ Templates and Polymorphism don't always play together the way I want.
My question is if the way I'm trying to apply template programming is improper (and I should use plain old OOP) or if I'm still stuck in the OOP mindset.
In this particular case, I am trying to solve a problem using the strategy-pattern. I keep running into the problem where I end up wanting something to behave polymorphically which templates don't seem to support.
OOP Code using composition:
class Interpolator {
public:
Interpolator(ICacheStrategy* const c, IDataSource* const d);
Value GetValue(const double);
};
void main(...) {
Interpolator* i;
if (param == 1)
i = new Interpolator(new InMemoryStrategy(...), new TextFileDataSource(...));
else if (param == 2)
i = new Interpolator(new InMemoryStrategy(...), new OdbcDataSource(...));
else if (param == 3)
i = new Interpolator(new NoCachingStrategy(...), new RestDataSource(...));
while (run) {
double input = WaitForRequest();
SendRequest(i->GetValue(input));
}
}
Potential Template Version:
template<class TCacheStrategy, class TDataSource>
class Interpolator {
public:
Interpolator();
Value GetValue(const double); // may not be the best way, but
void ConfigCache(const ConfigObject&); // just to illustrate that the Cache/DS
void ConfigDataSource(const ConfigObject&); // need to be configured
};
//Possible way of doing main?
void main(...) {
if(param == 1)
DoIt(Interpolator<InMemoryStrategy, TextFileDataSource>(), c, d);
else if(param == 2)
DoIt(Interpolator<InMemoryStrategy, OdbcDataSource>(), c, d);
else if(param == 3)
DoIt(Interpolator<NoCachingStrategy, RestDataSource>(), c, d);
}
template<class T>
void DoIt(T t, ConfigObject c, ConfigObject d) {
t.ConfigCache(c);
t.ConfigDataSource(d);
while(run) {
double input = WaitForRequest();
SendRequest(t.GetValue(input));
}
}
When I try to convert the OOP implementation to a template-based implementation, the Interpolator code can be translated without a lot of pain. Basically, replace the "interfaces" with Template type parameters, and add a mechanism to either pass in an instance of Strategy/DataSource or configuration parameters.
But when I get down to the "main", it's not clear to me how that should be written to take advantage of templates in the style of template meta programming. I often want to use polymorphism, but it doesn't seem to play well with templates (at times, it feels like I need Java's type-erasure generics... ugh).
What I often find I want to do is have something like TemplateType<?, ?> x = new TemplateType<X, Y>(), where x doesn't care what X and Y are.
In fact, this is often my problem when using templates.
Do I need to apply one more level of templates? Am I trying to use my shiny new power template wrench to install an OOP nail into a PCI slot? Or am I just thinking about this all wrong when it comes to template programming?
[Edit] A few folks have pointed out this is not actually template metaprogramming so I've reworded the question slightly. Perhaps that's part of the problem--I have yet to grok what TMP really is.
Templates provide static polymorphism: you specify a template parameter at compile time implementing the strategy. They don't provide dynamic polymorphism, where you supply an object at runtime with virtual member functions that implement the strategy.
Your example template code will create three different classes, each of which contains all the Interpolator code, compiled using different template parameters and possibly inlining code from them. That probably isn't what you want from the POV of code size, although there's nothing categorically wrong with it. Supposing that you were optimising to avoid function call overhead, then it might be an improvement on dynamic polymorphism. More likely it's overkill. If you want to use the strategy pattern dynamically, then you don't need templates, just make virtual calls where relevant.
You can't have a variable of type MyTemplate<?> (except appearing in another template before it's instantiated). MyTemplate<X> and MyTemplate<Y> are completely unrelated classes (even if X and Y are related), which perhaps just so happen to have similar functions if they're instantiated from the same template (which they needn't be - one might be a specialisation). Even if they are, if the template parameter is involved in the signatures of any of the member functions, then those functions aren't the same, they just have the same names. So from the POV of dynamic polymorphism, instances of the same template are in the same position as any two classes - they can only play if you give them a common base class with some virtual member functions.
So, you could define a common base class:
class InterpolatorInterface {
public:
virtual Value GetValue(const double) = 0;
virtual void ConfigCache(const ConfigObject&) = 0;
virtual void ConfigDataSource(const ConfigObject&) = 0;
virtual ~InterpolatorInterface() {}
};
Then:
template <typename TCacheStrategy, typename TDataSource>
class Interpolator: public InterpolatorInterface {
...
};
Now you're using templates to create your different kinds of Interpolator according to what's known at compile time (so calls from the interpolator to the strategies are non-virtual), and you're using dynamic polymorphism to treat them the same even though you don't know until runtime which one you want (so calls from the client to the interpolator are virtual). You just have to remember that the two are pretty much completely independent techniques, and the decisions where to use each are pretty much unrelated.
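For instance, the runtime selection might then look something like this (just a sketch using the types from your example; MakeInterpolator is only an illustrative name, and ownership handling is omitted):
// The concrete Interpolator is chosen once at runtime; after that the client
// only ever talks to the virtual InterpolatorInterface.
InterpolatorInterface* MakeInterpolator(int param) {
    if (param == 1)
        return new Interpolator<InMemoryStrategy, TextFileDataSource>();
    if (param == 2)
        return new Interpolator<InMemoryStrategy, OdbcDataSource>();
    return new Interpolator<NoCachingStrategy, RestDataSource>();
}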
Btw, this isn't template meta-programming, it's just using templates.
Edit. As for what TMP is, here's the canonical introductory example:
#include <iostream>
template<int N>
struct Factorial {
static const int value = N*Factorial<N-1>::value;
};
template<>
struct Factorial<0> {
static const int value = 1;
};
int main() {
std::cout << "12! = " << Factorial<12>::value << "\n";
}
Observe that 12! has been calculated by the compiler, and is a compile-time constant. This is exciting because it turns out that the C++ template system is a Turing-complete programming language, which the C preprocessor is not. Subject to resource limits, you can do arbitrary computations at compile time, avoiding runtime overhead in situations where you know the inputs at compile time. Templates can manipulate their template parameters like a functional language, and template parameters can be integers or types. Or functions, although those can't be "called" at compile time. Or other templates, although those can't be "returned" as static members of a struct.
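Types can be manipulated the same way. Here's a minimal sketch (Select is my own name, in the spirit of what Boost and later standards provide) of a compile-time "if" that picks a type:
// Select<Condition, Then, Else>::type is Then when Condition is true, Else otherwise.
template<bool Condition, typename Then, typename Else>
struct Select { typedef Then type; };

template<typename Then, typename Else>
struct Select<false, Then, Else> { typedef Else type; };

// Usage: choose a counter type at compile time, based on the pointer size.
Select<(sizeof(void*) == 8), long, int>::type counter = 0;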
I find templates and polymorphism work well together. In your example, if the client code doesn't care what template parameters Interpolator is using, then introduce an abstract base class which the template sub-classes. E.g.:
class Interpolator
{
public:
virtual Value GetValue (const double) = 0;
};
template<class TCacheStrategy, class TDataSource>
class InterpolatorImpl : public Interpolator
{
public:
InterpolatorImpl ();
Value GetValue(const double);
};
int main()
{
int param = 1;
Interpolator* interpolator = 0;
if (param==1)
interpolator = new InterpolatorImpl<InMemoryStrategy,TextFileDataSource> ();
else if (param==2)
interpolator = new InterpolatorImpl<InMemoryStrategy,OdbcDataSource> ();
else if (param==3)
interpolator = new InterpolatorImpl<NoCachingStrategy,RestDataSource> ();
while (true)
{
double input = WaitForRequest();
SendRequest( interpolator->GetValue (input));
}
}
I use this idiom quite a lot. It quite nicely hides the templatey stuff from client code.
Note, I'm not sure this use of templates really counts as "meta-programming" though. I usually reserve that grandiose term for the use of more sophisticated compile-time template tricks, especially the use of conditionals, recursive definitions etc. to effectively compute stuff at compile time.
Templates are sometimes called static (or compile-time) polymorphism, so yes, they can sometimes be used instead of OOP (dynamic) polymorphism. Of course, it requires the types to be determined at compile-time, rather than runtime, so it can't completely replace dynamic polymorphism.
What I often find I want to do is have something like TemplateType<?, ?> x = new TemplateType<X, Y>() where x doesn't care what X and Y are.
Yeah, that's not possible. You have to do something similar to what you have with the DoIt() function. Often, I think that ends up a cleaner solution anyway (you end up with smaller functions that do just one thing each -- usually a good thing). But if the types are only determined at runtime (as with i in the OOP version of your main function), then templates won't work.
But in this case, I think your template version solves the problem well, and is a nice solution in its own right. (Although, as onebyone mentions, it does mean code gets instantiated for all three templates, which might in some cases be a problem.)
Related
This question might fall into "wanting the best of all worlds" but it is a real design problem that needs at least a better solution.
Structure needed:
In order of importance, here are the requirements that have me stuck:
We need templates, whether at the class or function level. We are highly dependent on template objects in the arguments of functions at this point. So if anything leaves the model below, it's the virtual functions (to my knowledge).
We want to decouple the call from selection. By that we want the user to declare a Math Object and have the background figure it out, preferably at runtime.
We want there to be a default, like shown in the above diagram.
In my company's program, we have a crucial algorithm generator that is dependent on both compile-time and runtime polymorphism, namely template classes and virtual inheritance. We have it working, but it is fragile, hard to read and develop and has certain features that won't work on higher optimization levels (meaning we are relying on undefined behavior somewhere). A brief outline of the code is as follows.
// Math.hpp
#include <dataTypes.hpp>
// Base class. Actually handles CPU Version of execution
template <typename T>
class Math {
// ...
// Example function. Parameters vary in type and number
// Variable names commented out to avoid compile warnings
virtual void exFunc ( DataType<T> /*d*/, float /*f*/ )
{
ERROR_NEED_CODE; // Macro defined to throw error with message
}
// 50+ other functions...
};
//============================================================
// exampleFuncs.cpp
#include<Math.hpp>
template <> void Math<float>::exFunc ( DataType<float> d, float f)
{
// Code Here.
}
Already, we can see some problems, and we haven't gotten to the main issue. Due to the sheer number of functions in this class, we don't want to define them all in the header file; template functionality is lost as a result. Second, with virtual functions in a template class, we need to define each function in the class anyway, but we just throw an error and return garbage (if a return is needed).
//============================================================
// GpuMath.hpp
#include <Math.hpp>
// Derived class. Using CUDA to resolve same math issues
class GpuMath_F : public Math<float> { ... };
The functionality here is relatively simple, but I noticed that again, we give up template features. I'm not sure it needs to be that way, but the previous developers felt constrained to declare a new class for each needed type (3 currently; multiply that by 50 or so functions, and we have a severe amount of overhead).
Finally, when functionality is needed, we use a Factory to create the right template-type object and store it in a Math pointer.
// Some other class, normally template
template <typename T>
class OtherObject {
Math<T>* math_;
OtherObject() {
math_ = Factory::get().template createMath<T> ();
// ...
}
// ...
};
The factory is omitted here. It gets messy and doesn't help us much. The point is that all versions of the Math objects are stored through the base-class pointer.
Can you point me in the right direction for other techniques that are alternative to inheritance? Am I looking for a variation of Policy Design? Is There a template trick?
Thanks for reading and thanks in advance for your input.
As has been discussed many times before, templates and virtual functions don't play well together. It is best to choose one or the other.
Approach 1 : Helper Class
The first and best option we have so far does just that, opting out of the virtual features for a wrapper class.
class MathHelper {
Math cpuMath;
GpuMath gpuMath;
bool cuda_; //True if gpuMath is wanted
template <typename T>
void exFunc ( DataType<T> d, float f )
{
if (cuda_)
gpuMath.exFunc( d, f );
else
cpuMath.exFunc( d, f );
}
// 50+ functions...
};
First, you might have noticed that the functions are templated rather than the class. It is structurally more convenient.
Pros
Gains full access to templates in both CPU and GPU classes.
Improved customization for each and every function. Choice of what is default.
Non-invasive changes to previous structure. For example, if this MathHelper was just called Math and we had CpuMath and GpuMath as the implementation, the instantiation and use can almost be the same as above, and stay exactly the same if we let Factory handle the MathHelper.
Cons
Explicit if/else and declaration of every function.
Mandatory definition of every function in MathHelper AND at least one of the other Math objects.
As a result, repeated code everywhere.
Approach 2: Macro
This one attempts to reduce the repeated code above. Somewhere, we have a Math class.
class Math {
CpuMath cpuMath;
GpuMath gpuMath;
bool cuda_; // true if the GPU path is wanted
// Some sort of constructor
static Math& math() { /*static getter*/ }
};
This math helper uses a static getter function similar to Example 1 shown here. We have a base class CpuMath that contains no virtual functions and a derived class GpuMath. Again, templating is at the function level.
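In case it isn't clear, the getter is just the usual function-local static; a sketch of its body (that the constructor is what sets cuda_ is my assumption):
// Inside Math: one lazily constructed instance, shared by every MATH() expansion.
static Math& math() {
    static Math instance; // its constructor would decide whether cuda_ is set
    return instance;
}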
Then from there, any time we want a math function we use this macro:
#define MATH(func, ...) \
do { \
if (math().cuda_) \
__VA_ARGS__ __VA_OPT__(=) math().gpuMath.func; \
else \
__VA_ARGS__ __VA_OPT__(=) math().cpuMath.func; \
} while (0)
Pros
Remove repeat code of previous wrapper.
Again, full power of templates unlocked
Cons
Not as customizable as above wrapper
Initially much more invasive. Every time a Math function is accessed, it has to change from val = math_.myFunc(...) to MATH(myFunc(...), val). Because editors don't do good error checking on macros, this has the potential to cause many errors in the editing process.
The base class must have every function the derived class has, since it is the default.
Again, any other creative ways to implement this design would be appreciated. I found this to be a fun exercise either way, and would love to continue learning from it.
I'm pushing (IMO) the limits of C++ template programming. The system is an Arduino but my attempt is applicable to any microcontroller system.
I define Pins using a template class with an 'int' parameter:
template<const int pin>
struct Pin {
Pin() { mode(pin, 0); }
};
template<const int pin>
class PinOut : public Pin<pin> {};
I can create template classes to use PinOut like:
template<typename F>
class M {
public:
M() { }
F mF;
};
M<PinOut<1>> m1;
template<int F>
class N {
public:
N() { }
Pin<F> mF;
};
N<1> n1;
But I'd like not to use templates in the classes that use PinOut. The following is illustrative of my thinking, showing a possible approach, but it clearly doesn't work.
class R {
public:
R(const int i) {
}
PinOut<i> mF; // create template instance here
};
R r1(1); // what I'd like to able to do
I recognize the problem is creating a type inside class R.
The other possibility is instantiating a PinOut variable and passing it in but again passing and creating a type inside the class is a problem. Something like this:
class S {
public:
S(PinOut<int>& p) { } // how to pass the type and instance
PinOut<p>& mF; // and use it here
};
PinOut<1> pp;
S s1(pp);
Sorry if this sounds abrupt, but please don't ask why or what I'm trying to do. This is an experiment and I'm pushing my understanding of C++, especially templates. I know there are other approaches.
Yes, any function that takes that type must itself be a template.
But is the entire family of Pin types related in a way where some things are meaningful without knowing the template argument? This can be handled with a base class that's a non-template. The base class idea is especially handy because it can contain virtual functions that do know about the template argument. This lets you switch between compile-time and run-time polymorphism on the fly as desired. Taken to an extreme, that becomes the weaker idea, with the same syntax, of "Generics" as seen in Java and .NET.
More generally, this is a concept known as type erasure. You might search for that term to find out more. It is designed into libraries in order to keep common code common and prevent gratuitous multiplication of the same passage though multiple instantiations.
In your case, pin is a non-type argument, which is something Generics don't even do. But it may not really affect the type much at all: what about the members changes depending on pin? It might be an array bound, or a compile-time constant used to provide compile-time knowledge and optimization, or be there for the sole purpose of making the type distinct.
All of these cases are things that can be dealt with at run-time, too. If it's there for the sole purpose of making the type distinct (e.g. making the compiler check that you pass time values and distance values to the correct parameters), then the real guts are all in a base class that omits the distinctiveness.
If it's an array bound or other type difference that can be managed at run-time, then again the base class or an adapter/proxy can do it at run-time. More generally a compile-time constant that doesn't affect the class layout can be known at run-time with the same effect, just less optimization.
From your example, since it is sensible to make the pin a constructor argument, the class could be implemented in the normal way with run-time configuration. Why is it a template? Presumably for compile-time checking, to keep separate things separate. That doesn't cause them to work in different ways, so you want that compile-time part to be optional. So, this is a case where a base class does the trick:
class AnyPin
{
public:
AnyPin (int pin); // run-time configuration
};
template <int pin>
class Pin : public AnyPin { /* ... */ };
Now you can write functions that take AnyPin, or write functions that take Pin<5> and get compile-time checking.
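For example (a sketch; the function names are just illustrative):
void configure(AnyPin& pin);          // accepts any pin; the number is only known at run time
void configureStatusLed(Pin<5>& pin); // accepts only pin 5; passing anything else is a compile error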
So just what does pin do to the class, in terms of its layout and functionality? Does it do anything that makes it unacceptable to just implement it as a run-time constructor value?
You ask that we don't inquire as to what you're trying to do, but I must say that templates have certain features and benefits, and there must be some reason for making it a template. Speaking simply in language-centric terms, did I miss something with the above analysis? Can you give a C++-programming reason for wanting it to be a template, if my summary didn't cover it? That may be why you didn't get any answers thus far.
Here is a simple code in C++:
#include <iostream>
#include <typeinfo>
template<typename T>
void function()
{
std::cout << typeid(T).name() << std::endl;
}
int main()
{
function<int>();
function<double>();
return 0;
}
I have read that templates in C++ are a compile-time feature, which is not like generics in C#/Java.
So as I understand it, the C++ compiler will turn a single function definition into a number of different functions (depending on how many distinct types it is called with).
Am I right or not? I'm not an expert in C++ compilers, so I'm asking for your advice.
If my assumption about the compiler output is correct, I want to know whether I can describe the code above as static polymorphism.
It seems to be not overriding, but just calling a separate copy in the executable. Or perhaps it doesn't matter what the application has in its output binary image; only the C++ source level is important, and I shouldn't look at how the compiler produces the output.
Is there real static polymorphism in C++?
Absolutely - there are three mechanisms for static polymorphism: templates, macros and function overloading.
So as I understand it, the C++ compiler will turn a single function definition into a number of different functions (depending on how many distinct types it is called with). Am I right or not?
That's the general idea. The number of functions that get instantiated depends on the number of permutations of template parameters, which may be explicitly specified as in function<int> and function<double> or - for templates that use the template parameters to match function arguments - automatically derived from the function arguments, for example:
template <typename T, size_t N>
void f(T (&array)[N])
{ }
double d[2];
f(d); // instantiates/uses f<double, 2>()
You should end up with a single copy of each instantiated template in the executable binary image.
I want to know if I can describe the code above as static polymorphism?
Not really.
your function<> template is instantiated for two types
crucially, polymorphism is not used to choose which of the two instantiations of function to dispatch to at the call sites
trivially, during such instantiations typeid(T) is evaluated for int and double and effectively behaves polymorphically from a programmer perspective (it's a compiler keyword though - implementation unknown)
trivially, a mix of static and nominally dynamic (but here likely optimisable to static) polymorphism supports your use of std::cout
Background - polymorphism and code generation
The requirement I consider crucial for polymorphism is:
when code is compiled (be it "normal" code or per template instantiation or macro substitution), the compiler automatically chooses (creates if necessary) - and either inlines or calls - distinct type-appropriate behaviour (machine code)
i.e. code selection/creation is done by the compiler based only on the type(s) of variable(s) involved, rather than being explicitly hard-coded by the programmer's choice between distinct function names / instantiations each only capable of handling one type or permutation of types
for example, std::cout << x; polymorphically invokes different code as the type of x is varied but still outputs x's value, whereas the non-polymorphic printf("%d", x) handles ints but needs to be manually modified to printf("%c", x); if x becomes a char.
But, what we're trying to achieve with polymorphism is a bit more general:
reuse of algorithmic code for multiple types of data without embedding explicit type-detection and branching code
that is, without the program source code containing if (type == X) f1(x) else f2(x);-style code
reduced maintenance burden as after explicitly changing a variable's type fewer consequent changes need to be manually made throughout the source code
These bigger-picture aspects are supported in C++ as follows:
instantiation of the same source code to generate distinct behaviours (machine code) for some other type or permutation of types (this is an aspect of parametric polymorphism),
actually known as "instantiation" for templates and "substitution" for preprocessor macros, but I'll use "instantiation" hereafter for convenience; conceptually, re-compilation or re-interpretation...
implicit dispatch (static or dynamic) to distinct behaviour (machine code) appropriate to the distinct type(s) of data being processed.
...and in some minor ways per my answer at Polymorphism in c++
Different types of polymorphism involve either or both of these:
dispatch (2) can happen during instantiation (1) for templates and preprocessor macros,
instantiation (1) normally happens during dispatch (2) for templates (with no matching full specialisation) and function-like macros (kind of cyclic, though macros don't expand recursively)
dispatch (2) can happen without instantiation (1) when the compiler selects a pre-existing function overload or template specialisation, or when the compiler triggers virtual/dynamic dispatch.
What does your code actually use?
function<int> and function<double> reuse the function template code to create distinct code for each of those types, so you are getting instantiation (1) as above. But, you are hard-coding which instantiation to call rather than having the compiler implicitly select an instantiation based on the type of some parameter, i.e. so you don't directly utilise implicit dispatch ala (2) when calling function. Indeed, function lacks a parameter that the compiler could use for implicit selection of a template instantiation.
Instantiation (1) alone is not enough to consider your code to have used polymorphism. Still, you've achieved convenient code re-use.
So what would be unambiguously polymorphic?
To illustrate how templates can support dispatch (2) as well as instantiation (1) and unarguably provide "polymorphism", consider:
template<typename T>
void function(T t)
{
std::cout << typeid(T).name() << std::endl;
}
function(4); // note: int argument, use function<int>(...)
function(12.3); // note: double argument, use function<double>(...)
The above code also utilises the implicit dispatch to type-appropriate code - aspect "2." above - of polymorphism.
Non type parameters
Interestingly, C++ provides the ability to instantiate templates with integral parameters such as boolean, int and pointer constants, and use them for all manner of things without varying your data types, and therefore without any polymorphism involved. Macros are even more flexible.
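A minimal sketch of that (Buffer is just an illustrative name; nothing here is polymorphic, it's pure compile-time configuration):
#include <cstddef>

// N is a non-type template parameter: it fixes the buffer size at compile
// time, but the element type never varies.
template<std::size_t N>
struct Buffer {
    char data[N];
    static const std::size_t size = N;
};

Buffer<64> small_buffer;   // Buffer<64> and Buffer<4096> are distinct types...
Buffer<4096> large_buffer; // ...yet hold the same kind of data throughout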
Note that using a template in a C.R.T.P. style is NOT a requirement for static polymorphism - it's an example application thereof. During instantiation, the compiler exhibits static polymorphism when matching operations to implementations in the parameter-specified type.
Discussion on terminology
Getting a definitive definition of polymorphism is difficult. Wikipedia quotes Bjarne Stroustrup's online glossary: "providing a single interface to entities of different types". This implies struct X { void f(); }; struct Y { void f(); }; already manifests polymorphism, but IMHO we only get polymorphism when we use the correspondence of interface from client code, e.g. template <typename T> void poly(T& t) { t.f(); } requires static polymorphic dispatch to t.f() for each instantiation.
Wikipedia lists three types of polymorphism:
If a function denotes different and potentially heterogeneous implementations depending on a limited range of individually specified types and combinations, it is called ad hoc polymorphism. Ad hoc polymorphism is supported in many languages using function overloading.
If the code is written without mention of any specific type and thus can be used transparently with any number of new types, it is called parametric polymorphism. In the object-oriented programming community, this is often known as generics or generic programming. In the functional programming community, this is often simply called polymorphism.
Subtyping (or inclusion polymorphism) is a concept wherein a name may denote instances of many different classes as long as they are related by some common superclass. In object-oriented programming, this is often referred to simply as polymorphism.
The first one refers to function overloading. The third type refers to late binding or runtime polymorphism, the kind you would see for example in inheritance. The second one is what we're interested in.
Templates are a compile-time construct, and type deduction is the process by which the compiler automatically figures out the template arguments. This is where static polymorphism comes in.
For example:
template <typename T, typename U>
auto func(const T& t, const U& u) -> decltype(t + u)
{
return (t + u);
}
This will work for any two types with compatible plus operators. There's no need to specify the template argument if the compiler can figure it out. It would be ad hoc polymorphism if you wrote function overloads that performed different behavior, for example string concatenation vs integer addition.
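For contrast, a sketch of the ad hoc (overloading) case (combine is just an illustrative name):
#include <string>

// Ad hoc polymorphism: one name, explicitly written implementations for a
// limited set of types, chosen by overload resolution.
int combine(int a, int b) { return a + b; }                                        // integer addition
std::string combine(const std::string& a, const std::string& b) { return a + b; }  // concatenation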
However, in your example, you have instantiations for your functions that are distinct, function<int> and function<double>. Here's a quote:
To be polymorphic, [a()] must be able to operate with values of at least two distinct types (e.g. int and double), finding and executing type-appropriate code.
In that case, the instantiations are specific for the type in which they were instantiated, so there is no polymorphism involved.
There is no static polymorphism in your example because there is no polymorphism. This is because function<int>() does not look the same as function<double>().
Examples of static polymorphism would include simple function overloading, function templates that can work with type deduction, type traits, and the curiously recurring template pattern (CRTP). So this variation on your example would qualify as static polymorphism:
#include <iostream>
#include <typeinfo>
template<typename T>
void function(T)
{
std::cout << typeid(T).name() << std::endl;
}
int main()
{
function(0); // T is int
function(0.0); // T is double
return 0;
}
Here is another example:
template<typename T>
void function(T t)
{
t.foo();
}
struct Foo
{
void foo() const {}
};
struct Bar
{
void foo() const {}
};
int main()
{
Foo f;
Bar b;
function(f); // T is Foo
function(b); // T is Bar
}
For C++, the term 'static polymorphism' is normally used for e.g. CRTP-style design patterns:
template<typename Derived>
class Base
{
void someFunc() {
static_cast<Derived*>(this)->someOtherFunc();
};
};
class ADerived : public Base<ADerived>
{
void someOtherFunc() {
// ...
}
};
It generally means that types and inheritance constraints are deduced and verified at compile/link time. The compiler will emit error messages if operations are missing or invalid on the specified types. In that sense it's not really polymorphism.
While it can be argued that the example in the OP does not exhibit static polymorphism, the use of specialization can make a more compelling case:
template<class T>
class Base
{
public:
int a() { return 7; }
};
template<>
class Base<int>
{
public:
int a() { return 42; }
};
template<>
class Base<double>
{
public:
int a() { return 121; }
};
We see here that for most types a() will return 7. The specialized instantiations for int and double can have radically different behaviors, demonstrated in the simple case by the differing return values. The same can be done for templates with, for example, int parameters, and can exhibit what might oddly be termed static recursive polymorphism.
While the term polymorphic is possibly being stretched, the concept is definitely there. What is missing is not the ability to redefine functions, but the ability for specialized classes to automatically inherit functions whose behavior does not change.
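One way to get that inheritance back (a sketch of my own, not from the original post) is to keep the unchanging functions in a shared implementation base that every specialization derives from:
// Shared behaviour lives once, outside the specializations.
template<class T>
class BaseCommon
{
public:
    int b() { return 100; } // behaviour that never changes with T
};

template<class T>
class Base : public BaseCommon<T>
{
public:
    int a() { return 7; }
};

template<>
class Base<int> : public BaseCommon<int>
{
public:
    int a() { return 42; } // only a() is redefined; b() is inherited unchanged
};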
I am wrapping a library which I did not write, to make it more user-friendly. There are a huge number of functions which are very basic, so it's not ideal to have to wrap all of these when all that is really required is type conversion of the results.
A contrived example:
Say the library has a class QueryService; among others, it has this method:
WeirdInt getId() const;
I'd like a standard int in my interface. However, I can get an int out of WeirdInt no problem, as I know how to do this. In this case let's say that WeirdInt has:
int getValue() const;
This is a very simple example; often the type conversion is more complicated and not always just a call to getValue().
There are literally hundreds of function calls that return types like these, and more are added all the time, so I'd like to reduce the burden of constantly having to add a bajillion methods every time the library does, just to turn WeirdType into type.
I want to end up with a QueryServiceWrapper which has all the same functionality as QueryService, but where I've converted the types. Am I going to have to write an identically named method to wrap every method in QueryService? Or is there some magic I'm missing? There is a bit more to it as well, but it's not relevant to this question.
Thanks
The first approach I'd try is with templates, such that:
you provide a standard implementation for all the wrapper types which have a trivial getValue() method
you specialize the template for all the others
Something like:
class WeirdInt
{
int v;
public:
WeirdInt(int v) : v(v) { }
int getValue() { return v; }
};
class ComplexInt
{
int v;
public:
ComplexInt(int v) : v(v) { }
int getValue() { return v; }
};
template<typename A, typename B>
A wrap(B type)
{
return type.getValue();
}
template<>
int wrap(ComplexInt type)
{
int v = type.getValue();
return v*2;
};
int x = wrap<int, WeirdInt>(WeirdInt(5));
int y = wrap<int, ComplexInt>(ComplexInt(10));
If the wrapper methods for QueryService have a simple pattern, you could also think of generating QueryServiceWrapper with some perl or python script, using some heuristics. Then you need to define some input parameters at most.
Even defining some macros would help in writing this wrapper class.
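For instance, something along these lines for the trivial cases (WRAP_SIMPLE and service_ are illustrative names, not part of the library):
// Generates a forwarding method that unwraps the result via getValue();
// only useful where the conversion really is that simple.
#define WRAP_SIMPLE(ReturnType, name) \
    ReturnType name() const \
    { \
        return service_.name().getValue(); \
    }

class QueryServiceWrapper
{
public:
    WRAP_SIMPLE(int, getId) // expands to: int getId() const { return service_.getId().getValue(); }
private:
    QueryService service_;
};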
Briefly, if your aim is to encapsulate the functionality completely, so that WeirdInt and QueryService are not exposed to the 'client' code (such that you don't need to include any headers which declare them in the client code), then I doubt the approach you take will be able to benefit from any magic.
When I've done this before, my first step has been to use the pimpl idiom so that your header contains no implementation details as follows:
QueryServiceWrapper.h
class QueryServiceWrapperImpl;
class QueryServiceWrapper
{
public:
QueryServiceWrapper();
virtual ~QueryServiceWrapper();
int getId();
private:
QueryServiceWrapperImpl* impl_;
};
and then in the definition, you can put the implementation details, safe in the knowledge that it will not leak out to any downstream code:
QueryServiceWrapper.cpp
struct QueryServiceWrapperImpl
{
public:
QueryService svc_;
};
// ...
int QueryServiceWrapper::getId()
{
return impl_->svc_.getId().getValue();
}
Without knowing what different methods need to be employed to do the conversion, it's difficult to add much more here, but you could certainly use template functions to do the conversion of the most popular types.
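Something along these lines, perhaps (toNative is my name; it assumes the simple getValue() pattern):
// Generic conversion for the common case where the library type just needs
// getValue() called on it; anything more involved gets its own overload.
template<typename Native, typename Weird>
Native toNative(const Weird& w)
{
    return w.getValue();
}

// Used inside a wrapper method, e.g.:
//   int QueryServiceWrapper::getId() { return toNative<int>(impl_->svc_.getId()); }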
The downside here is that you'd have to implement everything yourself. This could be a double-edged sword, as it's then possible to implement only the functionality that you really need. There's generally no point in wrapping functionality that is never used.
I don't know of a 'silver bullet' that will implement the functions - or even empty wrappers on the functions. I've normally done this by a combination of shell scripts to either create the empty classes that I want or taking a copy of the header and using text manipulation using sed or Perl to change original types to the new types for the wrapper class.
It's tempting in these cases to use public inheritance to enable access to the base functions while allowing functions to be overridden. However, this is not applicable in your case as you want to change return types (not sufficient for an overload) and (presumably) you want to prevent exposure of the original Weird types.
The way forward here has to be to use aggregation although in such as case there is no way you can easily avoid re-implementing (some of) the interfaces unless you are prepared to automate the creation of the class (using code generation) to some extent.
A more complex approach is to introduce a number of facade classes over the original QueryService, each of which has a limited set of functions for one particular query or query type. I don't know what your particular QueryService does, so here is an imaginary example:
suppose the original class has a lot of weird methods working with strange types
struct OriginQueryService
{
WeirdType1 query_for_smth(...);
WeirdType1 smth_related(...);
WeirdType2 another_query(...);
void smth_related_to_another_query(...);
// and so on (a lot of other function-members)
};
then you may write some facade classes like this:
struct QueryFacade
{
OriginQueryService& m_instance;
QueryFacade(OriginQueryService* qs) : m_instance(*qs) {}
// Wrap original query_for_smth(), possible w/ changed type of
// parameters (if you'd like to convert 'em from C++ native types to
// some WeirdTypeX)...
DesiredType1 query_for_smth(...);
// more wrappers related to this particular query/task
DesiredType1 smth_related(...);
};
struct AnotherQueryFacade
{
OriginQueryService& m_instance;
AnotherQueryFacade(OriginQueryService* qs) : m_instance(*qs) {}
DesiredType2 another_query(...);
void smth_related_to_another_query(...);
};
Every method delegates the call to m_instance and is decorated with input/output type conversion in whatever way you want. Type conversion can be implemented as @Jack describes in his post. Or you can provide a set of free functions in your namespace (like Desired fromWeird(const Weird&); and Weird toWeird(const Desired&);) which would be chosen by ADL, so if some new type arises, all you have to do is provide overloads of these two functions... such an approach works quite well in boost::serialization.
Also, you may provide a generic (template) version of these functions, which would call getValue() for example, in case a lot of your Weird types have such a member.
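A sketch of that generic fallback next to one hand-written overload (WeirdDate and toUnixTime() are made-up names, and the return type is narrowed to int for brevity):
// Generic fallback: used whenever no more specific overload exists.
template<typename Weird>
int fromWeird(const Weird& w)
{
    return w.getValue(); // works for every type with a suitable getValue() member
}

// Hand-written overload for a type that needs something other than getValue();
// overload resolution prefers it over the template.
int fromWeird(const WeirdDate& d)
{
    return d.toUnixTime();
}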
I'm abusing C++ templates a little and I'm having trouble figuring something out. Let's say I have two types that really should be inherited from a base type, but for speed reasons, I can't afford to have the virtual function overhead (I've benchmarked it, and virtual calls ruin things for me!).
First, here are the two classes I have
template<class DataType> class Class1
{
//Lots of stuff here
};
template<class DataType> class Class2
{
//The same stuff as in Class1, but implemented differently
};
In a typical oo design, Class1 and Class2 would inherit from IInterface and I could have a function that looks like this
void DoStuff(IInterface& MyInterface)
{
}
But I can't do that, so I've done this
template <class C>
void DoStuff(C& c)
{
}
I know it's not pretty, as there's nothing (at the compiler level) to enforce that Class1 and Class2 implement the same interface, but for speed reasons, I'm breaking some of the rules.
What I'd love to do is create a callback function for DoStuff, but I can't figure out how to make it work with the templates (especially since there's the hidden <DataType> in there).
For example this works right now
void DoStuff(char* filename)
{
switch (//figure out the type i need to make)
{
case 1: return DoStuff(Class1<int>(filename));
case 2: return DoStuff(Class1<double>(filename));
}
}
template<class DataType>
void DoStuff(DataType* pdata)
{
return DoStuff(Class2<DataType>(pdata));
}
template<class C>
void DoStuff(C c)
{
c.Print();
}
Now I know you're asking, why use Class1 and Class2? Well the underlying difference between dealing with a file and dealing with memory is so big, that it makes sense to have different classes for the different type of input (rather than just overloading the constructor and having it behave differently for the different inputs). Again, I did benchmark this and it's much faster to have the special cases handled in their own classes rather than having cases/ifs in every function.
So what I'd like to do is hide a lot of this implementation from the junior developers; I don't want them to have to create three different overloaded DoStuffs to handle the different inputs. Ideally, I'd just set up some type of callback with #defines, and all they'd need to do is something like create a class called DoStuff, overload the () operator, and have the functor do the work.
The trouble I'm having is that the DoStuff function that does the work is only templatized by <class C>, but C itself is templatized by <class DataType>, and I can't figure out how to pass everything around in a generic way. E.g., I cannot use template <class C<DataType>> or template<template<class DataType> class C>. It just won't compile.
Does anyone have a good trick to have a generic call back, either a function or a functor (I don't care), with this nested templated class? Basically I want something where I can write a generic function that doesn't care about the class that's storing the data and have that called by a mostly common function that figures out which class to use.
BigSwitch(CallBack,Inputs)
{
switch(//something)
{
case 1: return CallBack(Class1<Type>(Inputs))
case 2: return CallBack(Class2<Type>(Inputs))
}
}
This way I can write one BigSwitch function and have other people write the CallBack functions.
Any Ideas?
EDIT for clarification for Jalf:
I have two very similar classes, Class1 and Class2, which represent basically the same type of data, however the data store is vastly different. To make it more concrete, I'll use a simple example: Class1 is a simple array and Class2 looks like an array, but rather than storing in memory it stores in a file (because it's too big to fit in memory). So I'll call them MemArray and FileArray right now. So let's say I wanted the Sum of the arrays. I can do something like this
template <class ArrayType, class ReturnType>
ReturnType Sum(ArrayType A)
{
ReturnType S=0;
for (int i=A.begin();i<A.end();++i)
{
S+=A[i];
}
return S;
}
But now, I need a way to load real data into the array. If it's a memory-based array, I'd do this
MemArray<DataType> M(pData);
and if it's file-based, I'd do this
FileArray<DataType> F(filename);
and both of these calls are valid (because the compiler generates both code paths at compile time)
double MS=Sum<MemArray<DataType>,double>(M);
double FS=Sum<FileArray<DataType>,double>(F);
All of this assumes that I know what the DataType is, but for a file based array, I may not know the data type until I open the file and query the header to know what kind of data is in the array.
double GetSum(char* filename)
{
int DataTypeCode=GetDataTypeCode(filename);
switch (DataTypeCode)
{
case 1: return Sum<FileArray<int>,double>(FileArray<int>(filename));
case 2: return Sum<FileArray<double>,double>(FileArray<double>(filename));
}
}
template <class DataType>
double GetSum(DataType* pData)
{
return Sum<MemArray<DataType>,double>(MemArray<DataType>(pData));
}
All of this works, but it requires writing two overloaded GetX functions and an X function for everything that I'd want to do. The GetX functions are basically the same code every time, except for the X that they call. So I'd love to be able to write something like
double GetX(CallBackType X, char* filename)
{
int DataTypeCode=GetDataTypeCode(filename);
switch (DataTypeCode)
{
case 1: return X<FileArray<int>,double>(FileArray<int>(filename));
case 2: return X<FileArray<double>,double>(FileArray<double>(filename));
}
}
template <class DataType>
double GetX(CallBackType, DataType* pData)
{
return X<MemArray<DataType>,double>(MemArray<DataType>(pData));
}
so that I could call
GetX(Sum,filename)
then later when someone else wants to add a new function, all they need to do is write the function and call
GetX(NewFunction,filename)
I'm just looking for a way to write my overloaded GetX functions and my X functions so that I can abstract away the input/storage from the actual algorithms. Normally, this isn't a hard problem; it's just that I'm having trouble because the X function contains a template argument that itself is templated. The template<class ArrayType> also has an implicit ArrayType<DataType> hidden in there. The compiler is unhappy about that.
Focusing on the initial part of your question (why you're not just using inheritance):
A common way to do compile-time polymorphism and give access to the derived class' members through the base class is through the CRTP pattern.
template <typename T>
class IInterface {
void DoStuff() {
static_cast<T*>(this)->DoStuff();
}
};
class Class1 : public IInterface<Class1> {
void DoStuff(){...}
};
Would that solve your problem?
Edit:
By the way, I'm glad I could help, but next time please try to structure your question a bit more.
I really had no clue what you were asking, so this was just a stab in the dark, based on the first 3 lines of your question. ;)
You never really explain what you're trying to achieve, only what your non-functioning workaround looks like. Start out stating the problem, since that's what we really need to know. Then you can provide details about your current workarounds. And when posting code, add some context. Where are DoStuff() called from, and why would junior developers need to define them? (You've already done that, haven't you?)
What would said junior developers be doing with this code in the first place?
And it's confusing that you provide the specific cases (1 and 2), but not the switch statement itself (//something)
You'll get a lot more (and better and faster) answers next time if you try to make it easy for the person answering. :)
As to your question about a "generalized callback": you can use a boost::function, but that essentially uses virtual functions under the covers (it may not, but at least a similar concept), so the performance difference you are looking for won't be there (in fact, boost::function will probably be slower because of heap allocation).
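If that overhead matters, the functor idea the question already hints at (a class with a templated operator(), passed as a template parameter) avoids both the virtual call and the nested-template problem, because the callback never has to name FileArray<T> or MemArray<T> itself. A rough sketch, reusing the question's FileArray and GetDataTypeCode (SumCallBack is just an adapted name):
// The callback: its operator() is a template, so it works for any array-like
// type the dispatcher hands it.
struct SumCallBack
{
    template <class ArrayType>
    double operator()(ArrayType a) const
    {
        double s = 0;
        for (int i = a.begin(); i < a.end(); ++i) // same iteration style as the question's Sum()
            s += a[i];
        return s;
    }
};

// The dispatcher is templated only on the callback type; it chooses the
// concrete array and element type, and the callback doesn't care.
template <class CallBack>
double GetX(CallBack x, char* filename)
{
    switch (GetDataTypeCode(filename))
    {
        case 1:  return x(FileArray<int>(filename));
        case 2:  return x(FileArray<double>(filename));
        default: return 0.0;
    }
}

// Usage: double total = GetX(SumCallBack(), filename);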