Should I use Concept or Interface? (C++) [duplicate] - c++

I have two solutions for the same problem, making some kind of callback from one "controller" to the object it uses, and I don't know which to choose.
Solution 1: Use interfaces
struct AInterface
{
    virtual void f() = 0;
};

struct A : public AInterface
{
    void f() { std::cout << "A::f()" << std::endl; }
};

struct UseAInterface
{
    UseAInterface(AInterface* a) : _a(a) {}
    void f() { _a->f(); }

    AInterface* _a;
};
Solution 2: Use templates
struct A
{
    void f() { std::cout << "A::f()" << std::endl; }
};

template<class T>
struct UseA
{
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }

    T* _a;
};
This is just a simple sample to illustrate my problem. In the real world the interface will have several functions, and one class may (and will!) implement multiple interfaces.
The code will not be used as a library for external projects and I don't have to hide the template implementation. I mention this because the first case would be better if I needed to hide the "controller" implementation.
Can you please tell me the advantages/disadvantages for each case and what is better to use?

In my opinion performance should be ignored (well, not ignored, but micro-optimizations should) until you have a reason to care. Without some hard requirements (this is in a tight loop that takes most of the CPU time, the actual implementations of the interface member functions are very small...) it would be very hard, if not impossible, to notice the difference.
So I would focus on a higher design level. Does it make sense that all types used in UseA share a common base? Are they really related? Is there a clear is-a relationship between the types? Then the OO approach might work. Are they unrelated? That is, do they share some traits but there is no direct is-a relationship that you can model? Go for the template approach.
The main advantage of the template is that you can use types that don't conform to a particular and exact inheritance hierarchy. For example, you can store anything in a std::vector that is copy-constructible (move-constructible in C++11), even though an int and a Car are not related in any way. This way, you reduce the coupling between the different types used with your UseA type.
One of the disadvantages of templates is that each template instantiation is a different type, unrelated to the rest of the instantiations generated from the same base template. This means that you cannot store UseA<A> and UseA<B> inside the same container, that there will be code bloat (UseA<int>::foo and UseA<double>::foo are both generated in the binary), and that compile times grow (even without considering the extra functions, two translation units that use UseA<int>::foo will each generate the same function, and the linker will have to discard one of them).
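To make both sides of the trade-off concrete, a minimal sketch based on the question's UseA (the type B is made up for illustration):
#include <iostream>
#include <vector>

struct A { void f() { std::cout << "A::f()" << std::endl; } };
struct B { void f() { std::cout << "B::f()" << std::endl; } };  // no relation to A at all

template<class T>
struct UseA {
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }
    T* _a;
};

int main() {
    A a; B b;
    UseA<A> ua(&a);  // works: A and B only need the right "shape"
    UseA<B> ub(&b);
    ua.f();
    ub.f();
    std::vector<UseA<A>> v{ua};  // fine: a single instantiation
    // v.push_back(ub);          // error: UseA<B> is a completely unrelated type
}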
Regarding the performance that other answers claim, they are somewhat right, but most miss the important point. The main advantage of choosing templates over dynamic dispatch is not the avoided overhead of the dynamic dispatch itself, but the fact that small functions can be inlined by the compiler (if the function definition itself is visible).
If the functions are not inlined, then unless a function takes only a very few cycles to execute, the overall cost of the function will dwarf the extra cost of dynamic dispatch (i.e. the extra indirection in the call and the possible offset of the this pointer in the case of multiple/virtual inheritance). If the functions do some actual work, and/or they cannot be inlined, they will have the same performance.
Even in the few cases where the difference in performance between the two approaches could be measurable (say the functions take only two cycles, so dispatch doubles the cost of each call), if this code is part of the 80% of the code that takes less than 20% of the CPU time, and this particular piece takes, say, 1% of the CPU (which is a huge amount given the premise that for the difference to be noticeable the function itself must take only a cycle or two!), then you are talking about roughly 36 seconds (1% of an hour) in a one-hour program run. Checking the premise again: on a 2 GHz CPU, spending 1% of the time in a two-cycle function means the function would have to be called over 10 million times per second.
All of the above is hand-waving, and it errs in the opposite direction from the other answers (i.e. there are imprecisions here that could make the difference seem smaller than it really is), but reality is closer to this than it is to the blanket answer that dynamic dispatch will make your code slower.

There are pros and cons to each. From The C++ Programming Language:
Prefer a template over derived classes when run-time efficiency is at a premium.
Prefer derived classes over a template if adding new variants without recompilation is important.
Prefer a template over derived classes when no common base can be defined.
Prefer a template over derived classes when built-in types and structures with compatibility constraints are important.
However, templates have their drawbacks:
Code that uses OO interfaces can be hidden in .cpp/.cc files, whereas templates force the whole implementation into the header file;
Templates cause code bloat;
OO interfaces are explicit, whereas the requirements on template parameters are implicit and exist only in the developer's head;
Heavy usage of templates hurts compilation speed.
Which to use depends on your situation and somewhat on your preferences. Templated code can produce some obtuse compilation errors, which has led to tools such as STL Error decrypt. Hopefully, concepts will be implemented soon.
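Since this answer was written, C++20 has shipped concepts. A minimal sketch of how a concept makes the implicit requirement explicit, applied to the question's UseA (the HasF name is illustrative):
#include <concepts>
#include <iostream>

// The requirement "T must have a callable f()" written down as a C++20 concept.
template<class T>
concept HasF = requires(T t) { t.f(); };

template<HasF T>
struct UseA {
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }
    T* _a;
};

struct A { void f() { std::cout << "A::f()" << std::endl; } };
struct Bad {};  // no f(): UseA<Bad> now fails with a readable diagnostic

int main() {
    A a;
    UseA<A> u(&a);
    u.f();
    // UseA<Bad> v(nullptr);  // error: constraints not satisfied ('Bad' does not model HasF)
}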

The template case will have slightly better performance, because no virtual call is involved. If the callback is used extremely frequently, favour the template solution. Note that "extremely frequently" doesn't really kick in until thousands per second are involved, probably even later.
On the other hand, the template has to be in a header file, meaning each change to it will force recompiling all sites which call it, unlike in the interface scenario, where the implementation could be in a .cpp and be the only file needing recompilation.

You could consider an interface like a contract. Any class deriving from it must implement the methods of the interface.
Templates, on the other hand, implicitly have some constraints. For example, your T template parameter must have a method f. These implicit requirements should be documented carefully; error messages involving templates can be quite confusing.
Boost Concept can be used for concept checking, which makes the implicit template requirements easier to understand.
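If Boost is not an option, a common pre-C++20 alternative is a static_assert built on the detection idiom, which turns the implicit requirement into a readable error message. A C++17 sketch applied to the question's UseA (HasF is an illustrative name):
#include <type_traits>
#include <utility>

// Detection idiom: HasF<T>::value is true iff t.f() is well-formed.
template<class T, class = void>
struct HasF : std::false_type {};

template<class T>
struct HasF<T, std::void_t<decltype(std::declval<T&>().f())>> : std::true_type {};

template<class T>
struct UseA {
    static_assert(HasF<T>::value, "UseA<T> requires T to provide a callable f()");
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }
    T* _a;
};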

The choice you describe is the choice between static polymorphism versus dynamic polymorphism. You'll find many discussions of this topic if you search for that.
It's hard to give a specific answer to such a general question. In general, static polymorphism may give you better performance, but the lack of Concepts in the C++11 standard also means that you can get interesting compiler error messages when a class does not model the required concept.

I would go with the template version. If you think about this in terms of performance then it makes sense.
Virtual interface - Using virtual means the call target is resolved at runtime. This has overhead: each call has to go through the vtable (virtual function table) to locate the right method in memory.
Templates - You get static binding. When your method is called, the compiler already knows exactly which function is invoked, so there is no table lookup (and the call can be inlined).
If you are interested in performance then templates are almost always the choice to go with.

How about option 3?
#include <memory>       // std::addressof
#include <type_traits>  // std::enable_if_t, std::invoke_result_t, ...
#include <utility>      // std::forward

template<auto* operation, class Sig = void()>
struct can_do;

template<auto* operation, class R, class... Args>
struct can_do<operation, R(Args...)> {
    void* pstate = 0;
    R (*poperation)(void*, Args&&...) = 0;

    template<class T,
        std::enable_if_t<std::is_convertible_v<
            std::invoke_result_t<decltype(*operation), T&&, Args&&...>,
            R>,
        bool> = true,
        std::enable_if_t<!std::is_same_v<can_do, std::decay_t<T>>, bool> = true
    >
    can_do(T&& t):
        pstate((void*)std::addressof(t)),
        poperation(+[](void* pstate, Args&&... args) -> R {
            return (*operation)(std::forward<T>(*static_cast<std::remove_reference_t<T>*>(pstate)), std::forward<Args>(args)...);
        })
    {}

    can_do(can_do const&) = default;
    can_do(can_do&&) = default;
    can_do& operator=(can_do const&) = default;
    can_do& operator=(can_do&&) = default;
    ~can_do() = default;

    auto operator->*(decltype(operation)) const {
        return [this](auto&&... args) -> R {
            return poperation(pstate, decltype(args)(args)...);
        };
    }
};
Now you can do:
auto invoke_f = [](auto&& elem) -> void { elem.f(); };

struct UseA
{
    UseA(can_do<&invoke_f> a) : m_a(a) {}
    void f() { (m_a->*&invoke_f)(); }

    can_do<&invoke_f> m_a;
};
Test code:
struct A {
    void f() { std::cout << "hello world"; }
};

struct A2 {
    void f() { std::cout << "goodbye"; }
};

A a;
UseA b(a);
b.f();

A2 a2;
UseA b2(a2);
b2.f();
Having a richer multi-operation interface on can_do is left as an exercise.
UseA is not a template. A and A2 have no common base interface class.
Yet it works.

Related

C++: How is this technique of compile-time polymorphism called and what are the pros and cons?

At work, I have come across code which basically looks like this:
#include <iostream>
#include <string>

using namespace std;

enum e_Specialization {
    Specialization_A,
    Specialization_B
};

template<e_Specialization>
class TemplatedBase {
public:
    string foo() { return "TemplatedBase::foo"; }
};

template<>
string TemplatedBase<Specialization_A>::foo() { return "TemplatedBase<Specialization_A>:foo"; }

int main() {
    TemplatedBase<Specialization_A> o;
    cout << o.foo() << endl;
    return 0;
}
which outputs
TemplatedBase<Specialization_A>:foo
I haven't been able to find any discussion on this technique anywhere.
The code's creator argued mostly from the optimization side of things, that no virtual dispatch happens. In our case this optimization is not necessary, but I see how it could be useful.
My questions are:
1. Is this technique documented anywhere and does it have a name?
2. In comparison to specialization by inheritance, are there any advantages to this at all?
3. How does this relate to CRTP? To me it seems that the same is achieved, with all pros and cons of CRTP.
As far as whether the technique is documented goes (and if you are doing this with C++11 or later, please use an enum class): it's a fairly common technique to template on an enum or a boolean and then write your specializations.
One clear difference is that with this technique, you obviously can't add more specializations without modifying the primary code. An enum (or enum class) only has so many values. That could be either a good or bad thing, depending on whether you want it to be centrally tracked. But that could easily be changed by templating on a class and encapsulating it, a third option which isn't quite this technique nor does it involve public inheritance.
This technique has its biggest advantage, IMHO, in that you have the option to implement things inline. For example:
template<e_Specialization e>
class TemplatedBase {
public:
    void bar() {
        // code
        if (e == Specialization_A) {
            ...
        }
        // code
    }
};
I see this a lot with classes that are known from the outset to be in a performance critical path. There could be a boolean variable that controls whether or not intrusive performance profiling occurs. Because these branches are known at compile time, they are trivially optimized. This is a good way to do it because you can still use both versions of the class in the same build (e.g. run unit tests on both).
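With C++17 the same intent can be written with if constexpr, which makes it explicit that the untaken branch is discarded at compile time. A minimal sketch reusing the question's enum:
#include <iostream>

enum e_Specialization { Specialization_A, Specialization_B };

template<e_Specialization e>
class TemplatedBase {
public:
    void bar() {
        // shared code ...
        if constexpr (e == Specialization_A) {
            std::cout << "A-only path\n";  // only compiled into TemplatedBase<Specialization_A>
        } else {
            std::cout << "default path\n";
        }
        // shared code ...
    }
};

int main() {
    TemplatedBase<Specialization_A>{}.bar();  // prints "A-only path"
    TemplatedBase<Specialization_B>{}.bar();  // prints "default path"
}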
Another difference compared to inheritance is that derived classes can easily add state if they need to. This technique as it stands would require specializing the whole class to add state. Again, this could be good or bad; a more constrained design is good if you don't need to break those constraints. And you can easily change the design to enable adding extra state:
template <e_Specialization e>
struct ExtraState {};

template <e_Specialization e>
class TemplatedBase : private ExtraState<e> {
    ...
A third, minor point is that you aren't exposing any inheritance relationship. This is mostly small, but remember that with inheritance you can get things like slicing or even unwanted implicit reference/pointer conversions. This is a pretty strict win for this technique.
In sum I would say that:
It's a win if you utilize the ability to implement something once and write the differences inline with no perf penalty
It's a win if you want to be explicit in your design about there being a limited number of implementations.
If neither of those are true then the design is a bit unorthodox and a little more complex compared to just using inheritance, although it doesn't really have any strong technical disadvantage. So if you have a good number of junior devs around, this code might be harder to read.

Dynamic vs Static Polymorphism in C++: which is preferable?

I understand that dynamic/static polymorphism depends on the application design and requirements. However, is it advisable to ALWAYS choose static polymorphism over dynamic if possible? In particular, I can see the following two design choices in my application, both of which seem to be advised against:
Implement static polymorphism using CRTP: no vtable lookup overhead while still providing an interface in the form of a template base class. But it uses a lot of switches and static_casts to access the correct class/method, which is hazardous.
Dynamic polymorphism: implement interfaces (pure virtual classes), incurring lookup cost even for trivial functions like accessors/mutators.
My application is very time critical, so I am in favor of static polymorphism. But I need to know if using too many static_casts is an indication of poor design, and how to avoid that without incurring latency.
EDIT: Thanks for the insight. Taking a specific case, which of these is a better approach?
class IMessage_Type_1
{
    virtual long getQuantity() = 0;
    ...
};

class Message_Type_1_Impl : public IMessage_Type_1
{
    long getQuantity() { return _qty; }
    ...
};
OR
template <class T>
class TMessage_Type_1
{
    long getQuantity() { return static_cast<T*>(this)->getQuantity(); }
    ...
};

class Message_Type_1_Impl : public TMessage_Type_1<Message_Type_1_Impl>
{
    long getQuantity() { return _qty; }
    ...
};
Note that there are several mutators/accessors in each class, and I do need to specify an interface in my application. With static polymorphism, I switch just once, to get the message type. With dynamic polymorphism, however, I pay for a virtual call on EACH method invocation. Doesn't that make a case for static polymorphism? I believe the static_cast in CRTP is quite safe and carries no performance penalty (it is resolved at compile time)?
Static and dynamic polymorphism are designed to solve different problems, so there are rarely cases where both would be appropriate. In such cases, dynamic polymorphism will result in a more flexible and easier to manage design. But most of the time, the choice will be obvious, for other reasons.
One rough categorisation of the two: virtual functions allow different implementations for a common interface; templates allow different interfaces for a common implementation.
A switch is nothing more than a sequence of jumps that, after optimization, becomes a jump to an address looked up in a table. Exactly like a virtual function call.
If you have to jump depending on a type, you must first select the type. If the selection cannot be done at compile time (essentially because it depends on the input), you must always perform two operations: select & jump. The syntactic tool you use to select doesn't change the performance, since both optimize the same way.
In fact you are reinventing the v-table.
You see the design issues associated with purely template based polymorphism. While a look at a virtual base class gives you a pretty good idea of what is expected from a derived class, this gets much harder in heavily templated designs. One can easily demonstrate that by introducing a syntax error while using one of the boost libraries.
On the other hand, you are fearful of performance issues when using virtual functions. Proving that this will be a problem is much harder.
IMHO this is a non-question. Stick with virtual functions until indicated otherwise. Virtual function calls are a lot faster than most people think (calling a function from a dynamically linked library also adds a layer of indirection; no one seems to think about that).
I would only consider a templated design if it makes the code easier to read (generic algorithms), you use one of the few cases known to be slow with virtual functions (numeric algorithms) or you already identified it as a performance bottleneck.
Static polymorphism may provide a significant advantage if the called method can be inlined by the compiler.
For example, if the virtual method looks like this:
protected:
    virtual bool is_my_class_fast_enough() override { return true; }
then static polymorphism should be the preferred way (otherwise, the method should be honest and return false :).
"True" virtual call (in most cases) can't be inlined.
Other differences(such as additional indirection in the vtable call) are neglectable
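To make the inlining point concrete, a small sketch with made-up names: the exact target of the template call is visible to the compiler and can typically be folded away, while the call through the base reference generally remains an indirect call:
#include <iostream>

struct Base {
    virtual ~Base() = default;
    virtual bool fast_enough() const { return true; }
};

// Dynamic dispatch: the target depends on the dynamic type of b,
// so this usually stays an indirect (non-inlined) call.
bool check_dynamic(const Base& b) { return b.fast_enough(); }

// Static dispatch: the exact type is known at compile time, so the
// compiler can inline the body and typically fold this to "return true".
template <class T>
bool check_static(const T& t) { return t.fast_enough(); }

int main() {
    Base b;
    std::cout << check_dynamic(b) << " " << check_static(b) << "\n";
}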
[EDIT]
However, if you really need runtime polymorphism (if the caller shouldn't know the method's implementation and, therefore, the method can't be inlined on the caller's side), then do not reinvent the vtable (as Emilio Garavaglia mentioned), just use it.

Why are template mixins in C++ not more of a mainstay?

I use template mixins in C++ a lot, but I'm wondering why the technique isn't used more. It seems like the ultimate in reuse. This mix of power and efficiency is one of the reasons I really love C++ and can't see myself moving to a JIT language.
This article: http://www.thinkbottomup.com.au/site/blog/C%20%20_Mixins_-_Reuse_through_inheritance_is_good is a good backgrounder if you don't know what they are, and puts the case so clearly in terms of reuse and performance.
The problem with mixins is... construction.
class Base1 { public: Base1(Dummy volatile&, int); };
class Base2 { public: Base2(Special const&, Special const&); };
And now, my super mixin:
template <typename T>
struct Mixin: T {};
Do you notice the issue here? How the hell am I supposed to pass the arguments to the constructor of the base class? What kind of constructor should Mixin propose?
It's a hard problem, and it was not solved until C++11, which enhanced the language with perfect forwarding.
// std::forward is in <utility>
template <typename T>
struct Mixin: T {
    template <typename... Args>
    explicit Mixin(Args&&... args): T(std::forward<Args>(args)...) {}
};
Note: double checks are welcome
So now we can really use mixins... and just have to change people habits :)
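For instance, a quick usage sketch of the forwarding mixin above (Engine and Logged are made-up names for illustration):
#include <iostream>
#include <string>
#include <utility>

// A base that is awkward to construct, like Base1/Base2 above.
struct Engine {
    Engine(std::string name, int power) : name(std::move(name)), power(power) {}
    std::string name;
    int power;
};

// The forwarding mixin: works no matter what constructor the base has.
template <typename T>
struct Logged : T {
    template <typename... Args>
    explicit Logged(Args&&... args) : T(std::forward<Args>(args)...) {}
    void log() const { std::cout << "logging " << this->name << "\n"; }
};

int main() {
    Logged<Engine> e("V8", 400);  // arguments forwarded straight to Engine
    e.log();
}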
Of course, whether we actually want to is a totally different subject.
One of the issues with mixins (that the poor article you reference happily skips over) is the dependency isolation you completely lose... and the fact that users of LoggingTask are then bound to write template methods. In very large code bases, more attention is given to dependencies than to performance, because dependencies burn human cycles while performance only burns CPU cycles... and those are usually cheaper.
Templates require the implementation to be visible in the translation unit, not just at link time (C++11 addresses that if you'll only use a pointer or reference to instantiations). This is a major issue for low-level code in enterprise environments: changes to the implementation will trigger (whether automatically or not) massive numbers of libraries and clients to recompile, rather than just needing relinking.
Also, each template instantiation creates a distinct type, which means functions intended to work on any of the template instantiations have to either be templated themselves, or use a form of handover to runtime polymorphism (which is often easy enough to do: you just need an abstract base class expressing the set of supported operations, and some "get me an accessor" function that returns a derived object holding a pointer to the template instantiation, with the related entries in the virtual dispatch table).
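A minimal sketch of that handover, with made-up names (Widget stands in for the class template, Printer for the abstract operation set):
#include <iostream>
#include <memory>

// Abstract base expressing the operations shared by all instantiations.
struct Printer {
    virtual ~Printer() = default;
    virtual void print() const = 0;
};

template <typename T>
struct Widget {  // stand-in for some class template
    T value;
    void print() const { std::cout << value << "\n"; }
};

// "Get me an accessor": wraps any Widget<T> behind the runtime interface.
template <typename T>
std::unique_ptr<Printer> make_printer(const Widget<T>& w) {
    struct Adaptor : Printer {
        const Widget<T>* p;
        explicit Adaptor(const Widget<T>* p) : p(p) {}
        void print() const override { p->print(); }
    };
    return std::make_unique<Adaptor>(&w);
}

int main() {
    Widget<int> a{42};
    Widget<double> b{3.14};
    std::unique_ptr<Printer> pa = make_printer(a);  // same runtime type now,
    std::unique_ptr<Printer> pb = make_printer(b);  // usable from non-template code
    pa->print();
    pb->print();
}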
Anyway, these issues are typically manageable, but the techniques to manage the coupling, dependencies and interfaces involved are a lot less publicised, understood and readily available than the simple mixin technique itself. The same is true of templates and policy classes, BTW.

C++ template duck-typing vs pure virtual base class inheritance

Which are the guidelines for choosing between template duck-typing and pure virtual base class inheritance? Examples:
// templates
class duck {
public:
    void sing() const { std::cout << "quack\n"; }
};

template<typename bird>
void somefunc(const bird& b) {
    b.sing();
}

// pure virtual base class
class bird {
public:
    virtual void sing() const = 0;
};

class duck : public bird {
public:
    void sing() const { std::cout << "quack\n"; }
};

void somefunc(const bird& b) {
    b.sing();
}
With template duck-typing, you are doing static polymorphism. Thus, you cannot do things like
std::vector<bird*> birds;
birds.push_back(new duck());
However, since you are relying on compile-time typing, you are a little more efficient (no virtual call implies no dynamic dispatch based on the dynamic type).
If having the "template nature" of things propagate widely is OK with you, templates ("compile-time duck typing") can give you blazing speed (avoiding the "level of indirection" that's implicit in a virtual-function call), though maybe at some cost in memory footprint. In theory, good C++ implementations could avoid that memory overhead, but I don't feel very confident that such high-quality compilers will necessarily be available on all platforms where you need to port ;-). So, at least pragmatically, it's something of a speed/memory trade-off. If the operations you're doing are as slow as I/O, then maybe the relatively tiny speed gain from avoiding a virtual call isn't really material to your use case.
Compile time vs. runtime. If you want compile-time binding, you need to use templates. If you don't know the types until runtime, you should use inheritance and virtual functions.
They are two completely different things. One is not an alternative to the other. The template function provides a general operation somefunc() which applies to a whole class of types, not just birds. The type of its parameter must be known at compile-time. The virtual method provides a runtime polymorphic operation specific to birds. The exact type of the parameter (this) need not be known at compile-time.
Since they provide different functionality, and are not in conflict with each other, it's rare that you ever need to decide between the two approaches. Decide what functionality you need, and the sensible approach will be obvious. It may even be a combination of the two.
(BTW, the term "duck typing" is misused here. Neither approach is duck typing. You should drop the phrase from your C++ lexicon.)
@John is right. If you have two covariant type parameters, you have no choice: you have to use templates. Object-oriented techniques provide run-time dispatch, but it is only available for types whose methods have at most one variant argument (the object).
Most interesting problems involve relations that are N-ary with N>1, therefore you will usually have no choice but to use templates. Please examine the standard library to see which technique is used most.
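For instance, a binary relation such as collision testing varies in both arguments, so single-object virtual dispatch cannot resolve it, while overloads selected at compile time can (a sketch with made-up types):
#include <iostream>

struct Circle { double r; };
struct Box    { double w, h; };

// Both parameters vary, so no single object can host the virtual dispatch;
// overload resolution picks the right combination statically.
bool collides(const Circle&, const Circle&) { return true; }  // illustrative bodies
bool collides(const Circle&, const Box&)    { return false; }
bool collides(const Box&,    const Circle&) { return false; }
bool collides(const Box&,    const Box&)    { return true; }

template <typename A, typename B>
void report(const A& a, const B& b) {
    std::cout << (collides(a, b) ? "hit" : "miss") << "\n";
}

int main() {
    report(Circle{1.0}, Box{2.0, 3.0});  // picks collides(Circle, Box) at compile time
}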

Template or abstract base class?

If I want to make a class adaptable, and make it possible to select different algorithms from the outside -- what is the best implementation in C++?
I see mainly two possibilities:
Use an abstract base class and pass concrete object in
Use a template
Here is a little example, implemented in the various versions:
Version 1: Abstract base class
class Brake {
public:
    virtual void stopCar() = 0;
};

class BrakeWithABS : public Brake {
public:
    void stopCar() { ... }
};

class Car {
    Brake* _brake;
public:
    Car(Brake* brake) : _brake(brake) { brake->stopCar(); }
};
Version 2a: Template
template<class Brake>
class Car {
    Brake brake;
public:
    Car() { brake.stopCar(); }
};
Version 2b: Template and private inheritance
template<class Brake>
class Car : private Brake {
    using Brake::stopCar;
public:
    Car() { stopCar(); }
};
Coming from Java, I am naturally inclined to always use version 1, but the templates versions seem to be preferred often, e.g. in STL code? If that's true, is it just because of memory efficiency etc (no inheritance, no virtual function calls)?
I realize there is not a big difference between version 2a and 2b, see C++ FAQ.
Can you comment on these possibilities?
This depends on your goals. You can use version 1 if you
Intend to replace brakes of a car (at runtime)
Intend to pass Car around to non-template functions
I would generally prefer version 1, using runtime polymorphism, because it is still flexible and keeps Car a single type: with templates, Car<Opel> is a different type than Car<Nissan>. If your goal is great performance while using the brakes frequently, I recommend the templated approach. By the way, this is called policy-based design: you provide a brake policy. Since you said you come from Java, you may not yet be too experienced with C++, so here is an example. One way of doing it:
template<typename Accelerator, typename Brakes>
class Car {
    Accelerator accelerator;
    Brakes brakes;
public:
    void brake() {
        brakes.brake();
    }
};
If you have lots of policies you can group them together into their own struct and pass that one instead, for example as a SpeedConfiguration collecting Accelerator, Brakes and some more (see the sketch after the streams note below). In my projects I try to keep a good deal of code template-free, allowing it to be compiled once into its own object files, without needing the code in headers, but still allowing polymorphism (via virtual functions). For example, you might want to keep common data and functions, which non-template code will probably call on many occasions, in a base class:
class VehicleBase {
protected:
    std::string model;
    std::string manufacturer;
    // ...
public:
    virtual ~VehicleBase() { }  // virtual: instances may be deleted through base pointers
    virtual bool checkHealth() = 0;
};
template<typename Accelerator, typename Brakes>
class Car : public VehicleBase {
    Accelerator accelerator;
    Brakes brakes;
    // ...
    virtual bool checkHealth() { ... }
};
Incidentally, that is also the approach that C++ streams use: std::ios_base contains flags and stuff that do not depend on the char type or traits like openmode, format flags and stuff, while std::basic_ios then is a class template that inherits it. This also reduces code bloat by sharing the code that is common to all instantiations of a class template.
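For completeness, a rough sketch of the SpeedConfiguration idea mentioned above (all names are illustrative):
#include <iostream>

struct SportAccelerator { void accelerate() { std::cout << "fast!\n"; } };
struct AbsBrakes        { void brake()      { std::cout << "abs stop\n"; } };

// Bundle the individual policies into one configuration type...
template<typename Accelerator, typename Brakes>
struct SpeedConfiguration {
    Accelerator accelerator;
    Brakes brakes;
};

// ...so Car needs only a single template parameter.
template<typename Config>
class Car {
    Config config;
public:
    void floorIt() { config.accelerator.accelerate(); }
    void stop()    { config.brakes.brake(); }
};

int main() {
    Car<SpeedConfiguration<SportAccelerator, AbsBrakes>> car;
    car.floorIt();
    car.stop();
}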
Private Inheritance?
Private inheritance should be avoided in general. It is only very rarely useful, and containment is a better idea in most cases. A common case where the opposite is true is when size is really crucial (a policy-based string class, for example): the Empty Base Class Optimization can apply when deriving from an empty policy class (one containing only functions).
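A minimal sketch of the size difference the Empty Base Class Optimization can make (exact sizes are implementation-dependent):
#include <iostream>

struct EmptyPolicy {  // stateless: functions only
    void apply() {}
};

struct AsMember { EmptyPolicy p; int i; };       // the member must occupy at least one byte
struct AsBase : private EmptyPolicy { int i; };  // the empty base may take no space at all

int main() {
    // On typical implementations the base version is smaller (e.g. 8 vs 4),
    // because the empty base is optimized away.
    std::cout << sizeof(AsMember) << " vs " << sizeof(AsBase) << "\n";
}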
Read Uses and abuses of Inheritance by Herb Sutter.
The rule of thumb is:
1) If the choice of the concrete type is made at compile time, prefer a template. It will be safer (compile time errors vs run time errors) and probably better optimized.
2) If the choice is made at run-time (i.e. as a result of a user's action) there is really no choice - use inheritance and virtual functions.
Other options:
Use the Visitor Pattern (let external code work on your class).
Externalize some part of your class, for example via iterators, so that generic iterator-based code can work on it. This works best if your object is a container of other objects.
See also the Strategy Pattern (there are c++ examples inside)
Templates are a way to let a class use a variable whose type you don't really care about. Inheritance is a way to define what a class is, based on its attributes. It's the "is-a" versus "has-a" question.
Most of your question has already been answered, but I wanted to elaborate on this bit:
Coming from Java, I am naturally inclined to always use version 1, but the templates versions seem to be preferred often, e.g. in STL code? If that's true, is it just because of memory efficiency etc (no inheritance, no virtual function calls)?
That's part of it. But another factor is the added type safety. When you treat a BrakeWithABS as a Brake, you lose type information. You no longer know that the object is actually a BrakeWithABS. If it is a template parameter, you have the exact type available, which in some cases may enable the compiler to perform better typechecking. Or it may be useful in ensuring that the correct overload of a function gets called (if stopCar() passes the Brake object to a second function, which may have a separate overload for BrakeWithABS, that overload won't be called if you'd used inheritance and your BrakeWithABS had been cast to a Brake).
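A small sketch of that overload point, reusing the Brake names (tune is a made-up helper):
#include <iostream>

struct Brake        { void stopCar() {} };
struct BrakeWithABS : Brake {};

void tune(Brake&)        { std::cout << "generic tune\n"; }
void tune(BrakeWithABS&) { std::cout << "ABS-aware tune\n"; }

void viaBase(Brake& b) { tune(b); }  // static type is Brake: always the generic overload

template<class B>
void viaTemplate(B& b) { tune(b); }  // exact type preserved: the best overload wins

int main() {
    BrakeWithABS abs_brake;
    viaBase(abs_brake);      // prints "generic tune"
    viaTemplate(abs_brake);  // prints "ABS-aware tune"
}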
Another factor is that it allows more flexibility. Why do all Brake implementations have to inherit from the same base class? Does the base class actually have anything to bring to the table? If I write a class which exposes the expected member functions, isn't that good enough to act as a brake? Often, explicitly using interfaces or abstract base classes constrain your code more than necessary.
(Note, I'm not saying templates should always be the preferred solution. There are other concerns that might affect this, ranging from compilation speed to "what programmers on my team are familiar with" or just "what I prefer". And sometimes, you need runtime polymorphism, in which case the template solution simply isn't possible)
This answer is more or less correct. When you want something parametrized at compile time, you should prefer templates. When you want something parametrized at runtime, you should prefer virtual functions being overridden.
However, using templates does not preclude you from doing both (making the template version more flexible):
struct Brake {
    virtual void stopCar() = 0;
};

struct BrakeChooser {
    BrakeChooser(Brake* brake) : brake(brake) {}
    void stopCar() { brake->stopCar(); }

    Brake* brake;
};

template<class Brake>
struct Car
{
    Car(Brake brake = Brake()) : brake(brake) {}
    void slamTheBrakePedal() { brake.stopCar(); }

    Brake brake;
};

// instantiation
Car<BrakeChooser> car(BrakeChooser(new AntiLockBrakes()));
That being said, I would probably NOT use templates for this... But it's really just personal taste.
An abstract base class has the overhead of virtual calls, but it has the advantage that every derived class really is-a base class. Not so when you use templates: Car<Brake> and Car<BrakeWithABS> are unrelated to each other, and you'll have to either dynamic_cast and check for null, or make all the code that deals with Car a template.
Use an interface if you are supposed to support different Brake classes and their hierarchy at once:
Car( new Brake() )
Car( new BrakeABC() )
Car( new CoolBrake() )
And you don't know this information at compile time.
If you know which Brake you are going to use, 2b is the right choice for you to specify different Car classes. Brake in this case will be your car's "Strategy", and you can set a default one.
I wouldn't use 2a. Instead, you can add static methods to Brake and call them without an instance.
Personally I would always prefer to use interfaces over templates, for several reasons:
Template compilation & linking errors are sometimes cryptic;
It is hard to debug code that is based on templates (at least in the Visual Studio IDE);
Templates can make your binaries bigger;
Templates require you to put all their code in the header file, which makes a template class a bit harder to understand;
Templates are hard to maintain for novice programmers.
I only use templates when virtual tables create some kind of overhead.
Of course, this is only my own opinion.