Why are template mixins in C++ not more of a mainstay? - c++

I use template mixins in C++ a lot, but I'm wondering why the technique isn't used more. It seems like the ultimate in reuse. This mix of power and efficiency is one of the reasons I really love C++ and can't see myself moving to a JIT language.
This article: http://www.thinkbottomup.com.au/site/blog/C%20%20_Mixins_-_Reuse_through_inheritance_is_good is a good backgrounder if you don't know what they are, and puts the case so clearly in terms of reuse and performance.

The problem with mixins is... construction.
class Base1 { public: Base1(Dummy volatile&, int); };
class Base2 { public: Base2(Special const&, Special const&); };
And now, my super mixin:
template <typename T>
struct Mixin: T {};
Do you notice the issue here? How the hell am I supposed to pass the arguments to the constructor of the base class? What kind of constructor should Mixin provide?
It's a hard problem, and it was not solved until C++11, which added perfect forwarding to the language.
// std::forward is in <utility>
template <typename T>
struct Mixin: T {
    template <typename... Args>
    explicit Mixin(Args&&... args): T(std::forward<Args>(args)...) {}
};
Note: double checks are welcome
So now we can really use mixins (see the usage sketch below)... we just have to change people's habits :)
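For illustration, a hedged usage sketch reusing the Base1/Base2 declarations above (assuming Dummy and Special are defined types; they appear only as parameter types in the original):
// Dummy and Special are assumptions, defined elsewhere
Dummy volatile d;
Special s1, s2;

Mixin<Base1> m1(d, 42);   // forwards (Dummy volatile&, int) to Base1's constructor
Mixin<Base2> m2(s1, s2);  // forwards (Special const&, Special const&) to Base2's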
Of course, whether we actually want to is a totally different subject.
One of the issues with mixins (which the article you reference happily skips over) is the loss of dependency isolation... and the fact that users of LoggingTask are then bound to write template methods. In very large code bases, more attention is paid to dependencies than to performance, because dependencies burn human cycles while performance only burns CPU cycles... and those are usually cheaper.

Templates require their implementation to be visible in the translation unit, not just at link time (C++11 mitigates this if you only use a pointer or reference to instantiations). This is a major issue for low-level code in enterprise environments: a change to the implementation triggers (automatically or not) recompilation of massive numbers of libraries and clients, rather than just relinking.
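A minimal sketch of that C++11 mitigation, assuming it refers to extern template (file and class names are mine):
// widget.h
template <class T> class Widget { /* ... */ };
extern template class Widget<int>;  // tells every client TU: don't instantiate this here

// widget.cpp
template class Widget<int>;         // the single, explicitly compiled instantiation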
Also, each template instantiation creates a distinct type, which means functions intended to work on any of the instantiations have to be able to accept them all - either by being templates themselves, or via a handover to runtime polymorphism (which is often easy enough to do: you just need an abstract base class expressing the set of supported operations, and some "get me an accessor" function that returns a derived object holding a pointer to the template instantiation, with the corresponding entries in the virtual dispatch table); a sketch follows.
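A hedged sketch of that handover (all names here are assumptions, not from the answer):
// abstract base: the set of supported operations
struct TaskView {
    virtual ~TaskView() = default;
    virtual void run() = 0;
};

// one thin adapter per template instantiation
template <class TaskT>
struct TaskViewImpl : TaskView {
    TaskT* task;
    explicit TaskViewImpl(TaskT* t) : task(t) {}
    void run() override { task->run(); }
};

// the "get me an accessor" function
template <class TaskT>
TaskViewImpl<TaskT> make_view(TaskT& t) { return TaskViewImpl<TaskT>(&t); }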
Anyway, these issues are typically manageable, but the techniques for managing the coupling, dependencies and interfaces involved are far less publicised, understood and readily available than the simple mixin technique itself. The same is true of templates and policy classes, BTW.

Related

Should I use Concept or Interface? (C++) [duplicate]

I have two solutions for the same problem - making some kind of callbacks from one "controller" to the object it uses - and I don't know which to choose.
Solution 1: Use interfaces
struct AInterface
{
    virtual void f() = 0;
};

struct A : public AInterface
{
    void f() { std::cout << "A::f()" << std::endl; }
};

struct UseAInterface
{
    UseAInterface(AInterface* a) : _a(a) {}
    void f() { _a->f(); }

    AInterface* _a;
};
Solution 2: Use templates
struct A
{
    void f() { std::cout << "A::f()" << std::endl; }
};

template<class T>
struct UseA
{
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }

    T* _a;
};
This is just a simple sample to illustrate my problem. In the real world the interface will have several functions, and one class may (and will!) implement multiple interfaces.
The code will not be used as a library for external projects, and I don't have to hide the template implementation - I mention this because the first case would be better if I needed to hide the "controller" implementation.
Can you please tell me the advantages/disadvantages for each case and what is better to use?
In my opinion performance should be ignored (well, not ignored, but micro-optimizations should be deferred) until you have a reason for them. Without some hard requirements (this is in a tight loop that takes most of the CPU, the actual implementations of the interface member functions are very small...) it would be very hard, if not impossible, to notice the difference.
So I would focus on a higher design level. Does it make sense that all types used in UseA share a common base? Are they really related? Is there a clear is-a relationship between the types? Then the OO approach might work. Are they unrelated? That is, do they share some traits but there is no direct is-a relationship that you can model? Go for the template approach.
The main advantage of the template is that you can use types that don't conform to a particular and exact inheritance hierarchy. For example, you can store anything in a vector that is copy-constructible (move-constructible in C++11), but an int and a Car are not really related in any way. This way, you reduce the coupling between the different types used with your UseA type.
One of the disadvantages of templates is that each template instantiation is a different type, unrelated to the other instantiations generated from the same base template. This means that you cannot store UseA<A> and UseA<B> in the same container, that there will be code bloat (UseA<int>::foo and UseA<double>::foo are both generated in the binary), and that compile times grow (even ignoring the extra functions, two translation units that each use UseA<int>::foo will both generate the same function, and the linker will have to discard one of the copies).
Regarding the performance that other answers mention, they are somewhat right, but most miss the important point. The main advantage of choosing templates over dynamic dispatch is not the avoided overhead of the dispatch itself, but the fact that small functions can be inlined by the compiler (if the function definition is visible).
If the functions are not inlined, then unless a function takes only a very few cycles to execute, the overall cost of the function will dwarf the extra cost of dynamic dispatch (i.e. the extra indirection in the call and the possible offset of the this pointer in the case of multiple/virtual inheritance). If the functions do some actual work, and/or cannot be inlined, the two will have roughly the same performance.
Even in the few cases where the difference in performance of one approach over the other could be measurable (say the functions take only two cycles, so dispatch doubles the cost of each call), if this code is part of the 80% of the code that takes less than 20% of the CPU time, and this particular piece of code takes, say, 1% of the CPU (which is a huge amount given the premise that for the difference to be noticeable the function itself must take just one or two cycles!), then you are talking about roughly 36 seconds out of a 1-hour program run. Checking the premise again: on a 2GHz CPU, 1% of the time means the function would have to be called over 10 million times per second.
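Laying that arithmetic out explicitly (a back-of-the-envelope sketch; all the numbers are the assumptions stated above):
constexpr double cpu_hz          = 2e9;  // 2 GHz
constexpr double busy_fraction   = 0.01; // this code uses 1% of CPU time
constexpr double cycles_per_call = 2;    // premise: the function takes ~2 cycles
constexpr double calls_per_sec   = cpu_hz * busy_fraction / cycles_per_call; // 1e7, i.e. 10 million
constexpr double run_seconds     = 3600; // a 1-hour run
constexpr double extra_seconds   = run_seconds * busy_fraction; // doubling the cost adds ~36 s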
All of the above is hand-waving, and it errs in the opposite direction from the other answers (i.e. there are imprecisions here that could make the difference seem smaller than it really is), but reality is closer to this than to the blanket answer that dynamic dispatch will make your code slower.
There are pros and cons to each. From The C++ Programming Language:
Prefer a template over derived classes when run-time efficiency is at a premium.
Prefer derived classes over a template if adding new variants without recompilation is important.
Prefer a template over derived classes when no common base can be defined.
Prefer a template over derived classes when built-in types and structures with compatibility constraints are important.
However, templates have their drawbacks:
Code that uses OO interfaces can be hidden in .cpp/.cc files, whereas templates force the whole implementation into the header file;
Templates cause code bloat;
OO interfaces are explicit, whereas the requirements on template parameters are implicit and exist only in the developer's head;
Heavy use of templates hurts compilation speed.
Which to use depends on your situation and somewhat on your preferences. Templated code can produce some obtuse compilation errors, which has led to tools such as STL Error decrypt. Hopefully, concepts will be implemented soon.
The template case will have slightly better performance, because no virtual call is involved. If the callback is used extremely frequently, favour the template solution. Note that "extremely frequently" doesn't really kick in until thousands of calls per second are involved, probably even more.
On the other hand, the template has to be in a header file, meaning each change to it will force recompiling all sites which call it, unlike in the interface scenario, where the implementation could be in a .cpp and be the only file needing recompilation.
You could consider an interface like a contract. Any class deriving from it must implement the methods of the interface.
Templates on the other hand implicitly have some constraints. For example, your T template parameter must have a method f. These implicit requirements should be documented carefully; error messages involving templates can be quite confusing.
Boost Concept Check can be used for concept checking, which makes the implicit template requirements easier to understand.
The choice you describe is the choice between static polymorphism versus dynamic polymorphism. You'll find many discussions of this topic if you search for that.
It's hard to give a specific answer to such a general question. In general, static polymorphism may give you better performance, but the lack of Concepts in the C++11 standard also means that you can get interesting compiler error messages when a class does not model the required concept.
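For what it's worth, concepts did eventually land in C++20, well after this question; a minimal sketch of how they make the requirement explicit (the concept name is mine):
template <class T>
concept HasF = requires(T t) { t.f(); };

template <HasF T>   // fails with a readable diagnostic if T has no f()
struct UseA {
    UseA(T* a) : _a(a) {}
    void f() { _a->f(); }
    T* _a;
};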
I would go with the template version. If you think about this in terms of performance then it makes sense.
Virtual interface - Using virtual means that the method to call is decided at runtime: the call has to go through the object's vtable to locate the method's address in memory.
Templates - You get static binding. The location of the method is known at compile time, so there is no table lookup (and the call can often be inlined).
If you are interested in performance then templates are almost always the choice to go with.
How about option 3?
// requires C++17 (auto non-type template parameters);
// needs <memory>, <type_traits> and <utility>
template<auto* operation, class Sig = void()>
struct can_do;

template<auto* operation, class R, class... Args>
struct can_do<operation, R(Args...)> {
    void* pstate = 0;
    R (*poperation)(void*, Args&&...) = 0;

    template<class T,
        std::enable_if_t<std::is_convertible_v<
            std::invoke_result_t<decltype(*operation), T&&, Args&&...>,
            R>,
        bool> = true,
        std::enable_if_t<!std::is_same_v<can_do, std::decay_t<T>>, bool> = true
    >
    can_do(T&& t):
        pstate((void*)std::addressof(t)),
        poperation(+[](void* pstate, Args&&... args) -> R {
            return (*operation)(std::forward<T>(*static_cast<std::remove_reference_t<T>*>(pstate)),
                                std::forward<Args>(args)...);
        })
    {}

    can_do(can_do const&) = default;
    can_do(can_do&&) = default;
    can_do& operator=(can_do const&) = default;
    can_do& operator=(can_do&&) = default;
    ~can_do() = default;

    auto operator->*(decltype(operation)) const {
        return [this](auto&&... args) -> R {
            return poperation(pstate, decltype(args)(args)...);
        };
    }
};
now you can do
auto invoke_f = [](auto&& elem) -> void { elem.f(); };

struct UseA
{
    UseA(can_do<&invoke_f> a) : m_a(a) {}
    void f() { (m_a->*&invoke_f)(); }

    can_do<&invoke_f> m_a;
};
Test code:
struct A {
    void f() { std::cout << "hello world"; }
};

struct A2 {
    void f() { std::cout << "goodbye"; }
};

A a;
UseA b(a);
b.f();

A2 a2;
UseA b2(a2);
b2.f();
Having a richer multi-operation interface on can_do is left as an exercise.
UseA is not a template. A and A2 have no common base interface class.
Yet it works.

Mechanics of multiple inheritance compared to templates wrt building flexible designs

This is a narrower version of the question put on hold due to being too broad.
On pages 6-7 of Modern C++ Design, Andrei Alexandrescu lists three ways in which multiple inheritance is weaker than templates with respect to building flexible designs. In particular, he states that the mechanics provided by multiple inheritance are poor (the text in square brackets and the formatting are mine, as per my understanding of the context):
In such a setting [i.e. multiple inheritance], [to build a flexible SmartPtr,] the user would build a multithreaded, reference-counted smart pointer class by inheriting some BaseSmartPtr class and two classes: MultiThreaded and RefCounted. Any experienced class designer knows that such a naïve design does not work.
...
Mechanics. There is no boilerplate code to assemble the inherited components in a controlled manner. The only tool that combines BaseSmartPtr, MultiThreaded, and RefCounted is a language mechanism called multiple inheritance. The language applies simple superposition in combining the base classes and establishes a set of simple rules for accessing their members. This is unacceptable except for the simplest cases. Most of the time, you need to orchestrate the workings of the inherited classes carefully to obtain the desired behavior.
When using multiple inheritance, one can achieve some pretty flexible orchestration by writing member functions that call member functions of several base classes. So, what is the orchestration that is missing from multiple inheritance and present in templates?
Please note that not every disadvantage of multiple inheritance compared to templates counts as an answer here - only a disadvantage in what Andrei calls mechanics in the above quote. In particular, please make sure that you are not talking about one of the other two weaknesses of multiple inheritance listed by Andrei:
Type information. The base classes do not have enough type information to carry on their tasks. For example, imagine you try to implement deep copy for your smart pointer class by deriving from a DeepCopy base class. But what interface would DeepCopy have? It must create objects of a type it doesn't know yet.
State manipulation. Various behavioral aspects implemented with base classes must manipulate the same state. This means that they must use virtual inheritance to inherit a base class that holds the state. This complicates the design and makes it more rigid because the premise was that user classes inherit library classes, not vice versa.
I think that what Alexandrescu is referring to in the "Mechanics" paragraph is expounded upon in the rest of the chapter. He's referring to how much more flexible policy-based class design is than inheritance-based class design, particularly with respect to the various ways in which policies can be implemented and combined - this in comparison to the single implementation and combination allowed through multiple inheritance.
For instance, when discussing the Creator policy he points out that the policy requires only a Create() method that returns a pointer to the class being created, but doesn't specify that it be virtual or non-static. And he shows several ways in which each policy could be created: a straightforward policy class such as (from section 1.5, skipping the MallocCreator and PrototypeCreator policies)
template<class T>
struct OpNewCreator
{
static T* Create()
{
return new T;
}
};
...
> //Library code
> template <class CreationPolicy>
> class WidgetManager:public CreationPolicy
> {
> ...
> };
...
// Application Code
typedef WidgetManager<OpNewCreator<Widget> > MyWidgetMgr;
or it could be implemented with template template parameters (section 1.5.1) as
// Library code
template <template <class> class CreationPolicy>
class WidgetManager : public CreationPolicy<Widget>
{
    ...
};
// Application code
typedef WidgetManager<OpNewCreator> MyWidgetMgr;
or (section 1.5.2) - implemented as a template member function:
struct OpNewCreator
{
    template <class T>
    static T* Create()
    {
        return new T;
    }
};
These are examples of the flexible mechanics available in a template-based policy-class solution and not available with multiple inheritance. These particular examples are maybe not all that exciting, probably because they have to be short and simple for pedagogical reasons.
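To make "orchestration" a bit more concrete, here is a hedged sketch (the registry bookkeeping is my invention, not Alexandrescu's): the host class decides how and when the policy's pieces are used, something plain multiple inheritance cannot express.
#include <vector>

// assumes Widget is declared, as in the book's examples
template <class CreationPolicy>
class WidgetManager : public CreationPolicy {
    std::vector<Widget*> registry;
public:
    Widget* createTracked() {
        Widget* w = CreationPolicy::Create(); // the policy supplies creation...
        registry.push_back(w);                // ...the host wraps it in its own logic
        return w;
    }
};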

Inheritance & virtual functions Vs Generic Programming

I need to understand whether inheritance and virtual functions are really unnecessary in C++, and whether one can achieve everything using generic programming. This idea came from Alexander Stepanov, and the lecture I was watching is Alexander Stepanov: STL and Its Design Principles.
I always like to think of templates and inheritance as two orthogonal concepts, in the very literal sense: To me, inheritance goes "vertically", starting with a base class at the top and going "down" to more and more derived classes. Every (publicly) derived class is a base class in terms of its interface: A poodle is a dog is an animal.
On the other hand, templates go "horizontal": Each instance of a template has the same formal code content, but two distinct instances are entirely separate, unrelated pieces that run in "parallel" and don't see each other. Sorting an array of integers is formally the same as sorting an array of floats, but an array of integers is not at all related to an array of floats.
Since these two concepts are entirely orthogonal, their application is, too. Sure, you can contrive situations in which you could replace one by another, but when done idiomatically, both template (generic) programming and inheritance (polymorphic) programming are independent techniques that both have their place.
Inheritance is about making an abstract concept more and more concrete by adding details. Generic programming is essentially code generation.
As my favourite example, let me mention how the two technologies come together beautifully in a popular implementation of type erasure: a single handler class holds a private polymorphic pointer-to-base of an abstract container class, and the concrete, derived container class is determined by a templated, type-deducing constructor. We use template code generation to create an arbitrary family of derived classes:
// internal helper base
class TEBase { /* ... */ };

// internal helper derived TEMPLATE class (unbounded family!)
template <typename T> class TEImpl : public TEBase { /* ... */ };

// single public interface class
class TE
{
    TEBase* impl;
public:
    // "infinitely many" constructors:
    template <typename T> TE(const T& x) : impl(new TEImpl<T>(x)) { }
    // ...
};
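A fleshed-out version of that sketch, under the assumption that the erased operation is printing (the member names, and the use of unique_ptr for brevity, are mine):
#include <iostream>
#include <memory>

class TEBase {
public:
    virtual ~TEBase() = default;
    virtual void print(std::ostream& os) const = 0;
};

template <typename T>
class TEImpl : public TEBase {
    T value;
public:
    explicit TEImpl(const T& x) : value(x) {}
    void print(std::ostream& os) const override { os << value; }
};

class TE {
    std::unique_ptr<TEBase> impl;
public:
    template <typename T> TE(const T& x) : impl(new TEImpl<T>(x)) {}
    void print(std::ostream& os) const { impl->print(os); }
};

// TE holds an int or a std::string behind the same static type:
// TE a(42), b(std::string("hi")); a.print(std::cout); b.print(std::cout);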
They serve different purposes. Generic programming (at least in C++) is about compile-time polymorphism; virtual functions are about run-time polymorphism.
If the choice of the concrete type depends on the user's input, you really need runtime polymorphism - templates won't help you.
Polymorphism (i.e. dynamic binding) is crucial for decisions that are based on runtime data. Generic data structures are great but they are limited.
Example: Consider an event handler for a discrete event simulator: it is very cheap (in terms of programming effort) to implement this with a pure virtual function, but it is verbose and quite inflexible if done purely with templated classes.
As a rule of thumb: if you find yourself switching (or if-else-ing) on the value of some input object and performing different actions depending on its value, there may well be a better (more maintainable) solution using dynamic binding, as sketched below.
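A minimal sketch of that rule of thumb (the event names are made up):
struct Event {
    virtual ~Event() = default;
    virtual void handle() = 0;   // replaces switch (event.kind) { ... }
};
struct Tick  : Event { void handle() override { /* advance the clock */ } };
struct Alarm : Event { void handle() override { /* fire the callback */ } };

void process(Event& e) { e.handle(); } // no switch; new event types need no edits here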
Some time ago I thought about a similar question, and I can only dream of giving you as great an answer as the one I received. Perhaps this is helpful: interface paradigm performance (dynamic binding vs. generic programming)
This seems like a very academic question. As with most things in life there are many ways to do things, and in the case of C++ you have a number of ways to solve it. There is no need to take an XOR attitude.
In the ideal world, you would use templates for static polymorphism to give you the best possible performance in instances where the type is not determined by user input.
The reality is that templates force most of your code into headers and this has the consequence of exploding your compile times.
I have done some heavy generic programming leveraging static polymorphism to implement a generic RPC library (https://github.com/bytemaster/mace, rpc_static_poly branch). In this instance the protocol (JSON-RPC), the transport (TCP/UDP/stream/etc), and the types are all known at compile time, so there is no reason to do a vtable dispatch... or is there?
When I run the code through the preprocessor for a single .cpp it results in 250,000 lines and takes 30+ seconds to compile a single object file. I implemented 'identical' functionality in Java and C# and it compiles in about a second.
Almost every STL or Boost header you include adds thousands or tens of thousands of lines of code that must be processed per object file, most of it redundant.
Do compile times matter? In most cases they have a more significant impact on the final product than 'maximally optimized vtable elimination'. The reason is that every bug requires a 'try fix, compile, test' cycle, and if each cycle takes 30+ seconds, development slows to a crawl (note the motivation for Google's Go language).
After spending a few days with Java and C#, I decided that I needed to 'rethink' my approach to C++. There is no reason a C++ program should compile much more slowly than the underlying C that would implement the same functionality.
I now opt for runtime polymorphism unless profiling shows that the bottleneck is in vtable dispatch. I use templates to provide 'just-in-time' polymorphism and a type-safe interface on top of an underlying object that deals in void* or an abstract base class. This way users need not derive from my 'interfaces' and still have the 'feel' of generic programming, but they get the benefit of fast compile times. If performance becomes an issue, the generic code can be replaced with static polymorphism.
The results are dramatic: compile times have fallen from 30+ seconds to about a second, and the post-preprocessor source is now a couple thousand lines instead of 250,000.
On the other side of the discussion, I was developing a library of 'drivers' for a set of similar but slightly different embedded devices. In this instance the embedded device had little room for 'extra code' and no use for vtable dispatch. With C, our only options were separate object files or runtime 'polymorphism' via function pointers. Using generic programming and static polymorphism, we were able to create maintainable software that ran faster than anything we could produce in C.

Designing a C++ library

I am in the process of designing a C++ static library.
I want to make the classes generic/configurable so that they can support a number of data types (and I don't want to write any data-type-specific code in my library).
So I have templatized the classes.
But since the C++ "export" template feature is not supported by the compiler I am currently using, I am forced to provide the implementation of the classes in the header file.
I don't want to expose the implementation details of my classes to the client code that is going to use my library.
Can you please suggest some design alternatives for the above problem?
Prior to templates, type-agnostic C++ code had to be written using runtime polymorphism. Now that we have templates, you can combine the two techniques.
For example, suppose you wanted to store values of any type, for later retrieval. Without templates, you'd have to do this:
struct PrintableThing
{
    // declare abstract operations needed on the type
    virtual void print(std::ostream& os) = 0;

    // polymorphic base class needs virtual destructor
    virtual ~PrintableThing() {}
};

class PrintableContainer
{
    PrintableThing* printableThing;

public:
    // various other secret stuff
    void store(PrintableThing* p);
};
The user of this library would have to write their own derived version of PrintableThing by hand to wrap around their own data and implement the print function on it.
But you can wrap a template-based layer around such a system:
template <class T>
struct PrintableType : PrintableThing
{
    T instance;

    virtual void print(std::ostream& os)
    { os << instance; }

    PrintableType(const T& i)
        : instance(i) {}
};
And also add a method in the header of the library, in the declaration of the PrintableContainer class:
template <class T>
void store(const T& p)
{
    store(new PrintableType<T>(p));
}
This acts as the bridge between templates and runtime polymorphism: it binds at compile time to the << operator to implement print, and to the copy constructor (and of course it also forwards to the nested instance's destructor).
In this way, you can write a library entirely based on runtime polymorphism, with the implementation capable of being hidden away in the source of the library, but with a little bit of template "sugar" added to make it convenient to use.
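Possible usage of the resulting front end (hypothetical, building on the classes above):
PrintableContainer box;
box.store(42);                   // compile time: wraps the int in PrintableType<int>
box.store(std::string("hello")); // runtime: both live behind a PrintableThing*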
Whether this is worth the trouble will depend on your needs. It has a purely technical benefit in that runtime polymorphism is sometimes exactly what you need, in itself. On the downside, you will undoubtedly reduce the compiler's ability to inline effectively. On the upside, your compile times and binary code bloat may go down.
Examples are std::tr1::function and boost::any, which have a very clean, modern C++ template-based front end but work behind the scenes as runtime polymorphic containers.
I've got some news for you, buddy. Even with export, you'd still have to release all of your template code - export just means you don't have to put the definitions in a header file. You're pretty much stuck. The only technique you can use is to split off some functions that are non-templates and put them into a different class. But that's ugly, and usually involves void* and placement new and delete. That's just the nature of the beast.
You can try to obfuscate your code - but you have little choice in C++03 aside from including template code in header files.
Vandevoorde does describe another technique in his book: explicit instantiation - but that entails explicitly instantiating all possible useful combinations.
But for the most comprehensive review of this topic read chapter 6 from C++ Templates: The Complete Guide.
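A minimal sketch of the explicit instantiation technique (the file and class names are mine): the definitions stay in the .cpp, and clients can only use the instantiations you chose to ship.
// stack.h - declarations only
#include <string>
template <class T> class Stack {
public:
    void push(const T&);
};

// stack.cpp - hidden definitions plus the shipped instantiations
template <class T> void Stack<T>::push(const T&) { /* secret implementation */ }
template class Stack<int>;
template class Stack<std::string>;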
Edit (in response to your comment): You have two options for writing generic code without using templates:
1) Preprocessor macros - still require header files
2) void* - yuck, incredibly unsafe
So no, I do not recommend avoiding templates for solving the problems that templates were specifically (albeit somewhat imperfectly) designed for.
One problem with templates is that they require compiled code. You never know how the end user will specialize/instantiate your templates, so your DLL would have to contain all possible template specializations in compiled form. This means that to export a pair<X,Y> template you would have to force the compilation of pair<int,float>, pair<int,string>, pair<string,HWND> and so on... to infinity...
I guess a more practical solution for you would be to un-template the private/hidden code. You can create special internal functions that are compiled only for a single template specialization. In the following example the internal_foo function is never called from a MyClass where A is not int.
template<class A>
class MyClass
{
    int a;
    float b;
    A c;

    int foo(string param1)
    {
        return ((MyClass<int>*)this)->internal_foo(param1);
    }

    int internal_foo(string param1); // only called on MyClass<int> instances
};
template<>
__declspec(dllexport) int MyClass<int>::internal_foo(string param1)
{
    ... secret code ...
}
This of course is a hack. When using it you should be extra careful not to use the member variable c, because it is not always an integer (even though internal_foo thinks it is). And you can't even guard yourself with assertions. C++ allows you to shoot yourself in the foot, and gives you no indication about it until it's too late.
PS. I haven't tested this code, so it might require some fine-tuning. I'm not sure, for example, whether __declspec(dllimport) is needed for the compiler to find the internal_foo function in the DLL...
With templates you cannot avoid shipping the code (unless your code only works with a fixed set of types, in which case you can explicitly instantiate). Where I work we have a library that must work on POD types (CORBA/DDS/HLA data definitions), so in the end we ship templates.
The templates delegate most of the work to non-templated code that is shipped in binary form, as sketched below. In some cases work must be done directly on the types that were passed to the template and thus cannot be delegated to non-templated code, so it is not a perfect solution, but it hides a large enough part of the code to make our CEO happy (the people in charge of the project would gladly provide all the code in templates).
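A hedged sketch of that delegation pattern (the names are mine, not from our library): the header exposes only a thin, type-safe template shim, while the real work ships as compiled, non-template code.
#include <cstddef>
#include <type_traits>

void serialize_bytes(const void* data, std::size_t size); // defined in the shipped binary

template <class Pod>
void serialize(const Pod& value) {
    static_assert(std::is_trivially_copyable<Pod>::value, "POD-like types only");
    serialize_bytes(&value, sizeof value); // all type-specific knowledge stays here
}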
As Neil points out in a comment on the question, in the vast majority of cases there is nothing magical in the code that could not be rewritten by others.

Template or abstract base class?

If I want to make a class adaptable, and make it possible to select different algorithms from the outside -- what is the best implementation in C++?
I see mainly two possibilities:
Use an abstract base class and pass concrete object in
Use a template
Here is a little example, implemented in the various versions:
Version 1: Abstract base class
class Brake {
public:
    virtual void stopCar() = 0;
};

class BrakeWithABS : public Brake {
public:
    void stopCar() { ... }
};

class Car {
    Brake* _brake;
public:
    Car(Brake* brake) : _brake(brake) { brake->stopCar(); }
};
Version 2a: Template
template<class Brake>
class Car {
    Brake brake;
public:
    Car() { brake.stopCar(); }
};
Version 2b: Template and private inheritance
template<class Brake>
class Car : private Brake {
    using Brake::stopCar;
public:
    Car() { stopCar(); }
};
Coming from Java, I am naturally inclined to always use version 1, but the template versions often seem to be preferred, e.g. in STL code? If that's true, is it just because of memory efficiency etc (no inheritance, no virtual function calls)?
I realize there is not a big difference between version 2a and 2b, see C++ FAQ.
Can you comment on these possibilities?
This depends on your goals. You can use version 1 if you
Intend to replace brakes of a car (at runtime)
Intend to pass Car around to non-template functions
I would generally prefer version 1, using runtime polymorphism, because it is still flexible and lets Car remain a single type: with templates, Car<Opel> is a different type from Car<Nissan>. If your goal is top performance while using the brakes frequently, I recommend the templated approach. By the way, this is called policy-based design: you provide a brake policy. Since you said you come from Java, you may not yet be too experienced with C++, so here is an example. One way of doing it:
template<typename Accelerator, typename Brakes>
class Car {
    Accelerator accelerator;
    Brakes brakes;
public:
    void brake() {
        brakes.brake();
    }
};
If you have lots of policies you can group them into their own struct and pass that, for example as a SpeedConfiguration collecting Accelerator, Brakes and more. In my projects I try to keep a good deal of code template-free, so it can be compiled once into its own object files without needing its code in headers, while still allowing polymorphism (via virtual functions). For example, you might want to keep common data and functions that non-template code will probably call on many occasions in a base class:
class VehicleBase {
protected:
    std::string model;
    std::string manufacturer;
    // ...
public:
    virtual ~VehicleBase() { }
    virtual bool checkHealth() = 0;
};

template<typename Accelerator, typename Brakes>
class Car : public VehicleBase {
    Accelerator accelerator;
    Brakes brakes;
    // ...
    virtual bool checkHealth() { ... }
};
Incidentally, that is also the approach that C++ streams use: std::ios_base contains members that do not depend on the char type or traits (like openmode and format flags), while std::basic_ios is a class template that inherits from it. This also reduces code bloat by sharing the code that is common to all instantiations of a class template.
Private Inheritance?
Private inheritance should be avoided in general. It is only very rarely useful; containment is a better idea in most cases. A common case where the opposite is true is when size is really crucial (a policy-based string class, for example): the Empty Base Class Optimization can apply when deriving from an empty policy class (one containing only functions).
Read Uses and abuses of Inheritance by Herb Sutter.
The rule of thumb is:
1) If the choice of the concrete type is made at compile time, prefer a template. It will be safer (compile time errors vs run time errors) and probably better optimized.
2) If the choice is made at run-time (i.e. as a result of a user's action) there is really no choice - use inheritance and virtual functions.
Other options:
Use the Visitor Pattern (let external code work on your class).
Externalize some part of your class, for example via iterators, so that generic iterator-based code can work on it. This works best if your object is a container of other objects.
See also the Strategy Pattern (there are c++ examples inside)
Templates are a way to let a class use a variable whose type you don't really care about. Inheritance is a way to define what a class is, based on its attributes. It's the "is-a" versus "has-a" question.
Most of your question has already been answered, but I wanted to elaborate on this bit:
Coming from Java, I am naturally inclined to always use version 1, but the template versions seem to be preferred often, e.g. in STL code? If that's true, is it just because of memory efficiency etc (no inheritance, no virtual function calls)?
That's part of it. But another factor is the added type safety. When you treat a BrakeWithABS as a Brake, you lose type information. You no longer know that the object is actually a BrakeWithABS. If it is a template parameter, you have the exact type available, which in some cases may enable the compiler to perform better typechecking. Or it may be useful in ensuring that the correct overload of a function gets called (if stopCar() passes the Brake object to a second function which has a separate overload for BrakeWithABS, that overload won't be called if you'd used inheritance and your BrakeWithABS had been cast to a Brake). See the sketch below.
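A small sketch of that overload point, reusing Brake/BrakeWithABS from the question (the log functions are mine):
#include <cstdio>

void log(const Brake&)        { std::puts("some brake"); }
void log(const BrakeWithABS&) { std::puts("ABS brake"); }

template <class B>
void stopWithTemplate(B& b) { log(b); } // exact type preserved: picks the ABS overload

void stopWithBase(Brake& b) { log(b); } // type erased: always the generic overload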
Another factor is that it allows more flexibility. Why do all Brake implementations have to inherit from the same base class? Does the base class actually have anything to bring to the table? If I write a class which exposes the expected member functions, isn't that good enough to act as a brake? Often, explicitly using interfaces or abstract base classes constrains your code more than necessary.
(Note, I'm not saying templates should always be the preferred solution. There are other concerns that might affect this, ranging from compilation speed to "what programmers on my team are familiar with" or just "what I prefer". And sometimes, you need runtime polymorphism, in which case the template solution simply isn't possible)
This answer is more or less correct. When you want something parameterized at compile time, you should prefer templates. When you want something parameterized at runtime, you should prefer virtual functions that get overridden.
However, using templates does not preclude you from doing both (making the template version more flexible):
struct Brake {
    virtual void stopCar() = 0;
};

struct BrakeChooser {
    BrakeChooser(Brake* brake) : brake(brake) {}
    void stopCar() { brake->stopCar(); }

    Brake* brake;
};

template<class Brake>
struct Car
{
    Car(Brake brake = Brake()) : brake(brake) {}
    void slamTheBrakePedal() { brake.stopCar(); }

    Brake brake;
};
// instantiation
Car<BrakeChooser> car(BrakeChooser(new AntiLockBrakes()));
That being said, I would probably NOT use templates for this... but it's really just personal taste.
An abstract base class has the overhead of virtual calls, but it has the advantage that all derived classes really are usable as the base class. Not so when you use templates - Car<Brake> and Car<BrakeWithABS> are unrelated to each other, and you'll have to either dynamic_cast and check for null, or write templates for all the code that deals with Car.
Use an interface if you need to support different Brake classes and their hierarchy at once:
Car( new Brake() )
Car( new BrakeABC() )
Car( new CoolBrake() )
and you don't know this information at compile time.
If you know which Brake you are going to use, 2b is the right choice, letting you specify different Car classes. Brake in this case will be your car's "Strategy", and you can set a default one.
I wouldn't use 2a. Instead you can add static methods to Brake and call them without an instance.
Personally I would always prefer to use interfaces over templates, for several reasons:
Template compile and link errors are sometimes cryptic;
It is hard to debug code that is based on templates (at least in the Visual Studio IDE);
Templates can make your binaries bigger;
Templates require you to put all their code in the header file, which makes the template class a bit harder to understand;
Templates are hard for novice programmers to maintain.
I only use templates when virtual tables create some kind of overhead.
Of course, this is only my own opinion.