C++: the term/idiom for programming using templates

I keep reading the terms:
template programming
generic programming
metaprogramming
and maybe other idioms/terms.
For C++ code that uses templates, which one is the correct or most accurate term?

AFAIK:
Template programming just refers to the classic "programming with templates", i.e. "I have a function/class that I want to make usable with any type, so I'll make it a template".
It can also be seen as the "catch-all" category that includes any programming technique that employs templates.
Generic programming can be synthetically described as the programming paradigm used by the STL.
Wikipedia defines it as
a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters
IMHO, it's better to say that all the containers are designed to be used with any type (without sacrificing type safety) and the algorithms are designed to be generic enough to work on any container type (as long as it's sensible to use them, obviously; e.g. it makes no sense to sort an unordered container).
Notice that generic programming (with this definition) does not strictly require the use of templates; in fact, it can be achieved with inheritance and dynamic polymorphism (thanks to Ben Voigt).
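As a rough illustration, here is a minimal sketch (the names min_of and Comparable are made up for the example) showing the same generic idea expressed once with a template and once with inheritance and dynamic polymorphism:

#include <iostream>

// Generic programming with a template: works for any T supporting operator<.
template <typename T>
const T& min_of(const T& a, const T& b) { return (b < a) ? b : a; }

// The same idea with inheritance and dynamic polymorphism: any type implementing
// Comparable can be used, at the cost of a virtual call per comparison.
struct Comparable {
    virtual ~Comparable() = default;
    virtual bool less_than(const Comparable& other) const = 0;
};

const Comparable& min_of(const Comparable& a, const Comparable& b) {
    return b.less_than(a) ? b : a;
}

int main() {
    std::cout << min_of(3, 7) << '\n';   // template version, T deduced as int
}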
In general, I'd say that template programming and generic programming partially overlap, and many people use the terms generic programming and template programming interchangeably.
Template metaprogramming is a programming style in which templates are used to perform compile-time computations/decisions/checks normally not doable without templates (static assertions, compile-time constant computations, ...).
Such code is often quite contrived, since C++ wasn't designed for this style of programming (which was actually "discovered" later), and it may look unfamiliar to C++ programmers, also because it often gets close to functional programming (without nice syntax facilities for it) instead of following the imperative paradigm normally used in C++.
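For instance, here is a minimal sketch of the classic compile-time factorial, computed entirely through template instantiation (Factorial is a made-up name for the example):

#include <cstddef>

// Compile-time factorial: the recursion happens during template instantiation.
template <std::size_t N>
struct Factorial {
    static const std::size_t value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {                     // base case terminates the recursion
    static const std::size_t value = 1;
};

// Checked entirely at compile time (static_assert itself is C++11; older code
// emulated it with tricks such as arrays of negative size).
static_assert(Factorial<5>::value == 120, "computed at compile time");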

It's usually referred to as generic programming.
Template metaprogramming is something other than the normal use of templates: in TMP, types are manipulated at compile time (see Boost.MPL).

Related

How are C++ concepts different from Haskell typeclasses?

Concepts for C++ from the Concepts TS have been recently merged into GCC trunk. Concepts allow one to constrain generic code by requiring types to satisfy the conditions of the concept ('Comparable' for instance).
Haskell has type classes. I'm not so familiar with Haskell. How are concepts and type classes related?
Concepts (as defined by the Concepts TS) and type classes are related only in the sense that they restrict the sets of types that can be used with a generic function. Beyond that, I can only think of ways in which the two features differ.
I should note that I am not a Haskell expert. Far from it. However, I am an expert on the Concepts TS (I wrote it, and I implemented it for GCC).
Concepts (and constraints) are predicates that determine whether a type is a member of a set. You do not need to explicitly declare whether a type is a model of a concept (an instance of a type class). That's determined by a set of requirements and checked by the compiler. In fact, concepts do not allow you to write "T is a model of C" at all, although this is readily supported using various metaprogramming techniques.
Concepts can be used to constrain non-type arguments, and because of constexpr functions and template metaprogramming, express pretty much any constraint you could ever hope to write (e.g., a hash array whose extent must be a prime number). I don't believe this is true for type classes.
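A minimal sketch of both points, written in C++20 concepts syntax (which differs in detail from the Concepts TS syntax the answer refers to); is_prime, max_of, and HashArray are made-up names:

#include <concepts>
#include <cstddef>

// A constexpr predicate usable in a constraint on a non-type argument.
constexpr bool is_prime(std::size_t n) {
    if (n < 2) return false;
    for (std::size_t d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Constraining a type parameter: no "model of" declaration is needed;
// any T that satisfies std::totally_ordered is accepted automatically.
template <std::totally_ordered T>
const T& max_of(const T& a, const T& b) { return (a < b) ? b : a; }

// Constraining a non-type parameter: the extent must be a prime number.
template <typename T, std::size_t N>
    requires (is_prime(N))
struct HashArray {
    T buckets[N];
};

HashArray<int, 7> ok;        // fine: 7 is prime
// HashArray<int, 8> bad;    // rejected at compile time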
Concepts are not part of the type system. They constrain the use of declarations and, in some cases, template argument deduction. Type classes are part of the type system and participate in type checking.
Concepts do not support modular type checking or compilation. Template definitions are not checked against concepts, so you can still get late caught type errors during instantiation, but this does add a certain degree of flexibility for library writers (e.g., adding debugging code to an algorithm won't change the interface). Because type classes are part of the type system, generic algorithms can be checked and compiled modularly.
The Concepts TS supports the specialization of generic algorithms and data structures based on the ordering of constraints. I am not at all an expert in Haskell, so I don't know if there is an equivalent here or not. I can't find one.
The use of concepts will never add runtime costs. The last time I looked, type classes could impose the same runtime overhead as a virtual function call, although I understand that Haskell is very good at optimizing those away.
I think that those are the major differences when comparing feature (Concepts TS) to feature (Haskell type classes).
But there's an underlying philosophical difference in two languages -- and it isn't functional vs. whatever flavor of C++ you're writing. Haskell wants to be modular: being so has many nice properties. C++ templates refuse to be modular: instantiation-time lookup allows for type-based optimization without runtime overhead. This is why C++ generic libraries offer both broad reuse and unparalleled performance.
You might be interested in the following research paper:
"A comparison of C++ concepts and Haskell type classes", Bernardy et al., WGP 2008. Pdf More details.
Update: as a short summary of the paper: the paper defines a precise mapping between terminology for C++ concepts and terminology for Haskell type classes and uses this mapping to provide a detailed feature comparison between the two.
Their conclusion says:
Out of our 27 criteria, summarised in table 2, 16 are equally supported in both languages, and only one or two are not portable. So, we can safely conclude as we started — C++ concepts and Haskell type classes are very similar.
As noted by T.C. below, it is worth pointing out that the paper is comparing C++0x concepts, not Concepts TS. I am not aware of a good reference describing the differences.

Variants and existential polymorphism in C++

I am conducting research on type systems. For this work I am investigating the usage of variants, structural subtyping, universal polymorphism and existential polymorphism in popular languages. Functional languages like Haskell and OCaml provide such functionality, but I want to know whether a popular language like C++ provides it as well. That is, how does C++ implement
variants
structural subtyping
universal polymorphism
existential polymorphism.
Unions can be viewed as a rudimentary form of variant, but in reality they are more a primitive (and unsafe) mechanism for overlaying memory.
There is no structural typing, let alone subtyping, in C++. All types are nominal.
Templates have some superficial similarity to universal polymorphism, but are actually quite different. In essence, they are glorified macros with little to no type checking (like with macros, both checking and code generation happens after expansion).
There is no form of existential types in C++ (there is a limited form in Java, namely wildcards).
Some of these features can be simulated to some extent using subtyping, but that remains far less expressive (or convenient).
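As a footnote to the point about unions: here is a minimal sketch (Number is a made-up type) of that "rudimentary variant" pattern, which shows why it is primitive and unsafe; only programmer discipline connects the tag to the active member. C++17 later added std::variant, which packages this tag-plus-storage idea safely.

struct Number {
    enum Kind { Int, Real } kind;   // the tag must be maintained by hand
    union {
        int    i;
        double d;
    };
};

double as_double(const Number& n) {
    // Reading the member that matches the tag is entirely our responsibility;
    // reading the wrong one is undefined behaviour, and the compiler won't object.
    return n.kind == Number::Int ? n.i : n.d;
}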

Why were concepts (generic programming) conceived when we already had classes and interfaces?

Also on programmers.stackexchange.com:
I understand that STL concepts had to exist, and that it would be silly to call them "classes" or "interfaces" when in fact they're only documented (human) concepts and couldn't be translated into C++ code at the time, but when given the opportunity to extend the language to accommodate concepts, why didn't they simply modify the capabilities of classes and/or introduce interfaces?
Isn't a concept very similar to an interface (a 100% abstract class with no data)? By looking at it, it seems to me interfaces only lack support for axioms, but maybe axioms could be introduced into C++'s interfaces (considering a hypothetical adoption of interfaces in C++ to take over concepts), couldn't they? I think even auto concepts could easily be added to such a C++ interface (auto interface LessThanComparable, anyone?).
Isn't a concept_map very similar to the Adapter pattern? If all the methods are inline, the adapter essentially doesn't exist beyond compile time; the compiler simply replaces calls to the interface with the inlined versions, calling the target object directly during runtime.
I've heard of something called Static Object-Oriented Programming, which essentially means effectively reusing the concepts of object-orientation in generic programming, thus permitting usage of most of OOP's power without incurring execution overhead. Why wasn't this idea further considered?
I hope this is clear enough. I can rewrite this if you think I was not; just let me know.
There is a big difference between OOP and Generic Programming: predestination.
In OOP, when you design a class, you bake in the interfaces you think will be useful. And that's it.
In Generic Programming, on the other hand, as long as a class conforms to a given set of requirements (mainly methods, but also inner constants or types), then it fits the bill and may be used. The Concepts proposal is about formalizing this, so that detection can occur directly when checking the method signature, rather than when instantiating the method body. It also makes checking template methods easier, since some methods can be rejected without any instantiation if the concepts do not match.
The advantage of Concepts is that you do not suffer from Predestination, you can pick a class from Library1, pick a method from Library2, and if it fits, you're gold (if it does not, you may be able to use a concept map). In OO, you are required to write a full-fledged Adapter, every time.
You are right that both seem similar. The difference is mainly about the time of binding (and the fact that Concepts still use static dispatch instead of dynamic dispatch like interfaces). Concepts are more open, and thus easier to use.
Classes are a form of named conformance. You indicate that class Foo conforms with interface I by inheriting from I.
Concepts are a form of structural and/or runtime conformance. A class Foo does not need to state up front which concepts it conforms to.
The result is that named conformance reduces the ability to reuse classes in places that were not expected up front, even though they would be usable.
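A minimal sketch of the difference (Printable, show, and LegacyWidget are made-up names): the named-conformance version only accepts types that were written to inherit from the interface, while the structural version accepts anything with the right members.

#include <iostream>

// Named conformance: a type must explicitly inherit from Printable to be accepted.
struct Printable {
    virtual ~Printable() = default;
    virtual void print() const = 0;
};

void show(const Printable& p) { p.print(); }

// Structural conformance: any type with a suitable print() member fits,
// whether or not its author ever heard of this function.
template <typename T>
void show_generic(const T& x) { x.print(); }

// A third-party class that never declared any interface:
struct LegacyWidget {
    void print() const { std::cout << "widget\n"; }
};

int main() {
    LegacyWidget w;
    show_generic(w);   // works: structural match
    // show(w);        // error: LegacyWidget was not declared to conform
}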
The concepts are in fact not part of C++; they are just concepts! In C++ there is no way to "define a concept". All you have is templates and classes (the STL being all template classes, as the name says: Standard Template Library).
If you mean C++0x and not C++ (in which case I suggest you change the tag), please read here:
http://en.wikipedia.org/wiki/Concepts_(C++)
Some parts I am going to copy-paste for you:
In the pending C++0x revision of the C++ programming language, concepts and the related notion of axioms were a proposed extension to C++'s template system, designed to improve compiler diagnostics and to allow programmers to codify in the program some formal properties of templates that they write. Incorporating these limited formal specifications into the program (in addition to improving code clarity) can guide some compiler optimizations, and can potentially help improve program reliability through the use of formal verification tools to check that the implementation and specification actually match.
In July 2009, the C++0x committee decided to remove concepts from the draft standard, as they are considered "not ready" for C++0x.
The primary motivation of the introduction of concepts is to improve the quality of compiler error messages.
So as you can see, concepts are not there to replace interfaces etc, they are just there to help the compiler optimize better and produce better errors.
While I agree with all the posted answers, they seem to have missed one point which is performance. Unlike interfaces, concepts are checked in compile-time and therefore don't require virtual function calls.

Why is the STL so heavily based on templates instead of inheritance?

I mean, aside from its name the Standard Template Library (which evolved into the C++ standard library).
C++ initially introduced OOP concepts into C. That is: you could tell what a specific entity could and couldn't do (regardless of how it does it) based on its class and class hierarchy. Some compositions of abilities are more difficult to describe in this manner due to the complexities of multiple inheritance, and the fact that C++ supports interface-only inheritance in a somewhat clumsy way (compared to Java, etc.), but it's there (and could be improved).
And then templates came into play, along with the STL. The STL seems to take the classical OOP concepts and flush them down the drain, using templates instead.
There should be a distinction between cases when templates are used to generalize types where the types themselves are irrelevant for the operation of the template (containers, for example). Having a vector<int> makes perfect sense.
However, in many other cases (iterators and algorithms), templated types are supposed to follow a "concept" (Input Iterator, Forward Iterator, etc...) where the actual details of the concept are defined entirely by the implementation of the template function/class, and not by the class of the type used with the template, which is a somewhat anti-usage of OOP.
For example, you can tell the function:
void MyFunc(ForwardIterator<...> *I);
Update: As it was unclear in the original question, ForwardIterator is allowed to be templated itself, to allow any ForwardIterator type. The alternative is having ForwardIterator as a concept.
expects a Forward Iterator only by looking at its definition, whereas you'd need to look at either the implementation or the documentation for:
template <typename Type> void MyFunc(Type *I);
Two claims I can make in favor of using templates: 1. Compiled code can be made more efficient, by recompiling the template for each used type, instead of using dynamic dispatch (mostly via vtables). 2. Templates can be used with native types.
However, I am looking for a more profound reason for abandoning classic OOP in favor of templates for the STL.
The short answer is "because C++ has moved on". Yes, back in the late 70s, Stroustrup intended to create an upgraded C with OOP capabilities, but that was a long time ago. By the time the language was standardized in 1998, it was no longer an OOP language. It was a multi-paradigm language. It certainly had some support for OOP code, but it also had a Turing-complete template language overlaid, it allowed compile-time metaprogramming, and people had discovered generic programming. Suddenly, OOP just didn't seem all that important. Not when we can write simpler, more concise and more efficient code by using techniques available through templates and generic programming.
OOP is not the holy grail. It's a cute idea, and it was quite an improvement over procedural languages back in the 70's when it was invented. But it's honestly not all it's cracked up to be. In many cases it is clumsy and verbose and it doesn't really promote reusable code or modularity.
That is why the C++ community is today far more interested in generic programming, and why everyone is finally starting to realize that functional programming is quite clever as well. OOP on its own just isn't a pretty sight.
Try drawing a dependency graph of a hypothetical "OOP-ified" STL. How many classes would have to know about each other? There would be a lot of dependencies. Would you be able to include just the vector header, without also getting iterator or even iostream pulled in? The STL makes this easy. A vector knows about the iterator type it defines, and that's all. The STL algorithms know nothing. They don't even need to include an iterator header, even though they all accept iterators as parameters. Which is more modular, then?
The STL may not follow the rules of OOP as Java defines it, but doesn't it achieve the goals of OOP? Doesn't it achieve reusability, low coupling, modularity and encapsulation?
And doesn't it achieve these goals better than an OOP-ified version would?
As for why the STL was adopted into the language, several things happened that led to the STL.
First, templates were added to C++. They were added for much the same reason that generics were added to .NET. It seemed a good idea to be able to write stuff like "containers of a type T" without throwing away type safety. Of course, the implementation they settled on was quite a lot more complex and powerful.
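As a minimal sketch of that motivation (FixedStack is a made-up name), a templated container keeps the element type visible to the compiler, unlike a C-style container built on void*:

#include <cstddef>

// "A container of T" via templates: the element type is checked at compile time.
// (Bounds checking omitted for brevity.)
template <typename T, std::size_t Capacity>
class FixedStack {
    T data_[Capacity];
    std::size_t size_ = 0;
public:
    void push(const T& value) { data_[size_++] = value; }
    T pop() { return data_[--size_]; }
    bool empty() const { return size_ == 0; }
};

// FixedStack<int, 16> s; s.push(42);   // s.push("oops") would not compile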
Then people discovered that the template mechanism they had added was even more powerful than expected. And someone started experimenting with using templates to write a more generic library. One inspired by functional programming, and one which used all the new capabilities of C++.
He presented it to the C++ language committee, who took quite a while to grow used to it because it looked so strange and different, but ultimately realized that it worked better than the traditional OOP equivalents they'd have to include otherwise. So they made a few adjustments to it, and adopted it into the standard library.
It wasn't an ideological choice, it wasn't a political choice of "do we want to be OOP or not", but a very pragmatic one. They evaluated the library, and saw that it worked very well.
In any case, both of the reasons you mention for favoring the STL are absolutely essential.
The C++ standard library has to be efficient. If it is less efficient than, say, the equivalent hand-rolled C code, then people would not use it. That would lower productivity, increase the likelihood of bugs, and overall just be a bad idea.
And the STL has to work with primitive types, because primitive types are all you have in C, and they're a major part of both languages. If the STL did not work with native arrays, it would be useless.
Your question has a strong assumption that OOP is "best". I'm curious to hear why. You ask why they "abandoned classical OOP". I'm wondering why they should have stuck with it. Which advantages would it have had?
The most direct answer to what I think you're asking/complaining about is this: The assumption that C++ is an OOP language is a false assumption.
C++ is a multi-paradigm language. It can be programmed using OOP principles, it can be programmed procedurally, it can be programmed generically (templates), and with C++11 (formerly known as C++0x) some things can even be programmed functionally.
The designers of C++ see this as an advantage, so they would argue that constraining C++ to act like a purely OOP language when generic programming solves the problem better and, well, more generically, would be a step backwards.
My understanding is that Stroustrup originally preferred an "OOP-styled" container design, and in fact didn't see any other way to do it. Alexander Stepanov is the one responsible for the STL, and his goals did not include "make it object oriented":
That is the fundamental point: algorithms are defined on algebraic structures. It took me another couple of years to realize that you have to extend the notion of structure by adding complexity requirements to regular axioms. ... I believe that iterator theories are as central to Computer Science as theories of rings or Banach spaces are central to Mathematics. Every time I would look at an algorithm I would try to find a structure on which it is defined. So what I wanted to do was to describe algorithms generically. That's what I like to do. I can spend a month working on a well known algorithm trying to find its generic representation. ...
STL, at least for me, represents the only way programming is possible. It is, indeed, quite different from C++ programming as it was presented and still is presented in most textbooks. But, you see, I was not trying to program in C++, I was trying to find the right way to deal with software. ...
I had many false starts. For example, I spent years trying to find some use for inheritance and virtuals, before I understood why that mechanism was fundamentally flawed and should not be used. I am very happy that nobody could see all the intermediate steps - most of them were very silly.
(He does explain why inheritance and virtuals -- a.k.a. object oriented design "was fundamentally flawed and should not be used" in the rest of the interview).
Once Stepanov presented his library to Stroustrup, Stroustrup and others went through herculean efforts to get it into the ISO C++ standard (same interview):
The support of Bjarne Stroustrup was crucial. Bjarne really wanted STL in the standard and if Bjarne wants something, he gets it. ... He even forced me to make changes in STL that I would never make for anybody else ... he is the most single minded person I know. He gets things done. It took him a while to understand what STL was all about, but when he did, he was prepared to push it through. He also contributed to STL by standing up for the view that more than one way of programming was valid - against no end of flak and hype for more than a decade, and pursuing a combination of flexibility, efficiency, overloading, and type-safety in templates that made STL possible. I would like to state quite clearly that Bjarne is the preeminent language designer of my generation.
The answer is found in this interview with Stepanov, the author of the STL:
Yes. STL is not object oriented. I think that object orientedness is almost as much of a hoax as Artificial Intelligence. I have yet to see an interesting piece of code that comes from these OO people.
Why would a pure OOP design be better for a data structures & algorithms library?!
OOP is not the solution for everything.
IMHO, STL is the most elegant library I have ever seen :)
As for your question:
you don't need runtime polymorphism here; it is actually an advantage for the STL to implement the library using static polymorphism, which means efficiency.
Try to write a generic Sort or Distance or whatever algorithm that applies to ALL containers!
A Sort in Java would call functions that are dispatched dynamically through n levels in order to execute!
You need silly things like boxing and unboxing to hide the nasty assumptions of so-called pure OOP languages.
The only problem I see with the STL, and templates in general, is the awful error messages.
Which will be solved using Concepts in C++0x.
Comparing the STL to Collections in Java is like comparing the Taj Mahal to my house :)
templated types are supposed to follow a "concept" (Input Iterator, Forward Iterator, etc...) where the actual details of the concept are defined entirely by the implementation of the template function/class, and not by the class of the type used with the template, which is a somewhat anti-usage of OOP.
I think you misunderstand the intended use of concepts by templates. Forward Iterator, for example, is a very well-defined concept. To find the expressions which must be valid in order for a class to be a Forward Iterator, and their semantics including computational complexity, you look at the standard or at http://www.sgi.com/tech/stl/ForwardIterator.html (you have to follow the links to Input, Output, and Trivial Iterator to see it all).
That document is a perfectly good interface, and "the actual details of the concept" are defined right there. They are not defined by the implementations of Forward Iterators, and neither are they defined by the algorithms which use Forward Iterators.
The differences in how interfaces are handled between STL and Java are three-fold:
1) STL defines valid expressions using the object, whereas Java defines methods which must be callable on the object. Of course a valid expression might be a method (member function) call, but it doesn't have to be.
2) Java interfaces are runtime objects, whereas STL concepts are not visible at runtime even with RTTI.
3) If you fail to make valid the required valid expressions for an STL concept, you get an unspecified compilation error when you instantiate some template with the type. If you fail to implement a required method of a Java interface, you get a specific compilation error saying so.
This third part is, if you like, a kind of (compile-time) "duck typing": interfaces can be implicit. In Java, interfaces are somewhat explicit: a class "is" Iterable if and only if it says it implements Iterable. The compiler can check that the signatures of its methods are all present and correct, but the semantics are still implicit (i.e. they're either documented or not, but only more code (unit tests) can tell you whether the implementation is correct).
In C++, like in Python, both semantics and syntax are implicit, although in C++ (and in Python if you get the strong-typing preprocessor) you do get some help from the compiler. If a programmer requires Java-like explicit declaration of interfaces by the implementing class, then the standard approach is to use type traits (and multiple inheritance can prevent this being too verbose). What's lacking, compared with Java, is a single template which I can instantiate with my type, and which will compile if and only if all the required expressions are valid for my type. This would tell me whether I've implemented all the required bits, "before I use it". That's a convenience, but it's not the core of OOP (and it still doesn't test semantics, and code to test semantics would naturally also test the validity of the expressions in question).
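That "single template" can be approximated with later language facilities; here is a minimal sketch using a C++20 concept (a feature postdating this answer; MinimalForwardIterator and CheckForwardIterator are made-up names):

#include <concepts>

// A concept listing the valid expressions we require of a type.
template <typename It>
concept MinimalForwardIterator = requires(It i) {
    { *i };                                     // dereferenceable
    { ++i } -> std::same_as<It&>;               // incrementable
    { i == i } -> std::convertible_to<bool>;    // equality-comparable
};

// "Instantiate this with your type, and it compiles iff the expressions are valid."
template <MinimalForwardIterator It>
struct CheckForwardIterator {};

CheckForwardIterator<int*> check_pointer_ok;    // fails to compile for a non-conforming type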
STL may or may not be sufficiently OO for your taste, but it certainly separates interface cleanly from implementation. It does lack Java's ability to do reflection over interfaces, and it reports breaches of interface requirements differently.
you can tell the function ... expects a Forward Iterator only by looking at its definition, where you'd need either to look at the implementation or the documentation for ...
Personally I think that implicit types are a strength, when used appropriately. The algorithm says what it does with its template parameters, and the implementer makes sure those things work: it's exactly the common denominator of what "interfaces" should do. Furthermore with STL, you're unlikely to be using, say, std::copy based on finding its forward declaration in a header file. Programmers should be working out what a function takes based on its documentation, not just on the function signature. This is true in C++, Python, or Java. There are limitations on what can be achieved with typing in any language, and trying to use typing to do something it doesn't do (check semantics) would be an error.
That said, STL algorithms usually name their template parameters in a way which makes it clear what concept is required. However this is to provide useful extra information in the first line of the documentation, not to make forward declarations more informative. There are more things you need to know than can be encapsulated in the types of the parameters, so you have to read the docs. (For example in algorithms which take an input range and an output iterator, chances are the output iterator needs enough "space" for a certain number of outputs based on the size of the input range and maybe the values therein. Try strongly typing that.)
Here's Bjarne on explicitly-declared interfaces: http://www.artima.com/cppsource/cpp0xP.html
In generics, an argument must be of a class derived from an interface (the C++ equivalent to interface is abstract class) specified in the definition of the generic. That means that all generic argument types must fit into a hierarchy. That imposes unnecessary constraints on designs [and] requires unreasonable foresight on the part of developers. For example, if you write a generic and I define a class, people can't use my class as an argument to your generic unless I knew about the interface you specified and had derived my class from it. That's rigid.
Looking at it the other way around, with duck typing you can implement an interface without knowing that the interface exists. Or someone can write an interface deliberately such that your class implements it, having consulted your docs to see that they don't ask for anything you don't already do. That's flexible.
"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them." - Alan Kay, creator of Smalltalk.
C++, Java, and most other languages are all pretty far from classical OOP. That said, arguing for ideologies is not terribly productive. C++ is not pure in any sense, so it implements functionality that seems to make pragmatic sense at the time.
The STL started off with the intention of providing a large library covering the most commonly used algorithms, with the goal of consistent behavior and performance. Templates came as a key factor to make that implementation and goal feasible.
Just to provide another reference:
Al Stevens Interviews Alex Stepanov, in March 1995 of DDJ:
http://www.sgi.com/tech/stl/drdobbs-interview.html
Stepanov explains his work experience and the choices made toward a large library of algorithms, which eventually evolved into the STL.
Tell us something about your long-term interest in generic programming
.....Then I was offered a job at Bell Laboratories working in the C++ group on C++ libraries. They asked me whether I could do it in C++. Of course, I didn't know C++ and, of course, I said I could. But I couldn't do it in C++, because in 1987 C++ didn't have templates, which are essential for enabling this style of programming. Inheritance was the only mechanism to obtain genericity and it was not sufficient.
Even now C++ inheritance is not of much use for generic programming. Let's discuss why. Many people have attempted to use inheritance to implement data structures and container classes. As we know now, there were few if any successful attempts. C++ inheritance, and the programming style associated with it are dramatically limited. It is impossible to implement a design which includes as trivial a thing as equality using it. If you start with a base class X at the root of your hierarchy and define a virtual equality operator on this class which takes an argument of the type X, then derive class Y from class X. What is the interface of the equality? It has equality which compares Y with X. Using animals as an example (OO people love animals), define mammal and derive giraffe from mammal. Then define a member function mate, where animal mates with animal and returns an animal. Then you derive giraffe from animal and, of course, it has a function mate where giraffe mates with animal and returns an animal. It's definitely not what you want. While mating may not be very important for C++ programmers, equality is. I do not know a single algorithm where equality of some kind is not used.
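A minimal sketch of the equality problem he describes (the names follow his animal example and are illustrative only):

// A virtual equality in a base class can only be expressed in terms of the base,
// so derived types are forced to compare "sideways" against the base type.
struct Animal {
    virtual ~Animal() = default;
    virtual bool equal(const Animal& other) const = 0;   // compares Animal with Animal
};

struct Giraffe : Animal {
    int neck_length = 0;
    bool equal(const Animal& other) const override {
        // Statically we only know 'other' is an Animal; recovering "Giraffe == Giraffe"
        // needs a downcast, which the type system no longer guarantees is meaningful.
        const Giraffe* g = dynamic_cast<const Giraffe*>(&other);
        return g && g->neck_length == neck_length;
    }
};

// With templates, equality keeps its natural signature: bool operator==(const T&, const T&).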
The basic problem with
void MyFunc(ForwardIterator *I);
is how do you safely get the type of the thing the iterator returns? With templates, this is done for you at compile time.
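A minimal sketch of how templates answer that question at compile time (first_element is a made-up name):

#include <iterator>

// The type an iterator yields is recovered statically, e.g. via std::iterator_traits;
// no runtime query or cast is needed.
template <typename ForwardIt>
typename std::iterator_traits<ForwardIt>::value_type
first_element(ForwardIt first) {
    return *first;    // the return type is known from the iterator type alone
}

// first_element(v.begin()) deduces value_type as int when v is a std::vector<int>.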
For a moment, let's think of the standard library as basically a database of collections and algorithms.
If you've studied the history of databases, you undoubtedly know that back in the beginning, databases were mostly "hierarchical". Hierarchical databases corresponded very closely to classical OOP--specifically, the single-inheritance variety, such as used by Smalltalk.
Over time, it became apparent that hierarchical databases could be used to model almost anything, but in some cases the single-inheritance model was fairly limiting. If you had a wooden door, it was handy to be able to look at it either as a door, or as a piece of some raw material (steel, wood, etc.)
So, they invented network model databases. Network model databases correspond very closely to multiple inheritance. C++ supports multiple inheritance completely, while Java supports a limited form (you can inherit from only one class, but can also implement as many interfaces as you like).
Both hierarchical model and network model databases have mostly faded from general purpose use (though a few remain in fairly specific niches). For most purposes, they've been replaced by relational databases.
Much of the reason relational databases took over was versatility. The relational model is functionally a superset of the network model (which is, in turn, a superset of the hierarchical model).
C++ has largely followed the same path. The correspondences between single inheritance and the hierarchical model, and between multiple inheritance and the network model, are fairly obvious. The correspondence between C++ templates and the relational model may be less obvious, but it's a pretty close fit anyway.
I haven't seen a formal proof of it, but I believe the capabilities of templates are a superset of those provided by multiple inheritance (which is clearly a superset of single inheritance). The one tricky part is that templates are mostly statically bound--that is, all the binding happens at compile time, not run time. As such, a formal proof that templates provide a superset of the capabilities of inheritance may well be somewhat difficult and complex (or may even be impossible).
In any case, I think that's most of the real reason C++ doesn't use inheritance for its containers--there's no real reason to do so, because inheritance provides only a subset of the capabilities provided by templates. Since templates are basically a necessity in some cases, they might as well be used nearly everywhere.
This question has many great answers. It should also be mentioned that templates support an open design. With the current state of object-oriented programming languages, one has to use the visitor pattern when dealing with such problems, while true OOP should support multiple dynamic binding. See "Open Multi-Methods for C++", P. Pirkelbauer et al., for very interesting reading.
Another interesting point about templates is that they can be used together with runtime polymorphism as well. For example:
template<class Value,class T>
Value euler_fwd(size_t N,double t_0,double t_end,Value y_0,const T& func)
{
auto dt=(t_end-t_0)/N;
for(size_t k=0;k<N;++k)
{y_0+=func(t_0 + k*dt,y_0)*dt;}
return y_0;
}
Notice that this function will also work if Value is a vector of some kind (not std::vector, which arguably should have been called std::dynamic_array to avoid confusion).
If func is small, this function will gain a lot from inlining. Example usage:
auto result = euler_fwd(10000, 0.0, 1.0, 1.0,
                        [](double t, double y){ return y; });
In this case, you know the exact answer (2.718...), but it is easy to construct a simple ODE without an elementary solution (hint: use a polynomial in y).
Now, suppose you have a large expression in func and you use the ODE solver in many places, so your executable gets polluted with template instantiations everywhere. What to do? The first thing to notice is that a regular function pointer works. Then, if you want to add currying, you write an interface and an explicit instantiation:
class OdeFunction
{
public:
    virtual double operator()(double t, double y) const = 0;
};

template double euler_fwd(size_t N, double t_0, double t_end, double y_0, const OdeFunction& func);
But the above instantiation only works for double, so why not write the interface as a template:
template<class Value=double>
class OdeFunction
{
public:
virtual Value operator()(double t,const Value& y) const=0;
};
and explicitly instantiate it for some common value types:
template double euler_fwd(size_t N,double t_0,double t_end,double y_0,const OdeFunction<double>& func);
template vec4_t<double> euler_fwd(size_t N,double t_0,double t_end,vec4_t<double> y_0,const OdeFunction< vec4_t<double> >& func); // (Native AVX vector with four components)
template vec8_t<float> euler_fwd(size_t N,double t_0,double t_end,vec8_t<float> y_0,const OdeFunction< vec8_t<float> >& func); // (Native AVX vector with 8 components)
template Vector<double> euler_fwd(size_t N,double t_0,double t_end,Vector<double> y_0,const OdeFunction< Vector<double> >& func); // (A N-dimensional real vector, *not* `std::vector`, see above)
If the function had been designed around an interface first, then you would have been forced to inherit from that ABC. Now you have this option, as well as a function pointer, a lambda, or any other function object. The key here is that we must have operator()(), and we must be able to use some arithmetic operators on its return type. Thus, the template machinery would break in this case if C++ did not have operator overloading.
How do you do comparisons with ForwardIterator*'s? That is, how do you check if the item you have is what you're looking for, or you've passed it by?
Most of the time, I would use something like this:
void MyFunc(ForwardIterator<MyType>& i)
which means I know that i points to MyType objects, and I know how to compare those. Though it looks like a template, it isn't really (no "template" keyword).
The concept of separating interface from implementation and being able to swap out the implementations is not intrinsic to Object-Oriented Programming. I believe it's an idea that was hatched in Component-Based Development, like Microsoft COM. (See my answer on What is Component-Driven Development?) Growing up and learning C++, people were hyped about inheritance and polymorphism. It wasn't until the 90s that people started to say "Program to an 'interface', not an 'implementation'" and "Favor 'object composition' over 'class inheritance'" (both quoted from GoF, by the way).
Then Java came along with a built-in garbage collector and the interface keyword, and all of a sudden it became practical to actually separate interface and implementation. Before you knew it, the idea became part of OO. C++, templates, and the STL predate all of this.

What are the good and bad points of C++ templates? [closed]

I've been talking with friends and some completely agree that templates in C++ should be used, others disagree entirely.
Some of the good things are:
They are safer to use (type safety).
They are a good way of doing generalizations for APIs.
What other good things can you tell me about C++ templates?
What bad things can you tell me about C++ templates?
Edit: One of the reasons I'm asking this is that I am studying for an exam and at the moment I am covering the topic of C++ templates. So I am trying to understand a bit more on them.
Templates are a very powerful mechanism which can simplify many things. However, using them properly requires much time and experience, in order to decide when their usage is appropriate.
For me the most important advantages are:
reducing the repetition of code (generic containers, algorithms)
reducing the repetition of code in more advanced ways (MPL and Fusion)
static polymorphism (=performance) and other compile time calculations
policy based design (flexibility, reusability, easier changes, etc)
increasing safety at no cost (i.e. dimension analysis via Boost Units, static assertions, concept checks)
functional programming (Phoenix), lazy evaluation, expression templates (we can create Domain-specific embedded languages in C++, we have great Proto library, we have Blitz++)
other less spectacular tools and tricks used in everyday life:
STL and the algorithms (what's the difference between for and for_each)
bind, lambda (or Phoenix) ( write clearer code, simplify things)
Boost Function (makes writing callbacks easier)
tuples (how to generically hash a tuple? Use Fusion, for example...)
TBB (parallel_for and other STL like algorithms and containers)
Can you imagine C++ without templates? Yes I can, in the early times you couldn't use them because of compiler limitations.
Would you write in C++ without templates? No, as I would lose many of the advantages mentioned above.
Downsides:
Compilation time (for example, throw in Spirit, Phoenix, MPL and some Fusion and you can go for a coffee)
People who can use and understand templates are not that common (and these people are useful)
People who think that they can use and understand templates are quite common (and these people are dangerous, as they can make a hell out of your code. However most of them after some education/mentoring will join the group mentioned in the previous point)
template export support (lack of)
error messages could be less cryptic (after some learning you can find what you need, but still...)
I highly recommend the following books:
C++ Templates: The Complete Guide by David Vandevoorde and Nicolai Josuttis (thorough introduction to the subject of templates)
Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu (a less known way of using templates to simplify your code, make development easier, and make the resulting code robust to changes)
C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy (again, a different way of using templates)
More C++ Idioms from Wikibooks presents some nice ideas.
On the positive side, C++ templates:
Allow for generalization of type
Decrease the amount of redundant code you need to type
Help to build type-safe code
Are evaluated at compile-time
Can increase performance (as an alternative to polymorphism)
Help to build very powerful libraries
On the negative side:
Can get complicated quickly if one isn't careful
Most compilers give cryptic error messages
It can be difficult to use/debug highly templated code
Have at least one syntactic quirk (the >> token could interfere with templates before C++11; see the snippet below)
Help make C++ very difficult to parse
All in all, careful consideration should be used as to when to use templates.
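For reference, the >> quirk mentioned above looked like this until C++11 adjusted the grammar:

#include <vector>

std::vector<std::vector<int> > with_space;     // always valid
std::vector<std::vector<int>>  without_space;  // ill-formed before C++11 (">>" was lexed
                                               // as the right-shift operator), fine since C++11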
My 2c are rather negative.
C++ types were never designed to perform compile time calculations. The notion of using types to achieve computational goals is very clearly a hack – and moreover, one that was never sought but rather stumbled upon.
...
The reward for using MP in your code is the moment of satisfaction of having solved a hard riddle. You did stuff in 100 lines that would have otherwise taken 200. You grinded your way through incomprehensible error messages to get to a point where, if you needed to extend the code to a new case, you would know the exact 3-line template function to overload. Your maintainers, of course, would have to invest infinitely more to achieve the same.
Good points: powerful; allows you to:
prescribe compile-time attributes and computation
describe generic algorithms and datastructures
do many other things that would otherwise be repetitive, boring, and mistake-prone
does them in-language, without macros (which can be far more hazardous and obscure!)
Bad points: powerful; allows you to:
provoke compile-time errors that are verbose, misleading, and obscure (though not as obscure and misleading as macros...)
create obscure and hazardous misdesigns (though not as readily as macros...)
cause code bloat if you're not careful (just like macros!)
Templates vastly increase the viable design space, which is not necessarily a bad thing, but it does make them that much harder to use well. Template code needs maintainters who understand not just the language features, but the design consequences of the language features; practically speaking, this means many developer groups avoid all but the simplest and most institutionalized applications of C++ templates.
In general, templates make the language much more complicated (and difficult to implement correctly!). Templates were not intentionally designed to be Turing-complete, but they are anyway -- thus, even though they can do just about anything, using them may turn out to be more trouble than it's worth.
Templates should be used sparingly.
"Awful to debug" and "hard to read" aren't great arguments against good template uses with good abstractions.
Better negative arguments would point to the fact that the STL has a lot of "gotchas", and that using templates for purposes the STL already covers is reinventing the wheel. Templates also increase link time, which can be a concern for some projects, and have a lot of idiosyncrasies in their syntax that can seem arcane to people.
But the positives with generic code reuse, type traits, reflection, smart pointers, and even metaprograms often outweigh the negatives. The thing you have to be sure of is that templates are always used carefully and sparingly. They're not the best solution in every case, and often not even the second or third best solution.
You need people with enough experience writing them that they can avoid all the pitfalls and have a good radar for when the templates will complicate things more than helping.
One of the disadvantages I haven't seen mentioned yet is the subtle semantic differences between regular classes and instantiations of class templates. I can think of:
typedefed typenames in ancestor types aren't inherited by template classes.
The need to sprinkle typename and template keywords in appropriate places.
Member function templates cannot be virtual.
These things can usually be overcome, but they're a pain.
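A minimal sketch of the first two quirks (Base and Derived are made-up names):

template <typename T>
struct Base {
    typedef T value_type;
    T value;
};

template <typename T>
struct Derived : Base<T> {
    // 1) A typedef in a dependent base class is not found by unqualified lookup:
    // value_type v;                    // error: unknown identifier
    typename Base<T>::value_type v;     // must name the base and add 'typename'

    // 2) Members of a dependent base need 'this->' (or a using-declaration):
    void set(const T& x) { this->value = x; }
};

Derived<int> d;   // instantiating the class exercises the above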
Some people hate templates (I do) because:
From a maintainability point of view, the wrong use of templates can have a negative effect ten times stronger than the initial time advantage they were supposed to bring.
From an optimization point of view, the compiler optimizations they allow are nothing compared to an optimal algorithm and the use of multithreading.
From a compile-time point of view, the wrong use of templates can have a very negative effect on the parsing, compilation and linking phases, when a poorly written templated declaration brings tons of useless parasite declarations into each compilation unit (this is how 200 lines of code can produce a 1 MB .obj).
To me templates are like a chainsaw with an integrated flame thrower that can also launch grenades. One time in my life I may have a specific need of that. But most of the time, I'm using a regular hammer and a simple saw to build things and I'm doing a pretty good job that way.
Advantage: Generic Datatypes can be created.
Disadvantage: Code Bloating
I don't see how they are hard to read. What is unreadable about
vector <string> names;
for example? What would you replace it with?
Reusable code is written with templates; how they are applied depends on the needs of each project.