There are many places in Boost where I see a templated class and can't help but wonder why the person who wrote it used templates.
For example, the mutex classes. All the mutex concepts are implemented as templates, where one could simply create a few base classes or abstract classes with an interface that matches the concept.
edit after answers: I thought about the cost of virtual functions, but isn't it sometimes worth accepting a very small performance penalty for better understandability? I mean, sometimes (especially with Boost) it's really hard to understand templated code and to decipher the compiler errors that result from misusing templates.
Templates can be highly optimized at compile time, without the need for virtual functions. A lot of template tricks can be thought of as compile-time polymorphism. Since you know at compile time which behaviours you want, why should you pay for a virtual function call every time you use the class? As a bonus, a lot of templated code can easily be inlined to eliminate even the most basic function-call overhead.
In addition, templates in C++ are extremely powerful and flexible: they have been shown to be a Turing-complete language in their own right. There are some things that are easy to do with templates that would require much more work with runtime polymorphism.
Templates allow you to write a generic version of an algorithm, or a generic version of a container. You no longer have to worry about types, and what you produce no longer needs to be tied to a type. Boost is a collection of libraries that tries to address the needs of a wide variety of people using C++ in their day-to-day lives.
Many contemporary talks on C++ are about templates and their use for implementing compile-time polymorphism; virtual functions and run-time polymorphism are hardly discussed.
We can use compile-time polymorphism in many situations. Because it gives us compile-time checks instead of the run-time errors possible with run-time polymorphism, as well as some (usually insignificant) performance benefit, the most widely used libraries nowadays seem to prefer compile-time polymorphism over the run-time kind.
However, it seems to me that compile-time polymorphism implemented with C++ templates results in much less self-documenting and readable code than a hierarchy of virtual types.
As a real-life example, consider boost::iostreams. It implements a stream as a template that accepts a device class as an argument. As a result, the implementation of a specific piece of functionality is divided among many classes and files in different folders, so investigating such code is much more complex than if the streams formed a class hierarchy with virtual functions, as in Java and the .NET Framework. What is the benefit of compile-time polymorphism here? A file stream is something that reads and writes a file; a stream is something that reads and writes anything. It is a classic example of a type hierarchy, so why not use a single FileStream class that overrides some protected functions, instead of dividing semantically united functionality into different files and classes?
Another example is the boost::process::child class. It uses a templated constructor to set up standard I/O and other process parameters. It is not well documented, and it is not obvious from the constructor's prototype what arguments, in what format, this template will accept; member functions similar to SetStandardOutput would be much better self-documented and would result in faster compile times, so what is the benefit of template usage here? Again, I am comparing this implementation to the .NET Framework. For member functions similar to SetStandardOutput, it is enough to read a single header file to understand how to use the class. For the templated constructor of boost::process::child, we have to read many small files instead.
There are many examples similar to this one. For some reason, well-known open-source libraries almost never use virtual class hierarchies and prefer compile-time polymorphism (primarily template-based), as Boost does.
The question: are there any clear guidelines on which to prefer (compile-time or run-time polymorphism) in situations where we could use either?
Generally speaking, in 90% of situations templates and virtual functions are interchangeable.
First of all, we need to clarify what we are talking about. If you "compare" two things, they must be equivalent by some criterion. My understanding of your statement is that you are not comparing virtual functions with templates as such, but within the context of polymorphism!
Your examples are not well selected in that case, and dynamic_cast is more of a "big hammer" out of the toolbox when we are talking about polymorphism.
Your "template example" did not need to use templates, as you have simple overloads which can be used without any templated code at all!
If we are talking about polymorphism and C++, our first choice is between run-time polymorphism and compile-time polymorphism, and for both we have standard solutions in C++. For run time we go with virtual functions; for compile-time polymorphism we have CRTP as a typical implementation, and not "templates" as a general term!
are there any comments or recommendations from the C++ committee or any other authoritative source on when we should prefer the ugly syntax of templates over the much more understandable and compact syntax of virtual functions and inheritance?
The syntax isn't ugly if you are used to it! If we are talking about implementing things with SFINAE, there are some hard-to-understand rules around template instantiation, especially the often-misunderstood deduced context.
But in C++20 we will have concepts, which can replace SFINAE in most contexts, which I believe is a great thing. Writing code with concepts instead of SFINAE makes it more readable, easier to maintain, and a lot easier to extend for new types and "rules".
The standard library is full of templates and has a very limited number of virtual functions. Does that mean we have to avoid virtual functions as much as possible and always prefer templates, even if their syntax for some specific task is much less compact and understandable?
The question suggests you have misunderstood the C++ features. Templates allow us to write generic code, while virtual functions are the C++ tool for implementing run-time polymorphism. There is nothing here that is comparable 1:1.
While reading your example code, I would advise you to think again about your coding style!
If you want to write functions specific to different data types, simply use overloads, as you did in your "template" example, but without the unneeded templates!
If you want to implement generic functions which work on different data types with the same code, use templates, and if some exceptional code is needed for specific data types, use template specialization for the selected code parts.
If you need more selective template code that would otherwise require SFINAE, you should start implementing with C++20 concepts.
If you want to implement polymorphism, decide between run-time and compile-time polymorphism. As already said, virtual functions are the standard C++ tool for the first, and CRTP is one of the standard solutions for the second.
And my personal experience with dynamic_cast is: avoid it! Often it is a first hint that something is broken in your design. That is not a general rule, but a checkpoint at which to think again about the design. In rare cases it is the tool that fits. Also, RTTI is not available for all targets, and it has some overhead. On bare-metal devices/embedded systems you sometimes can't use RTTI, nor exceptions. If your code is intended to be used as a "platform" in your domain and you have the mentioned restrictions, don't use RTTI!
EDIT: Answers from the comments
So, for now, with C++ we can build class hierarchies with run-time polymorphism only.
No! CRTP also builds class hierarchies, but for compile-time polymorphism. The solution is quite different, as you don't have a "common" base class; but since everything is resolved at compile time, there is no technical need for a common base class. You should simply start reading about mixins, maybe here: What are Mixins (as a concept), and about CRTP as one of the implementation methods: CRTP article on Wikipedia.
I don't know how to implement something similar to virtual functions without run-time overhead.
See above: CRTP and mixins implement exactly that, polymorphism without run-time overhead!
Templates give some possibility to do that.
Templates are only the base C++ tool; they are on the same level as loops in C++. It is much too broad to say "templates" in this context.
So, if we need a class hierarchy, does it mean that we have to use it even if it forces us to use fewer compile-time checks?
As said, a class hierarchy is only a part of the solution for the task of implementing polymorphism. Think more in terms of logical things to implement, such as polymorphism, a serializer, or a database, and of the implementation tools, such as virtual functions, loops, stacks, and classes. "Compile-time checks"? In most cases you don't have to write the "checks" yourself. A simple overload is something like a compile-time if/else which "checks" the data type. So simply use it out of the box; neither a template nor SFINAE is needed.
Or do we have to use templates to implement some sort of compile-time class hierarchy, even if it makes our syntax much less compact and understandable?
Already mentioned: template code can be readable! std::enable_if is much easier to read than some hand-crafted SFINAE stuff, even though both use the same C++ template mechanics. And if you get familiar with C++20 concepts, you will see that there is a good chance of writing more readable template code in the upcoming C++ version.
As I understand it, concepts are quite similar to interfaces: like interfaces, concepts allow you to define a set of methods (a concept, an interface) which the implementation expects and needs to perform its task. Both strengthen the focus on semantic needs.
While Bjarne and many other people seem to see concepts as a way to get rid of uses of enable_if and generally complicated templates, I wonder whether it makes sense to use them instead of interfaces/pure abstract classes.
Benefits are obvious:
no runtime cost (v-table)
kind of duck typing, because suitable classes do not have to explicitly implement the interface
even relationships between parameters can be expressed (which interfaces do not support at all)
Of course a disadvantage is not far away:
no template definition checking for concepts, at least for now
…
I wonder if there are more of these, and whether the idea would turn out to make no sense after all.
I know that there are similar questions, but they are not specific about their purpose, nor is this answered in any answer. I also found other people who had the same idea, but at no point does anybody really encourage or discourage this, let alone argue about it.
If you are using abstract classes for their intended purpose, then there is pretty much no way to replace them with concepts. Abstract base classes are for runtime polymorphism: the ability to, at runtime, have the implementation of an interface be decoupled from the site(s) where that interface gets used. You can use user input or data from a file to determine which derived class instance to create, then pass that instance to some other code that uses a pointer/reference to the base class.
Abstract classes are for defining an interface for runtime polymorphism.
A template is instantiated at compile-time. As such, everything about its interface must be verified at compile-time. You cannot vary which implementation of an interface you use for a template; it's statically written into your program, and the template gets instantiated with exactly and only the types you spell out in your code. That's compile-time polymorphism.
Concepts are for defining an interface for compile-time polymorphism. They don't work at runtime.
If you've been using abstract base classes for compile-time polymorphism, then you've been doing the wrong thing, and you should have stopped well before concepts came out.
I realize there are quite a few posts on this subject, but I am having trouble finding the answer to this exact question.
For function calls, which is faster, a pure-virtual interface or a pimpl?
At first glance, it seems to me that the pure-virtual interface would be faster, because using the pimpl would cost two function calls instead of one... or would some kind of clever compiler trick take over in this case?
edit:
I am trying to decide which of these I should use to abstract away the system-dependent portions of a few objects that may end up having to be spawned quite frequently, and in large numbers.
edit:
I suppose it's worth saying at this point that the root of my problem was that I mistook the Abstract Factory design pattern for a method of making my code work on multiple platforms, when its real purpose is switching implementations for a given interface at runtime.
The two options are not equivalent, and they should not be compared on performance, as their focus is different. Even if they were equivalent, the performance difference would be minimal to unimportant in most situations. If you are in the rare case where you know that dispatch is an issue, then you have the tools to measure the difference yourself.
Why do you ask? The question doesn't seem to make sense.
One generally uses virtual functions when one wants polymorphism: when you want them to be overridden in derived classes.
One generally uses pimpl when one wants to remove implementation details from header files.
The two really aren't interchangeable. Off the top of my head, I cannot think of any reasonable situations where you would use one and consider replacing it with the other.
Anyways, that said, for a typical implementation of virtual functions, a function call involves reading the object to find the virtual function table pointer, then reading the virtual function table to find the function pointer, and finally calling the function pointer.
For a class implemented via pimpl, one function call is forced, but it could be absolutely anything 'under the hood'. Despite what you suggest, no second function call is implied by the paradigm.
Finally, don't forget the usual guidelines for optimization apply: you have to actually implement and measure. Trying to "think" up the answer tends to lead to poor results, even from people experienced at this sort of thing.
And, of course, the most important rule of optimization: make sure something matters before you devote a lot of time trying to optimize it. Otherwise, you are going to wind up wasting a lot of time and energy.
I've been flicking through the book Modern C++ Design by Andrei Alexandrescu and it seems interesting stuff. However it makes very extensive use of templates and I would like to find out if this should be avoided if using C++ for mobile platform development (Brew MP, WebOS, iOS etc.) due to size considerations.
In Symbian OS C++ the standard use of templates is discouraged. Symbian OS itself uses them, but via an idiom known as thin templates, where the underlying implementation is done in a C style using void* pointers, with a thin template layered on top to achieve type safety.
The reason they use this idiom, as opposed to regular use of templates, is specifically to avoid code bloat.
So what are opinions (or facts) on the use of templates when developing applications for mobile platforms.
Go ahead and use templates wherever they make your code easier to understand and to maintain. Avoidance of templates on mobile platforms can be categorized as "premature optimization".
If you run into executable-size issues, then redesign if necessary, but don't start with the assumption that templates will cause problems before you see any actual problems.
A lot of the stuff in "Modern C++ Design" and similar books is not going to lead to bloated code, because so much of it is really designed to ensure type safety and do compile-time metaprogramming magic, rather than to generate code.
Templates can be used to do a lot of different things. They can generate more code than you expect, but that's not a reason to ban their use. It wasn't so long ago that various authorities recommended avoiding exceptions, virtual functions, floating-point math, and even classes due to concerns about code size and performance, but people did those things, and somehow everything worked out fine.
Templates don't necessarily lead to code bloat. If you write a function or class template and instantiate it for a dozen different types then yes, you get a lot of duplicate code generated (probably, anyway. Some compilers can merge identical instantiations back together).
But if a template is instantiated for one type only, then there is zero cost in code size. If you instantiate it a couple of times, you pay a certain cost, but you'd also end up paying if you used any of the other ways to achieve the same thing. Dynamic polymorphism (virtual functions and inheritance) isn't free either. You pay for that in terms of vtables, code generated to facilitate all the type casts and conversions necessary, and simply because of code that can't be inlined or optimized away.
Taking std::vector as an example, then yes, if you use both vector<int> and vector<float>, you get two copies of some of the code. But with templates, only the code that is actually used gets compiled. The member functions that you never call won't generate any code, and even in the functions that are compiled, the compiler may be able to eliminate a lot of code. For example, for certain types, exception handling code may be unnecessary, so the compiler can eliminate it, yielding smaller code than if you'd used dynamic polymorphism, where the compiler would've been unable to make any assumptions about the type being stored. So in this made-up example, you'd get some code generated for both vector<int> and vector<float>, but each of them is going to be a lot smaller than a polymorphic vector as you might find in Java, for example.
The main problem with using templates is that it requires a compiler which supports it. On a PC, that's no problem. On any other platform which has a mature C++ compiler available, it's no problem.
But not all platforms have a modern heavy-duty C++ compiler available. Some don't support a number of advanced features, and some are just not good enough at the optimizations required to make template code work (templates tend to require a lot of inlining, for example). So on some platforms, it may be best to avoid templates. Not because of any concern for code size, but because the compiler may be unable to handle it.
In my personal experience using (and even abusing) templates very rarely result in large code bloat, and compiling with -Os will help a lot.
It's not that common to see huge template classes duplicated (instantiated) many times, both because classes are rarely huge and because in most cases you only instantiate templates with a few different arguments, not hundreds. Besides, it's easy to reuse some common code in your biggest template classes/functions, and the compiler will help you in doing this.
Usually size of data (graphics, audio, ...) is orders of magnitude bigger than the code. So I wouldn't worry.
Of course there could be exceptions to what I said, but I guess they'll mostly be about advanced (special / weird / complicated) stuff, not with the most common everyday classes.
Summarizing my suggestion: use templates as much as you want; if something goes wrong you'll find out by profiling, and you will easily be able to optimize the size.
Whatever you do, do NOT just try writing some code, compiling it, and comparing executable size or code duplication.
I would say that generally (not just related to mobile development) this advice holds. Some of the techniques described in Modern C++ Design can lead to long build times and code bloat.
This is especially true when dealing with "obscure" devices like cell phones. Many template techniques rely on the compiler and linker doing a perfect job of eliminating unused/duplicate code. If they don't, you risk having hundreds of duplicate std::vector instantiations scattered all over your code. And trust me, I have seen this happen.
This is not to say Modern C++ Design is a bad book, or that templates are bad. But especially on embedded projects it's best to watch out, because it can bite.
Some of the disadvantages would be
its syntax is complex
compiler generates extra code
They are hard to validate. Template code that doesn't get used tends not to be compiled at all, so good coverage of test cases is a must. But testing is time-consuming, and then it may turn out the code never needed to be robust in the first place.
Hmm, how about...
3: They can be slow to compile
4: They force things to be calculated at compile time rather than run time (this can also be an advantage, if you prefer fast execution speed over runtime flexibility)
5: Older C++ compilers don't handle them, or don't handle them correctly
6: The error messages that they generate when you don't get the code right can be nearly incomprehensible
Templates expose your implementation to the clients of your code, which makes maintaining your ABI harder if you pass templated objects at library boundaries.
So far no-one seems to have mentioned the main disadvantage I find with templates: code readability plummets!
I'm not referring to syntax issues -- yes the syntax is ugly, but I can forgive that. What I mean is this: I find that with never-seen-before non-templated code, however large the application is, if I start at main() I can usually decode the broad strokes of what a program is doing without problems. And code that merely uses vector<int> or similar doesn't bother me in the slightest. But once code starts to define and use its own templates for purposes beyond simple container types, understandability rapidly goes out the window. And that has very negative implications for code maintenance.
Part of that is unavoidable: templates afford greater expressiveness via the complicated partial-order overload resolution rules (for function templates) and, to a lesser degree, partial specialisation (for class templates). But the rules are so damn complicated that even compiler writers (who I'm happy to acknowledge as being an order of magnitude smarter than I am) are still getting them wrong in corner cases.
The interaction of namespaces, friends, inheritance, overloading, automatic conversions and argument-dependent lookup in C++ is already complicated enough. But when you add templates into the mix, as well as the slight changes to rules for name lookup and automatic conversions that they come with, the complexity can reach proportions that, I would argue, no human can deal with. I just don't trust myself to read and understand code that makes use of all these constructs.
An unrelated difficulty with templates is that debuggers still have difficulty showing the contents of STL containers naturally (as compared to, say, C-style arrays).
The only real disadvantage is that if you make any tiny syntax error in a template (especially one used by other templates), the error messages are not going to be helpful... expect a couple of pages of almost-unusable error messages ;-). Compiler defects are very compiler-specific, and the syntax, while ugly, is not really "complex". All in all, though, despite the huge issue with proper error diagnostics, templates are still the single best thing about C++, the one thing that might well tempt you to use C++ over other languages with inferior implementations of generics, such as Java...
They're complicated for the compiler to parse, which means your compilation time will increase. It can also be hard to parse compiler error messages if you have advanced template constructions.
Fewer people understand them, especially at the level of metaprogramming, therefore fewer people can maintain them.
When you use templates, your compiler generates only what you actually use. I don't think there are any disadvantages to using C++ template metaprogramming except the compile time, which can be quite long if you use very complex structures, as the boost or Loki libraries do.
A disadvantage: template errors are only detected by the compiler when the template is instantiated. Sometimes, errors in the methods of a template are only detected when the member method itself is instantiated, regardless of whether the rest of the template is instantiated.
If I have an error in a method of a template class that only one function references, and other code uses the template without calling that method, the compiler will not report an error until the erroneous method is instantiated.
The absolute worst: The compiler error messages you get from bad template code.
I have used templates occasionally over the years. They can be handy, but from a professional perspective I am leaning away from them. Two of the reasons are:
1.
The need to either a) expose the function definitions (not only the declarations), i.e. the "source" code, to the code where they are used, or b) create a dummy instantiation in the source file. This is needed for compilation. Option a) can be done by defining the functions in the header or by actually including the .cpp file.
One of the reasons that we tolerate headers in C++ (compared to C#, for example) is the separation of "interface" from "implementation". Well, templates seem to be inconsistent with this philosophy.
2.
Functions called on a template type parameter are not enforced at the template's definition; errors surface only at instantiation, sometimes as late as link errors. E.g. T example; example.CompilerDoesntKnowIfThisFunctionExistsOnT();
This is "loose" IMHO.
Solutions:
Rather than templates, I lean towards using a base class, whereby the derived/container classes know what is available at compile time. The base classes can provide the generic methods and "types" that templates are often used for. This is why source-code availability can be helpful: existing code can be modified to insert a generic base class into the inheritance hierarchy where needed. Otherwise, if the code is closed-source, rewrite it better using generic base classes instead of using a template as a workaround.
If the type is unimportant, e.g. vector<T>, then how about just using "object"? C++ has not provided an "object" keyword, and I have proposed to Dr. Bjarne Stroustrup that this would be helpful, especially to tell the compiler and people reading the code that the type is not important (for cases when it isn't). I don't think that C++11 has this; perhaps C++14 will?