I know that Function Templates are used to make functions generic, so that they can be used with any data type.
Explicit Specialization of a template is done when we have a more efficient implementation for a specific data type.
But instead of Explicit Specialization we could also just write a Nontemplate Function and call it from main.
This would save us some processing time, since the compiler would locate a Nontemplate Function faster than an Explicitly Specialized Template Function, which would in turn be better in terms of efficiency.
So why do we use Explicit Specialization when we have the alternative of just calling Nontemplate Functions?
Please correct me if I'm wrong!
Edit 1:
I was told by my professor that whenever we write a function template and call the function from main, the compiler first looks for an already-generated (templated) function, and if it's not able to locate one, it then searches for a function template from which it generates a templated function and then calls it.
This would save us some processing time, since the compiler would locate Global Functions faster than Explicitly Specialized Template Functions, which would in turn be better in terms of efficiency.
Why would the compiler find a nontemplate function faster than a function template specialization? Have you benchmarked compiler performance to verify this statement? If you use a function named f, the compiler always has to build a set of candidate functions and perform overload resolution to determine the correct function to be used.
At runtime (which is when performance really matters, right?) the performance of calling a function template instantiation should be no better than the performance of calling a nontemplate function.
So why do we use Explicit Specialization when we have the alternative of just calling Global Functions?
In the general case, for function templates, you don't use explicit specialization, because it's usually confusing and difficult to get right. The ISO C++ standard even contains a famously poetic note warning you to be extremely careful about where you place explicit specializations. Read Herb Sutter's "Why Not Specialize Function Templates?" for a good explanation of the issues and of why you usually don't want to specialize function templates.
It sounds like you're confusing compile-time efficiency with run-time efficiency. The choice of which function to call is made at compile time, not run time, so it will make no difference to the run time of the program.
Explicit Specialization is used when you have a special case that can benefit from special treatment. Sometimes this backfires, as in the case of std::vector<bool>, while other times it's quite handy. It means that the user of the function doesn't need to be aware that there's a special case; it's just transparent.
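To make the trade-off concrete, here is a minimal sketch (the function name print and the bodies are made up for illustration): a primary template, an explicit specialization of it, and a plain nontemplate overload. Overload resolution prefers the nontemplate function when it matches, and the specialization is only used when the primary template is selected.

#include <iostream>

// Primary template: works for any type.
template <typename T>
void print(T value) { std::cout << "template: " << value << '\n'; }

// Explicit specialization for const char*: as far as overload resolution
// is concerned, this is still "the template".
template <>
void print<const char*>(const char* value) { std::cout << "specialization: " << value << '\n'; }

// Plain nontemplate overload for int.
void print(int value) { std::cout << "nontemplate: " << value << '\n'; }

int main() {
    print(3.14);     // uses the primary template (T = double)
    print("hello");  // the template wins, so the const char* specialization runs
    print(42);       // the nontemplate overload wins over the template
}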
For reasons of uniformity. The person using the API just calls functions with particular arguments; some calls get the generic function, some get explicitly specialised functions. The client need neither know nor care which they use.
Related
Let's say I have a class named ClothingStore. That class has 3 member functions that point a visitor to the right department of the store. The member functions are ChildrenDept, MenDept and WomenDept, depending on whether the visitor is a child, a man or a woman.
Function overloading can be used to make 3 functions that have the same name, say, PointToDept, but take different input arguments (child, man, woman).
What is actually happening at run time when the program is executing?
My guess is that the compiler adds switch statements to the program to select the right member function. But that makes me wonder - is there any benefit in terms of program performance when using overloaded functions, instead of making your own function with switch statements? Again, my only conclusion on that part is code readability. Thank you.
My guess is that compiler adds switch statements to the program, to select the right member function.
That's a bad guess. C++ is a statically typed language. The type of a variable does not change at runtime. This means the decision as to which non-polymorphic overload to call is one that can always be made at compile time. Section 13.3 in the standard, Overload resolution, ensures that this is the case. There's no reason to have a runtime decision when that decision can be made at compile time. The runtime cost of having a non-polymorphic overloaded function in most implementations is zero. The only exception might be a C++ interpreter.
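To make that concrete, here is a minimal sketch of the overload set the question describes (the class and function names come from the question; the bodies are made up):

#include <iostream>

struct Child {};
struct Man {};
struct Woman {};

class ClothingStore {
public:
    // Three overloads; which one is called is decided at compile time,
    // from the static type of the argument.
    void PointToDept(const Child&) { std::cout << "Children's department\n"; }
    void PointToDept(const Man&)   { std::cout << "Men's department\n"; }
    void PointToDept(const Woman&) { std::cout << "Women's department\n"; }
};

int main() {
    ClothingStore store;
    Man visitor;
    store.PointToDept(visitor);  // resolved at compile time to PointToDept(const Man&)
}

There is no runtime dispatch here; the generated code calls the chosen overload directly, exactly as if the three functions had different names.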
How does function overloading work at run-time
It doesn't. It works at compile-time. A call to an overloaded function is no different at runtime from a call to a non-overloaded function.
and why overload? ... is there any benefit in terms of program performance when using overloaded functions, instead of making your own function with switch statements?
Yes. There is no runtime overhead at all, compared with 'making your own function with switch statements'.
From Gene's comment:
The compiler sees three different functions just as though they had been differently named.
In the case of most compilers, they really are differently named at the object-code level. This is done by name mangling: the compiler encodes the parameter types (and any enclosing class or namespace) into the symbol name, so each overload gets its own distinct linker-level name.
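For example, with a compiler that uses the Itanium C++ ABI (GCC, Clang), two overloads become two distinct symbols; the exact spelling is implementation-specific, but typical mangled names look like this:

void f(int);     // mangled as _Z1fi
void f(double);  // mangled as _Z1fd

// At the linker level these are simply two differently named functions.
// Note that for ordinary (non-template) functions the return type is not
// part of the mangled name; the parameter types and enclosing scopes are.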
Why are those C++11 new functions of header <string> (stod, stof, stoull) not member functions of the string class ?
Isn't it more in the spirit of C++ to write mystring.stod(...) rather than stod(mystring, ...)?
It is a surprise to many, but C++ is not an Object-Oriented language (unlike Java or C#).
C++ is a multi-paradigm language, and therefore tries to use the best tool for the job whenever possible. In this instance, a free-function is the right tool.
Guideline: Prefer non-member non-friend functions to member functions (from Effective C++, Item 23)
Reason: a member function or friend function has access to the class internals whereas a non-member non-friend function does not; therefore using a non-member non-friend function increases encapsulation.
Exception: when a member function or friend function provides a significant advantage (such as performance), then it is worth considering despite the extra coupling. For example even though std::find works really well, associative containers such as std::set provide a member-function std::set::find which works in O(log N) instead of O(N).
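A small sketch of that exception in practice:

#include <algorithm>
#include <set>

int main() {
    std::set<int> s{1, 2, 3, 4, 5};

    // Generic non-member algorithm: walks the elements one by one, O(N).
    auto slow = std::find(s.begin(), s.end(), 4);

    // Member function: uses the set's internal ordering, O(log N).
    auto fast = s.find(4);

    return (slow != s.end() && fast != s.end()) ? 0 : 1;
}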
The fundamental reason is that they don't belong there. They don't really have anything to do with strings. Stop and think about it. User-defined types should follow the same rules as built-in types, so every time you defined a new user type, you'd have to add a function to std::string. This would actually be possible in C++: if std::string had a member function template to, without a generic implementation, you could add a specialization for each type and call str.to<double>() or str.to<MyType>(). But is this really what you want? It doesn't seem like a clean solution to me, having everyone who writes a new class add a specialization to std::string. Putting this sort of thing in the string class bastardizes it, and is really the opposite of what OO tries to achieve.
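A sketch of what that hypothetical member template could look like (this is not part of std::string; my_string and its to member are made up purely to illustrate the point):

#include <string>

// Hypothetical string class with a conversion member template that has
// no generic implementation, only per-type specializations.
struct my_string {
    std::string data;

    template <typename T>
    T to() const;  // deliberately left without a generic definition
};

// Every type that wants to be convertible has to reach back into the
// string class and add a specialization:
template <>
double my_string::to<double>() const { return std::stod(data); }

template <>
int my_string::to<int>() const { return std::stoi(data); }

int main() {
    my_string s{"3.14"};
    double d = s.to<double>();  // the str.to<double>() syntax described above
    return d > 3.0 ? 0 : 1;
}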
If you were to insist on pure OO, they would have to be members of double, int, etc. (A constructor, really. This is what Python does, for example.) C++ doesn't insist on pure OO, and doesn't allow basic types like double and int to have members or special constructors. So free functions are both an acceptable solution, and the only clean solution possible in the context of the language.
FWIW: conversion to and from textual representation is always a delicate problem: if I do it in the target type, then I've introduced a dependency on the various sources and sinks of text in the target type, and these can vary in time. If I do it in the source or sink type, I make them dependent on the type being converted, which is even worse. The C++ solution is to define a protocol (in std::streambuf), where the user writes a new free function (operator<< and operator>>) to handle the conversions, and counts on operator overload resolution to find the correct function. The advantage of the free-function solution is that the conversions are part of neither the data type (which thus doesn't have to know of sources and sinks) nor the source or sink type (which thus doesn't have to know about user-defined data types). It seems like the best solution to me. And functions like stod are just convenience functions, which make one particularly frequent use easier to write.
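A minimal sketch of that protocol for a user-defined type (the Point type and its members are made up):

#include <iostream>
#include <sstream>

struct Point { double x, y; };

// Free functions: neither Point nor the stream classes need to know about
// each other's internals; overload resolution finds the right function.
std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << p.x << ' ' << p.y;
}

std::istream& operator>>(std::istream& is, Point& p) {
    return is >> p.x >> p.y;
}

int main() {
    std::istringstream in("1.5 2.5");
    Point p;
    in >> p;                 // extraction via the free operator>>
    std::cout << p << '\n';  // insertion via the free operator<<
}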
Actually, they are just utility functions, and they don't need to be inside the string class. Similar utility functions such as atoi and atof are declared in stdlib.h (but for char*), and they too are standalone functions.
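For comparison, a minimal sketch of the two families of utility functions:

#include <cstdlib>   // std::atof: works on char*, no error reporting
#include <string>    // std::stod: works on std::string, throws on invalid input
#include <iostream>

int main() {
    std::string s = "3.14";
    double a = std::atof(s.c_str());
    double b = std::stod(s);
    std::cout << a << ' ' << b << '\n';  // both print 3.14
}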
When you use a template with numerous methods (like vector) and compile your code, will the compiler discard the code from the unused methods?
A template is not instantiated unless it is used, so there is actually no code to discard.
The standard says (14.7.1/10)
An implementation shall not implicitly instantiate a function template, a member template, a non-virtual member function, a member class, or a static data member of a class template that does not require instantiation. It is unspecified whether or not an implementation implicitly instantiates a virtual member function of a class template if the virtual member function would not otherwise be instantiated. The use of a template specialization in a default argument shall not cause the template to be implicitly instantiated except that a class template may be instantiated where its complete type is needed to determine the correctness of the default argument. The use of a default argument in a function call causes specializations in the default argument to be implicitly instantiated.
So if you can avoid making the template's member functions virtual, the compiler will not generate any code for them (and that might work for virtual functions as well, if the compiler is smart enough).
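A small sketch of that rule in action (the Wrapper and NoOps names are made up): a member function that would not even compile for a given type is fine as long as it is never used.

template <typename T>
struct Wrapper {
    T value;

    // Only valid for types that support operator+.
    T doubled() const { return value + value; }

    // Only valid for types that support operator<.
    bool less_than(const T& other) const { return value < other; }
};

struct NoOps {};  // supports neither + nor <

int main() {
    Wrapper<NoOps> w{};  // fine: member functions are instantiated only when used
    (void)w;
    Wrapper<int> n{21};
    return n.doubled() == 42 ? 0 : 1;  // instantiates only Wrapper<int>::doubled
}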
It depends on your optimization level. At higher optimization settings, yes, dead code elimination will most likely occur.
The compiler, optimizers, and the linker can omit and/or reduce that information. Each mature tool likely has options specific to dead code elimination.
With templates, the code may not really be created in the first place (unless instantiated).
Certainly not all of it will be removed in every scenario, however (RTTI is a silent killer). A bit of caution and testing with your build settings can go a long way toward reducing binary sizes and dead code.
Smart compilers will most likely exclude it. A long time ago, when I played with Borland C++ Builder, I think it did not throw out unused template class methods. I can't confirm that, though.
In my code I'm adopting a design strategy which is similar to some standard library algorithms in that the exact behavior can be customized by a function object. The simplest example is std::sort, where a function object can control how the comparison is made between objects.
I notice that Visual C++ provides two implementations of std::sort, which naturally involves code duplication. I would have imagined that it was instead possible to have only one implementation and provide a default comparator (using operator<) as a default template parameter.
What is the rationale behind two separate versions? Would my suggestion make the interface more complex in some way? Or result in confusing error messages when the object does not provide operator<? Or maybe it just doesn't work?
Thanks,
David
Because function templates are not allowed by the standard to have default type arguments.
This, however, was amended in C++11, and now function templates can have default type arguments.
Prior to C++11, a function template could not have default template arguments, and a template argument cannot be deduced from a default function argument, so there was no way to make this work.
In C++11, which supports default template arguments for function templates, you could use a single function template, but changing it now would break backwards compatibility with older C++ code that relies on the functions having a particular type.
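A sketch of what the single-implementation version could look like with a C++11 default template argument (my_sort is a made-up name; this is not how the standard library actually declares std::sort):

#include <algorithm>
#include <functional>  // std::less, std::greater
#include <iterator>    // std::iterator_traits
#include <vector>

// One implementation; the comparator defaults to operator< via std::less.
template <typename Iter,
          typename Compare = std::less<typename std::iterator_traits<Iter>::value_type>>
void my_sort(Iter first, Iter last, Compare comp = Compare{}) {
    std::sort(first, last, comp);  // delegate to std::sort for brevity
}

int main() {
    std::vector<int> v{3, 1, 2};
    my_sort(v.begin(), v.end());                       // uses std::less<int>
    my_sort(v.begin(), v.end(), std::greater<int>{});  // explicit comparator
}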
The key reason this works is that for_each() doesn’t actually assume its third argument to be a function. It simply assumes that its third argument is something that can be called with an appropriate argument. A suitably defined object serves as well as – and often better than – a function. For example, it is easier to inline the application operator of a class than to inline a function passed as a pointer to function. Consequently, function objects often execute faster than do ordinary functions. An object of a class with an application operator (§11.9) is called a function-like object, a functor, or simply a function object.
[Stroustrup, The C++ Programming Language, 3rd edition, §18.4, last paragraph]
I always thought that an operator() call is just like a function call at runtime. How does it differ from a normal function call?
Why is it easier to inline the application operator than a normal function?
How are they faster than a function call?
Generally, functors are passed to templated functions - if you're doing so, then it doesn't matter if you pass a "real" function (i.e. a function pointer) or a functor (i.e. a class with an overloaded operator()). Essentially, both have a function call operator and are thus valid template parameters for which the compiler can instantiate the for_each template. That means for_each is either instantiated with the specific type of the functor passed, or with the specific type of function pointer passed. And it's in that specialization that it is possible for functors to outperform function pointers.
After all, if you're passing a function pointer, then the compile-time type of the argument is just that - a function pointer. If for_each itself is not inlined, then this particular for_each instance is compiled to call an opaque function pointer - after all, how could the compiler inline a function pointer? It just knows its type, not which function of that type is actually passed - at least, unless it can use non-local information when optimizing, which is harder to do.
However, if you pass a functor, then the compile-time type of that functor is used to instantiate the for_each template. In doing so, you're probably passing a simple, non-virtual class with only one implementation of the appropriate operator(). So, when the compiler encounters a call to operator() it knows exactly which implementation is meant - the unique implementation for that functor - and now it can inline that.
If your functor uses virtual methods, the potential advantage disappears. And, of course, a functor is a class with which you can do all kinds of other inefficient things. But for the basic case, this is why it's easier for the compiler to optimize & inline a functor call than a function pointer call.
Summary
Function pointers can't be inlined since while compiling for_each the compiler has only the type of
the function and not the identity of the function. By contrast, functors can be inlined since even though the compiler only has the type of functor, the type generally suffices to uniquely identify the functor's operator() method.
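A sketch of the two cases (add_one_fn and AddOne are made-up names):

#include <algorithm>
#include <vector>

void add_one_fn(int& x) { x += 1; }  // passed by pointer: opaque inside for_each

struct AddOne {
    void operator()(int& x) const { x += 1; }  // known from the functor's type
};

int main() {
    std::vector<int> v{1, 2, 3};

    // Instantiates for_each<..., void(*)(int&)>: the call goes through a
    // function pointer, which is hard to inline without extra analysis.
    std::for_each(v.begin(), v.end(), &add_one_fn);

    // Instantiates for_each<..., AddOne>: the call is AddOne::operator(),
    // whose definition the compiler can see and inline directly.
    std::for_each(v.begin(), v.end(), AddOne{});
}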
Consider the two following template instantiations:
std::for_each<class std::vector<int>::const_iterator, class Functor>(...)
and
std::for_each<class std::vector<int>::const_iterator, void(*)(int)>(...)
Because the 1st is customised for each type of function object, and because operator() is often defined inline, the compiler may, at its discretion, choose to inline the call.
In the 2nd scenario, the compiler will instantiate the template once for all functions of the same signature, therefore, it cannot easily inline the call.
Now, smart compilers may be able to figure out which function to call at compile time, especially in scenarios like this:
std::for_each(v.begin(), v.end(), &foo);
and still inline the function by generating custom instantiations instead of the single generic one mentioned earlier.
I always thought that the operator() call is just like a function call at runtime. How does it differ from a normal function call?
My guess is not very much. For evidence of this, look at your compiler's assembly output for each. Assuming the same level of optimization, it's likely to be nearly identical. (With the additional detail that the this pointer will have to get passed.)
Why is it easier to inline the application operator than a normal function?
To quote the blurb you quoted:
For example, it is easier to inline the application operator of a class than to inline a function passed as a pointer to function.
I am not a compiler person, but I read this as: If the function is being called through a function pointer, it's a hard problem for the compiler to guess whether the address stored in that function pointer will ever change at runtime, therefore it's not safe to replace the call instruction with the body of the function; come to think of it, the body of the function itself wouldn't necessarily be known at compile time.
How are they faster than a function call?
In many circumstances I'd expect you wouldn't notice any difference. But, given your quotation's argument that the compiler is free to do more inlining, this could produce better code locality and fewer branches. If the code is called frequently, this would produce a notable speedup.