compile time polymorphism and runtime polymorphism - c++

I noticed that in some places polymorphism refers just to virtual functions, while in other places it includes function overloading and templates. Later, I found there are two terms, compile-time polymorphism and run-time polymorphism. Is that true?
My question is: when we talk about polymorphism in general, what is the widely accepted meaning?

Yes, you're right: in C++ there are two recognized "types" of polymorphism, and they mean pretty much what you think they mean.
Dynamic polymorphism
is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing: either deriving from a base class and overriding one or more virtual functions, or implementing an interface (which in C++ is done by overriding the virtual functions of an abstract base class).
Static polymorphism
takes place at compile time, and could be considered a variation of duck typing. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this:
template <typename T>
T add(const T& lhs, const T& rhs) { return lhs + rhs; }
If this had been dynamic polymorphism, then we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implements that interface (or derives from that base class) can be used despite its different implementation, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.
In that case, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism.
Here, though, we don't require you to derive from anything; the type T just has to define the + operator. The concrete type is then substituted in statically. So at compile time, we can switch between any valid types as long as they behave the same (meaning that they define the members we need).
This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept.
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading.
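For contrast, here is a minimal sketch (not part of the original answer) of what the dynamic version described above might look like; the IAddable interface and the Integer/addThem names are made up purely for illustration:
#include <memory>

// Hypothetical "IAddable" interface from the discussion above.
struct IAddable {
    virtual ~IAddable() = default;
    virtual std::unique_ptr<IAddable> add(const IAddable& other) const = 0;
};

// One concrete implementation; callers only ever see IAddable.
struct Integer : IAddable {
    int value;
    explicit Integer(int v) : value(v) {}
    std::unique_ptr<IAddable> add(const IAddable& other) const override {
        // For brevity we simply assume the other operand is also an Integer.
        return std::make_unique<Integer>(value + static_cast<const Integer&>(other).value);
    }
};

// Works with any IAddable; the concrete type is only known at run time.
std::unique_ptr<IAddable> addThem(const IAddable& lhs, const IAddable& rhs) {
    return lhs.add(rhs);
}

int main() {
    Integer a(2), b(3);
    std::unique_ptr<IAddable> sum = addThem(a, b);  // dispatched at run time
}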
However, when a C++ programmer just says polymorphism, they generally mean dynamic/runtime polymorphism.
(Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates)

"Polymorphism" literally means "many forms". The term is unfortunately a bit overloaded in computer science (excuse the pun).
According to FOLDOC, polymorphism is "a concept first identified by Christopher Strachey (1967) and developed by Hindley and Milner, allowing types such as list of anything."
In general, it's "a programming language feature that allows values of different data types to be handled using a uniform interface", to quote Wikipedia, which goes on to describe two main types of polymorphism:
Parametric polymorphism is when the same code can be applied to multiple data types. Most people in the object-oriented programming community refer to this as "generic programming" rather than polymorphism. Generics (and to some extent templates) fit into this category.
Ad-hoc polymorphism is when different code is used for different data-types. Overloading falls into this category, as does overriding. This is what people in the object-oriented community are generally referring to when they say "polymorphism". (and in fact, many mean overriding, not overloading, when they use the term "polymorphism")
For ad-hoc polymorphism there's also the question of whether the resolution of implementation code happens at run-time (dynamic) or compile-time (static). Method overloading is generally static, and method overriding is dynamic. This is where the terms static/compile-time polymorphism and dynamic/run-time polymorphism come from.
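A small sketch (made up for this discussion, not from the quoted sources) showing both kinds of ad-hoc resolution side by side:
#include <iostream>

// Ad-hoc polymorphism resolved statically: overloading.
void describe(int)    { std::cout << "an int\n"; }
void describe(double) { std::cout << "a double\n"; }

// Ad-hoc polymorphism resolved dynamically: overriding.
struct Animal { virtual void speak() const { std::cout << "...\n"; } virtual ~Animal() = default; };
struct Dog : Animal { void speak() const override { std::cout << "woof\n"; } };

int main() {
    describe(42);    // overload chosen at compile time
    describe(3.14);  // overload chosen at compile time

    Dog d;
    Animal& a = d;
    a.speak();       // override chosen at run time: prints "woof"
}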

Usually people are referring to run-time polymorphism, in my experience.

When a C++ programmer says "polymorphism" he most likely means subtype polymorphism, which means "late binding" or "dynamic binding" with virtual functions. Function overloading and generic programming are both instances of polymorphism and they do involve static binding at compile time, so they can be referred to collectively as compile-time polymorphism. Subtype polymorphism is almost always referred to as just polymorphism, but the term could also refer to all of the above.

In its most succinct form, polymorphism means the ability of one type to appear as if it is another type.
There are two main types of polymorphism.
Subtype polymorphism: if D derives from B then D is a B.
Interface polymorphism: if C implements an interface I, then C can be used as an I.
The first is what you are thinking of as runtime polymorphism. The second does not really apply to C++ and is really a concept that applies to Java and C#.
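A minimal sketch of the first kind, using the B and D names from the description above (the function names are invented for the example):
struct B {
    virtual void f() const {}
    virtual ~B() = default;
};

struct D : B {
    void f() const override {}
};

void use(const B& b) { b.f(); }  // accepts anything that "is a" B

int main() {
    D d;
    use(d);  // fine: D derives from B, so D is a B; the call dispatches to D::f
}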
Some people do think of overloading in the special case of operators (+, -, /, *) as a type of polymorphism because it allows you to think of types that have overloaded these operators as replaceable for each other (i.e., + for string and + for int). This concept of polymorphism most often applies to dynamic languages. I consider this an abuse of the terminology.
As for template programming, you will see some use the term "polymorphism" but this is really a very different thing than what we usually mean by polymorphism. A better name for this concept is "generic programming" or "genericity."


A simple explanation of compile-time polymorphism and run-time polymorphism, from questionscompiled.com:
Compile-time polymorphism:
C++ supports polymorphism: one function name, many purposes; in short, many functions having the same name but different function bodies.
For every function call, the compiler binds the call to one function definition at compile time. This decision among several functions with the same name is made by considering the formal arguments of the call: their number, data types, and order.
Run-time polymorphism:
C++ allows binding to be delayed until run time. Suppose a base class and a derived class both declare a member function with the same name, the same number of arguments, and the same argument types in the same order; a call of the form base_class_type_ptr->member_function(args); will then always call the base class member function. The keyword virtual on the member function in the base class tells the compiler to delay the binding until run time.
Every class with at least one virtual function has a vtable that enables binding at run time. Based on the object the base class pointer actually points to, the call is dispatched to the correct derived (or base) class member function.
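The difference described above can be illustrated with a small sketch (the class and function names are made up for the example):
#include <iostream>

struct Base {
    void bound_early() const        { std::cout << "Base::bound_early\n"; }
    virtual void bound_late() const { std::cout << "Base::bound_late\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    void bound_early() const         { std::cout << "Derived::bound_early\n"; } // hides, does not override
    void bound_late() const override { std::cout << "Derived::bound_late\n"; }
};

int main() {
    Derived d;
    Base* p = &d;          // base-class pointer to a derived object
    p->bound_early();      // bound at compile time: prints "Base::bound_early"
    p->bound_late();       // bound at run time via the vtable: prints "Derived::bound_late"
}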

Yes, you are basically right. Compile-time polymorphism is the use of templates (whose instances' types vary, but are fixed at compile time), whereas run-time polymorphism refers to the use of inheritance and virtual functions (whose instances' types vary and are determined at run time).

Related

Difference between runtime and compile time polymorphism in c++

I'm confused between these two kinds of polymorphism; please help me out by giving simple examples, as I'm totally new to C++. Give me some basic idea only.
Polymorphism means writing general code to work with different objects without knowing their exact types.
Static binding is a property that allows the compiler to resolve the call at compile time. But there can be static binding without polymorphism. Compile-time polymorphism is implemented using function and operator overloading, where the compiler has all the prior knowledge about the data types and the number of arguments, so it can select the appropriate function at compile time.
Dynamic binding is a property which allows the decision about the type to be made at run time. But there can be dynamic binding without polymorphism. If dynamic binding is used to write general code which works with objects of several classes in a hierarchy, then it is dynamic polymorphism. Run-time polymorphism is implemented by virtual functions (a member function declared in the base class using the keyword virtual and redefined with the same signature by its derived classes).

When to use template vs inheritance

I've been looking around for this one, and the common response to this seems to be along the lines of "they are unrelated, and one can't be substituted for the other". But say you're in an interview and get asked "When would you use a template instead of inheritance and vice versa?"
The way I see it is that templates and inheritance are literally orthogonal concepts: inheritance is "vertical" and goes down, from the abstract to the more and more concrete. A shape, a triangle, an equilateral triangle.
Templates, on the other hand, are "horizontal" and define parallel instances of code that know nothing of each other. Sorting integers is formally the same as sorting doubles and sorting strings, but those are three entirely different functions. They all "look" the same from afar, but they have nothing to do with each other.
Inheritance provides runtime abstraction. Templates are code generation tools.
Because the concepts are orthogonal, they may happily be used together to work towards a common goal. My favourite example of this is type erasure, in which the type-erasing container contains a virtual base pointer to an implementation class, but there are arbitrarily many concrete implementations that are generated by a template derived class. Template code generation serves to fill an inheritance hierarchy. Magic.
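A rough sketch of that type-erasure pattern, for illustration only (the names Printable, Concept and Model are invented, not a standard API):
#include <iostream>
#include <memory>
#include <string>

// A tiny type-erasing wrapper in the spirit described above.
class Printable {
    struct Concept {                       // runtime abstraction (inheritance)
        virtual void print() const = 0;
        virtual ~Concept() = default;
    };
    template <typename T>
    struct Model : Concept {               // one concrete class generated per T (templates)
        T value;
        explicit Model(T v) : value(std::move(v)) {}
        void print() const override { std::cout << value << '\n'; }
    };
    std::unique_ptr<Concept> impl;
public:
    template <typename T>
    Printable(T v) : impl(std::make_unique<Model<T>>(std::move(v))) {}
    void print() const { impl->print(); }
};

int main() {
    Printable a = 42;                      // instantiates Model<int>
    Printable b = std::string("hello");    // instantiates Model<std::string>
    a.print();
    b.print();
}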
The "common response" is wrong. In "Effective C++," Scott Meyers says in Item 41:
Item 41: Understand implicit interfaces and compile-time polymorphism.
Meyers goes on to summarize:
Both classes and templates support interfaces and polymorphism. For classes, interfaces are explicit and centered on function signatures. Polymorphism occurs at runtime through virtual functions. For template parameters, interfaces are implicit and based on valid expressions. Polymorphism occurs during compilation through template instantiation and function overloading resolution.
Use a template in a base class (or composition) when you want to retain type safety or would like to avoid virtual dispatch.
Templates are appropriate when defining an interface that works on multiple types of unrelated objects. Templates make perfect sense for container classes, where it's necessary to generalize the objects in the container yet retain type information.
In the case of inheritance, all parameters must be of the declared parameter type, or derive from it. So when methods operate on objects that genuinely have a direct hierarchical relationship, inheritance is the best choice.
When inheritance is incorrectly applied, it leads to overly complex class hierarchies of unrelated objects. The complexity of the code increases for a small gain. If this is the case, use templates instead. A tiny sketch of the container point is shown below.
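A deliberately tiny, hypothetical container (not a production class) illustrating how the element type stays known without inheritance or casts:
#include <vector>

// The element type is a template parameter, so Stack<int> and
// Stack<std::string> are distinct, fully type-safe types.
template <typename T>
class Stack {
    std::vector<T> data;
public:
    void push(const T& v) { data.push_back(v); }
    T pop() { T v = data.back(); data.pop_back(); return v; }
    bool empty() const { return data.empty(); }
};

int main() {
    Stack<int> s;
    s.push(1);
    s.push(2);
    int top = s.pop();  // comes back as an int: no casting, no virtual dispatch
    (void)top;
}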
A template describes an algorithm, such as deciding the result of a comparison between two objects of a class, or sorting them. The type (class) of the objects being operated on varies, but the operation or logic stays the same.
Inheritance, on the other hand, is used by the child class to extend or make more specific the parent's functionality.
Hope that makes sense

What's the relationship between C++ template and duck typing?

To me, C++ templates use the idea of duck typing; is this right? Does it mean all generic types referenced in a template class or method are duck types?
To me, C++ templates are a compile-time version of duck typing. The compiler will compile e.g. Class<Duck>, and as long as your Duck has all the needed members it will instantiate the class. If something is not correct (e.g. a copy constructor is missing), the compilation fails. The counterpart in real duck typing is a failure when you call a function with a non-duck type, and there it would occur at runtime.
Duck typing means, "if it quacks like a duck and walks like a duck, then it's a duck". It doesn't have a formal definition in computer science for us to compare C++ against.
C++ is not identical to (for example) Python, of course, but they both have a concept of implicitly-defined interfaces. The interface required of an object used as a Python function argument is whatever the function does with it. The interface required of a type used as a C++ template argument is whatever the template does with objects of that type. That is the similarity, and that is the grounds on which C++ templates should be assessed.
Furthermore, because of template argument deduction, in C++ you can attempt to pass any old object, and the compiler will figure out whether it can instantiate the function template.
One difference is that in C++, if the argument doesn't quack, then the compiler objects. In Python, only the runtime objects (and only if the function is actually called, if there are conditionals in the code). This is a difference in the nature of the interface demanded of an object/type - in C++ either the template requires that a particular expression is valid, or it doesn't require that. In Python, the necessary valid expressions can depend on the runtime values of earlier necessary expressions. So in Python you can ask for an object that either quacks loudly or quietly, and if it quacks loudly it needs to walk too. In C++ you can do that with a conditional dynamic_cast, and if the volume is a compile-time constant you could do it with template specializations, but you can't use static typing to say that a duck only needs to walk if quack_volume() returns loud. And of course in Python the required interface may not really be "required" - behavior if a method isn't present is to throw an exception, and it might be possible to document and guarantee the caller's behavior if that happens.
Up to you whether you define "duck typing" so that this difference means C++ doesn't have it.
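To make the "implicit interface" point concrete, a tiny sketch (the Duck/Person/make_it_quack names are invented for the example):
// The interface required of T is implicit: whatever the template body uses.
template <typename T>
void make_it_quack(const T& t) {
    t.quack();                    // T just needs a callable quack() const
}

struct Duck   { void quack() const {} };
struct Person {};                 // does not quack

int main() {
    make_it_quack(Duck{});        // OK: Duck satisfies the implicit interface
    // make_it_quack(Person{});   // would be a compile-time error, not a runtime one
}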
To me, C++ template used the idea of duck typing, is this right?
No, C++ templates are used to implement generic code. That is, if you have code that can work with more than one type, you don't have to duplicate it for each type. Things like std::vector and std::list are obvious examples of this in action. C++ templates have been abused into doing other things, but genericity was the original intention.
Does it mean all generic types referenced in a template class or method are duck types?
No, they are just "normal" types just like every other type in C++. They are just not known until the template is actually instantiated.
However, templates can be used to implement something like duck typing. Iterators are an example. Consider this function:
template <class InputIterator, class OutputIterator>
OutputIterator copy(InputIterator first, InputIterator last,
                    OutputIterator result)
{
    while (first != last) *result++ = *first++;
    return result;
}
Note that the copy function can accept arguments of any types, as long as they implement the inequality operator, the dereference operator, and the postfix increment operator. This is probably as close to duck typing as you'll get in C++.
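A short usage sketch (the copy template from the answer above is repeated so the example is self-contained; it is called as ::copy to distinguish it from std::copy):
#include <iostream>
#include <iterator>
#include <list>
#include <vector>

template <class InputIterator, class OutputIterator>
OutputIterator copy(InputIterator first, InputIterator last, OutputIterator result)
{
    while (first != last) *result++ = *first++;
    return result;
}

int main() {
    std::vector<int> src{1, 2, 3};
    std::list<int> dst(3);

    // Completely unrelated iterator types work, as long as they support
    // !=, unary *, and postfix ++.
    ::copy(src.begin(), src.end(), dst.begin());
    ::copy(src.begin(), src.end(), std::ostream_iterator<int>(std::cout, " "));
}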
Not exactly. Duck types (dynamic type style) will never yield compile-time type errors because they just don't have any type. With templates, you don't have types until you instantiate the template. Once you do, variables have distinct types, and you will indeed get compile-time errors.
Also, with duck types, you can have one variable point to different types of objects, because variables just have no types. That's not possible with templates — once you instantiate them, variables have a single specific type.
They are similar, though, in that constraints are implicit: only the features actually used are checked. As opposed to, say, polymorphic pointers, the actual type doesn't matter.
Yes, sort of - for example, if type X has AddRef(), Release() and QueryInterface() methods with appropriate signatures, it can be used as a COM object with the CComPtr template class. But this is not complete duck typing - type checking is still enforced for parameters.
No, this is a different concept. Duck typing is a way of determining, at run time, what a dynamically typed value can do. C++ templates aren't dynamically typed; they get instantiated with a specific type.
Wikipedia covers this distinction.

C++ Abstract class can't have a method with a parameter of that class

I created this .h file
#pragma once

namespace Core
{
    class IComparableObject
    {
    public:
        virtual int CompareTo(IComparableObject obj)=0;
    };
}
But the compiler doesn't like the IComparableObject obj parameter if the method is pure virtual, while
virtual int CompareTo(IComparableObject obj) {}
is OK. However, I want it to be pure virtual. How can I manage that? Is it possible?
You are trying to pass obj by value. You cannot pass an abstract class instance by value, because no abstract class can ever be instantiated (directly). To do what you want, you have to pass obj by reference, for example like so:
virtual int CompareTo(IComparableObject const &obj)=0;
It works when you give an implementation for CompareTo because then the class is not abstract any longer. But be aware that slicing occurs! You don't want to pass obj by value.
Well I have to give an unexpected answer here! Dennycrane said you can do this:
virtual int CompareTo(IComparableObject const &obj)=0;
but this is not correct either. Oh yes, it compiles, but it is useless because it can never be implemented correctly.
This issue is fundamental to the collapse of (statically typed) Object Orientation, so it is vital that programmers using OO recognize the issue. The problem has a name, it is called the covariance problem and it destroys OO utterly as a general programming paradigm; that is, a way of representing and independently implementing general abstractions.
This explanation will be a bit long and sloppy so bear with me and try to read between the lines.
First, an abstract class with a pure virtual method taking no arguments can be easily implemented in any derived class, since the method has access to the non-static data variables of the derived class via the this pointer. The this pointer has the type of a pointer to the derived class, and so we can say it varies along with the class, in fact it is covariant with the derived class type.
Let me call this kind of polymorphism first order, it clearly supports dispatching predicates on the object type. Indeed, the return type of such a method may also vary down with the object and class type, that is, the return type is covariant.
Now, I will generalise the idea of a method with no arguments to allow arbitrary scalar arguments (such as ints), claiming this changes nothing: this is merely a family of methods indexed by the scalar type. The important property here is that the scalar type is closed. In a derived class exactly the same scalar type must be used. In other words, the type is invariant.
General introduction of invariant parameters to a virtual function still permits polymorphism, but the result is still first order.
Unfortunately, such functions have limited utility, although they are very useful when the abstraction is only first order: a good example is device drivers.
But what if you want to model something which is actually interesting, that is, it is at least a relation?
The answer to this is: you cannot do it. This is a mathematical fact and has nothing to do with the programming language involved. Let's suppose you have an abstraction for, say, numbers, and you want to add one number to another number, or compare them (as in the OP's example). Ignoring symmetry, if you have N implementations, you will have to write N^2 functions to perform the operations. If you add a new implementation of the abstraction, you have to write N+1 new functions.
Now, I have the first proof that OO is screwed: you cannot fit N^2 methods into a virtual dispatch schema because such a schema is linear. N classes gives you N methods you can implement and for N>1, N^2 > N, so OO is screwed, QED.
In a C++ context you can see the problem; consider:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) { .. }
};
Arggg! We're screwed! We can't fill in the .. part here because we only have a reference to an abstraction, which has no data in it to compare to. Of course this must be the case, because there are an open/indeterminate/infinite number of possible implementations. There's no possible way to write a single comparison routine as an axiom.
Of course, if you have various property routines, or a common universal representation you can do it, but this does not count, because then the mapping to the universal representation is parameterless and thus the abstraction is only first order. For example if you have various integer representations and you add them by converting both to GNU gmp's data type mpz, then you are using two covariant projection functions and a single global non-polymorphic comparison or addition function.
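A rough sketch of that "universal representation" approach, purely for illustration (the Number/SmallInt/BigInt names are invented, and a plain long stands in for something like gmp's mpz):
struct Number {
    virtual long to_universal() const = 0;  // covariant, parameterless projection
    virtual ~Number() = default;
};

struct SmallInt : Number {
    int v;
    explicit SmallInt(int x) : v(x) {}
    long to_universal() const override { return v; }
};

struct BigInt : Number {
    long v;
    explicit BigInt(long x) : v(x) {}
    long to_universal() const override { return v; }
};

// Single global, non-polymorphic comparison on the universal representation.
int compare(const Number& a, const Number& b) {
    long x = a.to_universal(), y = b.to_universal();
    return (x > y) - (x < y);
}

int main() {
    SmallInt zero(0);
    BigInt   big(0);
    return compare(zero, big);  // 0 == 0, regardless of representation
}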
This is not a counter example, it is a non-solution of the problem, which is to represent a relation or method which is covariant in at least two variables (at least self and other).
You may think you could solve this with:
struct MyComparable : IComparableObject {
    int CompareTo(MyComparable &other) { .. }
};
After all you can implement this interface because you know the representation of other now, since it is MyComparable.
Do not laugh at this solution, because it is exactly what Bertrand Meyer did in Eiffel, and it is what many people do in C++ with a small change to try to work around the fact it isn't type safe and doesn't actually override the base-class function:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) {
        try {
            MyComparable &sibling = dynamic_cast<MyComparable&>(other);
            ...
        }
        catch (...) { return 0; }
    }
};
This isn't a solution. It says that two things aren't equal just because they have different representations. That does not meet the requirement, which is to compare two things in the abstract. Two numbers, for example, cannot fail to be equal just because the representation used is different: zero equals zero, even if one is an mpz and the other an int. Remember: the idea is to properly represent an abstraction, and that means the behaviour must depend only on the abstract value, not the details of a particular implementation.
Some people have tried double dispatch. Clearly, that cannot work either. There is no possible escape from the basic issue here: you cannot stuff a square into a line.
virtual function dispatch is linear, second order problems are quadratic, so OO cannot represent second order problems.
Now I want to be very clear here that C++ and other statically typed OO languages are broken, not because they can't solve this problem, because it cannot be solved, and it isn't a problem: it's a simple fact. The reason these languages and the OO paradigm in general are broken is because they promise to deliver general abstractions and then fail to do so. In the case of C++ this is the promise:
struct IComparableObject { virtual int CompareTo(IComparableObject obj)=0; };
and here is where the implicit contract is broken:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) { throw 00; }
};
because the implementation I gave there is effectively the only possible one.
Well, before leaving, you may ask: what is the Right Way (TM)?
The answer is: use functional programming. In C++ that means templates.
template<class T, class U> int compare(T,U);
So if you have N types to compare, and you actually compare all combinations, then yes indeed you have to provide N^2 specialisations. Which shows templates deliver on the promise, at least in this respect. It's a fact: you can't dispatch at run time over an open set of types if the function is variant in more than one parameter.
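A sketch of providing those N^2 combinations, shown here with plain overloads rather than explicit specialisations (Mpz is a made-up stand-in for an arbitrary-precision integer type):
struct Mpz { long v; };

int compare(int a, int b)               { return (a > b) - (a < b); }
int compare(int a, const Mpz& b)        { return (a > b.v) - (a < b.v); }
int compare(const Mpz& a, int b)        { return (a.v > b) - (a.v < b); }
int compare(const Mpz& a, const Mpz& b) { return (a.v > b.v) - (a.v < b.v); }

int main() {
    Mpz big{0};
    return compare(0, big);  // each combination is resolved statically
}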
BTW: in case you aren't convinced by theory, just go look at the ISO C++ Standard Library and see how much virtual function polymorphism is used there, compared to functional programming with templates.
Finally please note carefully that I am not saying classes and such like are useless, I use virtual function polymorphism myself: I'm saying that this is limited to particular problems and not a general way to represent abstractions, and therefore not worthy of being called a paradigm.
From C++03, §10.4/3:
An abstract class shall not be used as a parameter type, as a function return type, or as the type of an explicit conversion. Pointers and references to an abstract class can be declared.
Passing obj as a const reference is allowed.
When the CompareTo member function is pure virtual, IComparableObject is an abstract class.
You can't directly copy an object of an abstract class.
When you pass an object by value you're directly copying that object.
Instead of passing by value, you can pass by reference to const.
That is, formal argument type IComparableObject const&.
By the way, the function should probably be declared const so that it can be called on a const object.
Also, instead of #pragma once, which is non-standard (but supported by most compilers), consider an ordinary include guard.
Also, when posting code that illustrates a problem, be sure to post exact code. In this case, there's a missing semicolon at the end, indicating manual typing of the code (and so that there could be other typos not so easily identified as such, but instead misidentified as part of your problem). Simply copy and paste real code.
Cheers & hth.,

Are there alternatives to polymorphism in C++?

The CRTP is suggested in this question about dynamic polymorphism. However, this pattern is allegedly only useful for static polymorphism. The design I am looking at seems to be hampered speedwise by virtual function calls, as hinted at here. A speedup of even 2.5x would be fantastic.
The classes in question are simple and can be coded completely inline, however it is not known until runtime which classes will be used. Furthermore, they may be chained, in any order, heaping performance insult onto injury.
Any suggestions (including how the CRTP can be used in this case) welcome.
Edit: Googling turns up a mention of function templates. These look promising.
Polymorphism literally means multiple (poly) forms (morphs). In statically typed languages (such as C++) there are three types of polymorphism.
Adhoc polymorphism: This is best seen in C++ as function and method overloading. The same function name will bind to different methods based on matching the compile time type of the parameters of the call to the function or method signature.
Parametric polymorphism: In C++ this is templates and all the fun things you can do with it such as CRTP, specialization, partial specialization, meta-programming etc. Again this sort of polymorphism where the same template name can do different things based on the template parameters is a compile time polymorphism.
Subtype Polymorphism: Finally this is what we think of when we hear the word polymorphism in C++. This is where derived classes override virtual functions to specialize behavior. The same type of pointer to a base class can have different behavior based on the concrete derived type it is pointing to. This is the way to get run time polymorphism in C++.
If it is not known until runtime which classes will be used, you must use Subtype Polymorphism which will involve virtual function calls.
Virtual method calls have a very small performance overhead over statically bound calls. I'd urge you to look at the answers to this SO question.
I agree with m-sharp that you're not going to avoid runtime polymorphism.
If you value optimisation over elegance, try replacing say
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (int i = 0; i < v.size(); i++)
        v[i]->trivial_virtual_method();
}
with something like
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (int i = 0; i < v.size(); i++)
    {
        if (v[i]->tag == FooTag)
            static_cast<Foo*>(v[i])->Foo::trivial_virtual_method();
        else if (v[i]->tag == BarTag)
            static_cast<Bar*>(v[i])->Bar::trivial_virtual_method();
        else ...
    }
}
it's not pretty, certainly not OOP (more a reversion to what you might do in good old 'C') but if the virtual methods are trivial enough you should get a function with no calls (subject to good enough compiler & optimisation options). A variant using dynamic_cast or typeid might be slightly more elegant/safe but beware that those features have their own overhead which is probably comparable to a virtual call anyway.
Where you'll most likely see an improvement from the above is if some classes' methods are no-ops and it saves you from calling them, or if the functions contain common loop-invariant code and the optimiser manages to hoist it out of the loop.
You can go the old C route and use unions, although that too can be messy.