Is coercion static or dynamic polymorphism? - c++

Parametric polymorphism and overloading are static polymorphism, because the compiler knows at compile time which function to call.
Subclassing is dynamic polymorphism, because the function to call is determined at run time. But what is coercion (implicit casting)? Static or dynamic polymorphism?
The compiler knows at compile time which function to call, but the actual cast happens at run time. Or is that statement wrong?

Runtime polymorphism involves (potentially) several distinct bits of machine code being selected from based on some runtime data related to the runtime type of data involved, and that selection happens at runtime. (I say potentially because you can use virtual dispatch when there's only one concrete derived type, but the runtime mechanism is there to support further types).
With coercion, only one machine code path is required to massage the data into some other type needed by the code then executed - there is no runtime type-based selection of the code to execute. What that one machine code path should be is decided at compile time.

Related

Is there an inverse function for typeid in C++17?

Does C++17 provide a way to get the type from a typeid or is the factory pattern still the only methodology?
type_info is a runtime value; its exact contents can only be determined via runtime execution. C++ is a statically typed language; at compile time, the type of everything must be known. As such, type_info-based reification (the ability to take a description of a thing and turn it into the thing itself) is not going to ever happen in C++.
C++ will likely get reflection and reification mechanisms in the future, but they will only be static mechanisms, not runtime mechanisms.

2nd phase compilation in templates

I am trying to understand the compilation process and the code-generation process in the C++ template universe.
I have read that during the first phase of compilation only the basic syntax is checked (in templated code).
The actual code is generated only for those data types that are actually used; for those, compilation is done completely. This is termed second-phase compilation.
I am not able to understand how a compiler can know for which data types the templated code will be called, and hence for which types to generate code (and do the second-phase compilation). There might be cases where the function calls (for function templates) are not straightforward enough to derive the data type at compile time; these can be derived only at run time, based on input from the user.
Assume I have written a huge piece of code using templates, with a lot of conditions based on which it generates new instances of templated code (say, a new instantiation of a class for a data type). I can't test the code for all data types. So does that mean that if I test it for a couple of data types, there is still a chance of my code failing unexpectedly for some other data type? If so, how can I force the second-phase compilation for all the data types (irrespective of whether that data type gets instantiated based on my input)?
The types determined during compilation rely only on static information. A function template that is used with a particular type will generate code for that type, since that code path needs to be available at run time. If it can be statically determined that a function call will never happen, I think the compiler may omit that instantiation, though certain cases would still force it.
You can't test for all data types, since that's an infinite set. You can create a set of all standard types, but you obviously can't check every user-defined type ever written. The idea in generic code is to not depend on the particulars of the types you allow in. Alternatively, you might close the set of possible instantiations to include only the types you sanction.
I am not able to understand that how can a compiler know for which data type can the templated code be called
The compiler knows for which data types the templated code is actually called because it sees every place in the program where the templated code is actually called. No magic here. Instantiation happens at call sites. No instantiation is done for types that are not used in actual existing calls.
there are still chances of my code failing
This is true for all test-based validation, templates or no templates, and even for things other than software. You cannot cover all possible use cases by tests. It's a fundamental fact of life. Deal with it... somehow.

Difference between runtime and compile time polymorphism in c++

I'm confused between these two kinds of polymorphism; please help me out with simple examples, as I'm totally new to C++. Give me some basic idea only.
Polymorphism means writing general code to work with different objects without knowing their exact types.
Static binding is a property that allows the compiler to resolve the call at compile time. But there can be static binding without polymorphism. Compile-time polymorphism is implemented using function and operator overloading, where the compiler has prior knowledge of the data types and the number of arguments needed, so it can select the appropriate function at compile time.
Dynamic binding is a property that allows the decision about the type to be made at run time. But there can be dynamic binding without polymorphism. If dynamic binding is used to write general code that works with objects of several classes in a hierarchy, then it is dynamic polymorphism. Run-time polymorphism is implemented with virtual functions (a member function declared in a base class with the keyword virtual and redefined with the same signature in a derived class).

compile time polymorphism and runtime polymorphism

I noticed that somewhere polymorphism just refer to virtual function. However, somewhere they include the function overloading and template. Later, I found there are two terms, compile time polymorphism and run-time polymorphism. Is that true?
My question is when we talked about polymorphism generally, what's the widely accepted meaning?
Yes, you're right: in C++ there are two recognized "types" of polymorphism, and they mean pretty much what you think they mean.
Dynamic polymorphism
is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing: either deriving from a base class and overriding one or more virtual functions, or implementing an interface (which in C++ is done by overriding the virtual functions belonging to the abstract base class).
Static polymorphism
takes place at compile time, and could be considered a variation of duck typing. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this:
template <typename T>
T add(const T& lhs, const T& rhs) { return lhs + rhs; }
If this had been dynamic polymorphism, then we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implement that interface (or derive from that base class) can be used despite their different implementations, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.
However, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism.
Here, though, we don't require you to derive from anything; the type T just has to define the + operator. The call is then resolved statically. So at compile time, we can switch between any valid types as long as they behave the same (meaning that they define the members we need).
This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept.
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading.
However, when a C++ programmer just says polymorphism, they generally mean dynamic/runtime polymorphism.
(Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates)
"Polymorphism" literally means "many forms". The term is unfortunately a bit overloaded in computer science (excuse the pun).
According to FOLDOC, polymorphism is "a concept first identified by Christopher Strachey (1967) and developed by Hindley and Milner, allowing types such as list of anything."
In general, it's "a programming language feature that allows values of different data types to be handled using a uniform interface", to quote Wikipedia, which goes on to describe two main types of polymorphism:
Parametric polymorphism is when the same code can be applied to multiple data types. Most people in the object-oriented programming community refer to this as "generic programming" rather than polymorphism. Generics (and to some extent templates) fit into this category.
Ad-hoc polymorphism is when different code is used for different data-types. Overloading falls into this category, as does overriding. This is what people in the object-oriented community are generally referring to when they say "polymorphism". (and in fact, many mean overriding, not overloading, when they use the term "polymorphism")
For ad-hoc polymorphism there's also the question of whether the resolution of implementation code happens at run-time (dynamic) or compile-time (static). Method overloading is generally static, and method overriding is dynamic. This is where the terms static/compile-time polymorphism and dynamic/run-time polymorphism come from.
Usually people are referring to run-time polymorphism, in my experience...
When a C++ programmer says "polymorphism" he most likely means subtype polymorphism, which means "late binding" or "dynamic binding" with virtual functions. Function overloading and generic programming are both instances of polymorphism and they do involve static binding at compile time, so they can be referred to collectively as compile-time polymorphism. Subtype polymorphism is almost always referred to as just polymorphism, but the term could also refer to all of the above.
In its most succinct form, polymorphism means the ability of one type to appear as if it is another type.
There are two main types of polymorphism.
Subtype polymorphism: if D derives from B then D is a B.
Interface polymorphism: if C implements an interface I, then C is an I.
The first is what you are thinking of as runtime polymorphism. The second does not really apply to C++ and is a really a concept that applies to Java and C#.
Some people do think of overloading in the special case of operators (+, -, /, *) as a type of polymorphism because it allows you to think of types that have overloaded these operators as replaceable for each other (i.e., + for string and + for int). This concept of polymorphism most often applies to dynamic languages. I consider this an abuse of the terminology.
As for template programming, you will see some use the term "polymorphism" but this is really a very different thing than what we usually mean by polymorphism. A better name for this concept is "generic programming" or "genericity."
A simple explanation of compile-time polymorphism and run-time polymorphism, from questionscompiled.com:
Compile time Polymorphism:
C++ supports polymorphism: one function, multiple purposes, or in short, many functions having the same name but different function bodies.
For every function call the compiler binds the call to one function definition at compile time. This decision among several functions is made by considering the formal arguments of the function: their data types and their sequence.
Run time polymorphism:
C++ allows binding to be delayed until run time. When you have a function with the same name, the same number of arguments, and the same data types in the same sequence in a base class as well as a derived class, a function call of the form base_class_type_ptr->member_function(args); will always call the base class member function. The keyword virtual on a member function in the base class tells the compiler to delay the binding until run time.
Every class with at least one virtual function has a vtable that helps with binding at run time. Looking at the object the base-class-type pointer actually points to, the call will correctly reach the member function of the appropriate derived or base class.
Yes, you are basically right. Compile-time polymorphism is the use of templates (the types of whose instances vary, but are fixed at compile time), whereas run-time polymorphism refers to the use of inheritance and virtual functions (the types of whose instances vary and are fixed only at run time).

static binding of default parameter

In Effective C++, the book just mentioned one sentence why default parameters are static bound:
If default parameter values were dynamically bound, compilers would have to come up with a way to determine the appropriate default values for parameters of virtual functions at runtime, which would be slower and more complicated than the current mechanism of determining them during compilation.
Can anybody elaborate this a bit more? Why it is complicated and inefficient?
Thanks so much!
Whenever a class has virtual functions, the compiler generates a so-called v-table to calculate the proper addresses that are needed at runtime to support dynamic binding and polymorphic behavior. Lots of class optimizers work toward removing virtual functions for this reason exactly. Less overhead, and smaller code. If default parameters were also calculated into the equation, it would make the whole virtual function mechanism all the more cumbersome and bloated.
Because for a function call, the actual function would need to be looked up via the vtable associated with the object instance, and from that the default would need to be inferred in some manner. That means the vtable would need to be extended, or there would need to be extra bookkeeping to link the default to a vtable entry.