Parametric polymorphism and overloading are static polymorphism, because the compiler knows at compile time which function to call.
Subclassing is dynamic polymorphism, because the function is determined at run time. But what is coercion (implicit casting): static or dynamic polymorphism?
The compiler knows at compile time which function to call, but the actual cast happens at run time. Or is that statement wrong?
Runtime polymorphism involves (potentially) several distinct pieces of machine code, selected from at runtime based on some runtime data related to the runtime type of the data involved; that selection happens at runtime. (I say potentially because you can use virtual dispatch when there's only one concrete derived type, but the runtime mechanism is there to support further types.)
With coercion, only one machine-code path is required to massage the data into some other type needed by the code then executed; there is no runtime type-based selection of the code to execute. What that one machine-code path should be is decided at compile time.
http://codepad.org/etWqYnn3
I'm working on some form of a reflection system for C++, despite the many who have warned against it. What I'm looking at having is a set of interfaces IScope, IType, IMember, IMonikerClient, and a wrapper class, say CReflexion, which contains the above. Ignoring all but the member, which is the important part, here is what I would like to do:
1) Instance the wrapper
2) Determine which type is to be used
3) Instance type
4) Overload operator() and operator[] so that the contained member can be accessed through the outer wrapper as easily as it is with a std::vector
I find that using 0x I can forward a method call with any type as a parameter. I can't, however, cast dynamically, as a cast doesn't take a variable (unless there are ways I am unaware of!).
I linked the rough idea above. I am currently using a switch statement to handle the varying interfaces, and for obvious reasons I would like to collapse it. I get type-mismatch errors in the switch cases because the method calls are compiled against every case, while only one of the three works for any given condition, so compiler errors are thrown.
Could someone suggest anything to me here? That is aside from sticking to VARIANT :/
Thanks!
C++, even in "0x land", simply does not expose the kind of information you would need to create something like reflection.
I find that using 0x I can forward a method call with any type for a parameter.
You cannot forward a type as a parameter. You can forward the const-volatile qualifiers on a member, but that's all done in templates, at compile time. No runtime check is ever done when you're using things like forward.
Your template there for operator() is not going to compile unless T is convertible to int*, string*, and A** all at once. Think of templates as a simple find-and-replace algorithm that generates several functions for you -- the value of T gets replaced with the type name when the template is instantiated, and the function is compiled as normal.
Finally, you can only use dynamic_cast to cast down a class hierarchy -- casting between the completely unrelated types A, B, and C isn't going to work.
You're better off taking the time to rethink your design so that it doesn't use reflection at all. It will probably be a better design anyway, considering that even in languages with reflection, reflection is most often used to paper over poor designs.
I noticed that in some places polymorphism refers just to virtual functions, while elsewhere it includes function overloading and templates. Later, I found there are two terms, compile-time polymorphism and run-time polymorphism. Is that true?
My question is: when we talk about polymorphism generally, what is the widely accepted meaning?
Yes, you're right: in C++ there are two recognized "types" of polymorphism, and they mean pretty much what you think they mean.
Dynamic polymorphism
is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing: either deriving from a base class and overriding one or more virtual functions, or implementing an interface (which in C++ is done by overriding the virtual functions belonging to the abstract base class).
Static polymorphism
takes place at compile time, and could be considered a variation of duck typing. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this:
template <typename T>
T add(const T& lhs, const T& rhs) { return lhs + rhs; }
If this had been dynamic polymorphism, we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implements that interface (or derives from that base class) can be used despite their different implementations, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.
In that case, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism.
Here, though, we don't require you to derive from anything; the type T just has to define the + operator. The right implementation is then selected statically. So at compile time, we can switch between any valid types as long as they behave the same (meaning that they define the members we need).
This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept.
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading.
However, when a C++ programmer just says polymorphism, they generally mean dynamic/runtime polymorphism.
(Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates)
"Polymorphism" literally means "many forms". The term is unfortunately a bit overloaded in computer science (excuse the pun).
According to FOLDOC, polymorphism is "a concept first identified by Christopher Strachey (1967) and developed by Hindley and Milner, allowing types such as list of anything."
In general, it's "a programming language feature that allows values of different data types to be handled using a uniform interface", to quote Wikipedia, which goes on to describe two main types of polymorphism:
Parametric polymorphism is when the same code can be applied to multiple data types. Most people in the object-oriented programming community refer to this as "generic programming" rather than polymorphism. Generics (and to some extent templates) fit into this category.
Ad-hoc polymorphism is when different code is used for different data-types. Overloading falls into this category, as does overriding. This is what people in the object-oriented community are generally referring to when they say "polymorphism". (and in fact, many mean overriding, not overloading, when they use the term "polymorphism")
For ad-hoc polymorphism there's also the question of whether the resolution of implementation code happens at run-time (dynamic) or compile-time (static). Method overloading is generally static, and method overriding is dynamic. This is where the terms static/compile-time polymorphism and dynamic/run-time polymorphism come from.
Usually people are referring to run-time polymorphism, in my experience.
When a C++ programmer says "polymorphism" he most likely means subtype polymorphism, which means "late binding" or "dynamic binding" with virtual functions. Function overloading and generic programming are both instances of polymorphism and they do involve static binding at compile time, so they can be referred to collectively as compile-time polymorphism. Subtype polymorphism is almost always referred to as just polymorphism, but the term could also refer to all of the above.
In its most succinct form, polymorphism means the ability of one type to appear as if it is another type.
There are two main types of polymorphism.
Subtype polymorphism: if D derives from B, then a D is a B.
Interface polymorphism: if C implements an interface I, then a C can be used as an I.
The first is what you are thinking of as runtime polymorphism. The second does not really apply to C++; it is really a concept that applies to Java and C#.
Some people do think of overloading in the special case of operators (+, -, /, *) as a type of polymorphism because it allows you to think of types that have overloaded these operators as replaceable for each other (i.e., + for string and + for int). This concept of polymorphism most often applies to dynamic languages. I consider this an abuse of the terminology.
As for template programming, you will see some use the term "polymorphism" but this is really a very different thing than what we usually mean by polymorphism. A better name for this concept is "generic programming" or "genericity."
Various types of function overloading (compile-time polymorphism), 9 Jun 2011: "Polymorphism means the same entity behaving differently at different times. Compile-time polymorphism is also called static binding."
http://churmura.com/technology/programming/various-types-of-function-overloading-compile-time-polymorphism-static-binding/39886/
A simple explanation of compile-time polymorphism and run-time polymorphism, from:
questionscompiled.com
Compile-time polymorphism:
C++ supports polymorphism. One function, multiple purposes -- or in short, many functions having the same name but different function bodies.
For every function call, the compiler binds the call to one function definition at compile time. This decision among the several same-named functions is made by considering the formal arguments of the call: their data types and their order.
Run-time polymorphism:
C++ allows binding to be delayed until run time. When the base class and a derived class each have a function with the same name and the same number, types, and order of arguments, a call of the form base_class_type_ptr->member_function(args); will always call the base-class member function. The keyword virtual on a member function in the base class tells the compiler to delay the binding until run time.
Every class with at least one virtual function has a vtable that enables binding at run time. Based on what the base-class-type pointer actually points to, the call correctly resolves to the member function of the appropriate derived or base class.
Yes, you are basically right. Compile-time polymorphism is the use of templates (where the instantiated types vary but are fixed at compile time), whereas run-time polymorphism refers to the use of inheritance and virtual functions (where the dynamic types vary and are determined at run time).
In Effective C++, the book gives just one sentence on why default parameters are statically bound:
If default parameter values were dynamically bound, compilers would have to come up with a way to determine the appropriate default values for parameters of virtual functions at runtime, which would be slower and more complicated than the current mechanism of determining them during compilation.
Can anybody elaborate on this a bit more? Why would it be complicated and inefficient?
Thanks so much!
Whenever a class has virtual functions, the compiler generates a so-called v-table containing the addresses needed at runtime to support dynamic binding and polymorphic behavior. Many optimizers work toward removing virtual functions for exactly this reason: less overhead and smaller code. If default parameters were also factored into the equation, it would make the whole virtual-function mechanism all the more cumbersome and bloated.
Because the actual call would need to be looked up using the vtable associated with the object instance, and from that the default would need to be inferred in some manner. That means the vtable would need to be extended, or there would need to be extra bookkeeping to link each default to a vtable entry.
The CRTP is suggested in this question about dynamic polymorphism. However, this pattern is allegedly only useful for static polymorphism. The design I am looking at seems to be hampered speedwise by virtual function calls, as hinted at here. A speedup of even 2.5x would be fantastic.
The classes in question are simple and can be coded completely inline, however it is not known until runtime which classes will be used. Furthermore, they may be chained, in any order, heaping performance insult onto injury.
Any suggestions (including how the CRTP can be used in this case) welcome.
Edit: Googling turns up a mention of function templates. These look promising.
Polymorphism literally means multiple (poly) forms (morphs). In statically typed languages (such as C++) there are three types of polymorphism.
Ad-hoc polymorphism: This is best seen in C++ as function and method overloading. The same function name will bind to different functions based on matching the compile-time types of the arguments at the call site against the function or method signatures.
Parametric polymorphism: In C++ this is templates and all the fun things you can do with them, such as CRTP, specialization, partial specialization, meta-programming, etc. Again, this sort of polymorphism -- where the same template name can do different things based on the template parameters -- is compile-time polymorphism.
Subtype Polymorphism: Finally this is what we think of when we hear the word polymorphism in C++. This is where derived classes override virtual functions to specialize behavior. The same type of pointer to a base class can have different behavior based on the concrete derived type it is pointing to. This is the way to get run time polymorphism in C++.
If it is not known until runtime which classes will be used, you must use Subtype Polymorphism which will involve virtual function calls.
Virtual method calls have a very small performance overhead over statically bound calls. I'd urge you to look at the answers to this SO question.
I agree with m-sharp that you're not going to avoid runtime polymorphism.
If you value optimisation over elegance, try replacing say
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
        v[i]->trivial_virtual_method();
}
with something like
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
    {
        if (v[i]->tag == FooTag)
            static_cast<Foo*>(v[i])->Foo::trivial_virtual_method();
        else if (v[i]->tag == BarTag)
            static_cast<Bar*>(v[i])->Bar::trivial_virtual_method();
        else ...
    }
}
It's not pretty, and certainly not OOP (more a reversion to what you might do in good old C), but if the virtual methods are trivial enough you should get a function with no calls at all (subject to a good enough compiler and optimization options). A variant using dynamic_cast or typeid might be slightly more elegant/safe, but beware that those features have their own overhead, which is probably comparable to a virtual call anyway.
Where you'll most likely see an improvement from the above is if some classes' methods are no-ops and it saves you from calling them, or if the functions contain common loop-invariant code and the optimizer manages to hoist it out of the loop.
You can go the old C route and use unions, although that too can get messy.