Are there alternatives to polymorphism in C++?

The CRTP is suggested in this question about dynamic polymorphism. However, this pattern is allegedly only useful for static polymorphism. The design I am looking at seems to be hampered speedwise by virtual function calls, as hinted at here. A speedup of even 2.5x would be fantastic.
The classes in question are simple and can be coded completely inline, however it is not known until runtime which classes will be used. Furthermore, they may be chained, in any order, heaping performance insult onto injury.
Any suggestions (including how the CRTP can be used in this case) welcome.
Edit: Googling turns up a mention of function templates. These look promising.

Polymorphism literally means multiple (poly) forms (morphs). In statically typed languages (such as C++) there are three types of polymorphism.
Ad-hoc polymorphism: This is best seen in C++ as function and method overloading. The same function name binds to different functions by matching the compile-time types of the call's arguments against the function or method signatures.
Parametric polymorphism: In C++ this is templates and all the fun things you can do with them, such as CRTP, specialization, partial specialization, meta-programming, etc. Again, this sort of polymorphism, where the same template name can do different things based on the template parameters, is compile-time polymorphism.
Subtype polymorphism: Finally, this is what we usually think of when we hear the word polymorphism in C++: derived classes overriding virtual functions to specialize behavior. The same pointer-to-base-class can exhibit different behavior depending on the concrete derived type it points to. This is the way to get run-time polymorphism in C++.
If it is not known until runtime which classes will be used, you must use Subtype Polymorphism which will involve virtual function calls.
Virtual method calls have a very small performance overhead over statically bound calls. I'd urge you to look at the answers to this SO question.
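To make the three kinds concrete, here is a minimal sketch (the names are invented for illustration):
#include <iostream>
#include <string>

// Ad-hoc polymorphism: the overload is chosen at compile time from the argument type.
void describe(int) { std::cout << "an int\n"; }
void describe(const std::string&) { std::cout << "a string\n"; }

// Parametric polymorphism: one template, a fresh instantiation per type.
template <typename T>
T twice(const T& x) { return x + x; }

// Subtype polymorphism: the call is resolved at run time through the vtable.
struct Animal { virtual void speak() const { std::cout << "...\n"; } virtual ~Animal() {} };
struct Dog : Animal { virtual void speak() const { std::cout << "woof\n"; } };

int main() {
    describe(42);                    // picks describe(int)
    std::cout << twice(3) << "\n";   // instantiates twice<int>
    Dog d;
    Animal* a = &d;
    a->speak();                      // prints "woof": decided at run time
    return 0;
}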

I agree with m-sharp that you're not going to avoid runtime polymorphism.
If you value optimisation over elegance, try replacing say
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
        v[i]->trivial_virtual_method();
}
with something like
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
    {
        if (v[i]->tag == FooTag)
            static_cast<Foo*>(v[i])->Foo::trivial_virtual_method();
        else if (v[i]->tag == BarTag)
            static_cast<Bar*>(v[i])->Bar::trivial_virtual_method();
        // else ... one branch per tag
    }
}
It's not pretty, and certainly not OOP (more a reversion to what you might do in good old C), but if the virtual methods are trivial enough you should get a function with no calls at all (subject to a good enough compiler and optimisation options). Note the explicitly qualified calls (e.g. Foo::trivial_virtual_method()): qualifying a virtual call suppresses dynamic dispatch, which is what lets the compiler inline it. A variant using dynamic_cast or typeid might be slightly more elegant/safe, but beware that those features have their own overhead, which is probably comparable to a virtual call anyway.
Where you'll most likely see an improvement from the above is if some classes' methods are no-ops and it saves you from calling them, or if the methods contain common loop-invariant code and the optimiser manages to hoist it out of the loop.

You can go the old C route and use unions, though that too can get messy.
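A minimal sketch of that approach, with invented types: a hand-rolled tagged union dispatched with a switch instead of a vtable:
#include <cstdio>

struct Circle { double r; };
struct Rect   { double w, h; };

struct Shape {
    enum Tag { CircleTag, RectTag } tag;
    union {                  // both payloads share the same storage
        Circle circle;
        Rect   rect;
    };
};

double area(const Shape& s) {
    switch (s.tag) {         // explicit dispatch on the tag: no virtual call
    case Shape::CircleTag: return 3.14159265358979 * s.circle.r * s.circle.r;
    case Shape::RectTag:   return s.rect.w * s.rect.h;
    }
    return 0.0;
}

int main() {
    Shape s;
    s.tag = Shape::CircleTag;
    s.circle.r = 2.0;
    std::printf("%f\n", area(s));
    return 0;
}
The messiness shows as soon as the payloads need constructors or destructors, which plain unions don't allow.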

Related

Compile time vs run time polymorphism in C++ advantages/disadvantages

In C++ when it is possible to implement the same functionality using either run time (sub classes, virtual functions) or compile time (templates, function overloading) polymorphism, why would you choose one over the other?
I would think that the compiled code would be larger for compile time polymorphism (more method/class definitions created for template types), and that compile time would give you more flexibility, while run time would give you "safer" polymorphism (i.e. harder to be used incorrectly by accident).
Are my assumptions correct? Are there any other advantages/disadvantages to either? Can anyone give a specific example where both would be viable options but one or the other would be a clearly better choice?
Also, does compile time polymorphism produce faster code, since it is not necessary to call functions through vtable, or does this get optimized away by the compiler anyway?
Example:
class Base
{
public:
    virtual void print() = 0;
};
class Derived1 : public Base
{
public:
    virtual void print()
    {
        //do something different
    }
};
class Derived2 : public Base
{
public:
    virtual void print()
    {
        //do something different
    }
};
//Run time
void print(Base& o)
{
    o.print();
}
//Compile time
template<typename T>
void print(T& o)
{
    o.print();
}
Static polymorphism produces faster code, mostly because of the possibility of aggressive inlining. Virtual functions can rarely be inlined, and mostly in "non-polymorphic" scenarios. See this item in the C++ FAQ. If speed is your goal, you basically have no choice.
On the other hand, not only compile times, but also the readability and debuggability of the code, are much worse when using static polymorphism. For instance: abstract methods are a clean way of enforcing the implementation of certain interface methods. To achieve the same goal using static polymorphism, you need to resort to concept checking or the curiously recurring template pattern.
The only situation when you really have to use dynamic polymorphism is when the implementation is not available at compile time; for instance, when it's loaded from a dynamic library. In practice though, you may want to exchange performance for cleaner code and faster compilation.
After you filter out the obviously bad and suboptimal cases, I believe you're left with almost nothing. IMO it is pretty rare to face that kind of choice. You could improve the question by stating an example, and for that a real comparison can be provided.
Assuming we have that realistic choice, I'd go for the compile-time solution -- why waste runtime on something not absolutely necessary? Also, if something is decided at compile time, it is easier to reason about, follow in your head, and evaluate.
Virtual functions, just like function pointers, prevent you from creating accurate call graphs. You can review things from the bottom up, but not easily from the top down. Virtual functions are supposed to follow certain rules, but if they don't, you have to look at all of the overrides to find the offender.
Also there is some performance loss, probably not a big deal in the majority of cases, but with nothing on the other side of the balance, why take it?
In C++ when it is possible to implement the same functionality using either run time (sub classes, virtual functions) or compile time (templates, function overloading) polymorphism, why would you choose one over the other?
I would think that the compiled code would be larger for compile time polymorphism (more method/class definitions created for template types)...
Often yes - due to multiple instantiations for different combinations of template parameters, but consider:
with templates, only the functions actually called are instantiated
dead code elimination
constant array dimensions allowing member variables such as T mydata[12]; to be allocated with the object, automatic storage for local variables etc., whereas a runtime polymorphic implementation might need to use dynamic allocation (i.e. new[]) - this can dramatically impact cache efficiency in some cases (see the sketch after this list)
inlining of function calls, which makes trivial things like small-object get/set operations about an order of magnitude faster on the implementations I've benchmarked
avoiding virtual dispatch, which amounts to following a pointer to a table of function pointers, then making an out-of-line call to one of them (it's normally the out-of-line aspect that hurts performance most)
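As a sketch of the fixed-dimension point above (class names invented for the example): the template version keeps its data inside the object, while a version whose size is only known at run time must allocate it separately:
#include <cstddef>

// Template version: N is a compile-time constant, so the storage lives
// inside the object itself -- contiguous and cache-friendly.
template <typename T, std::size_t N>
struct FixedBuffer {
    T mydata[N];
};

// Runtime version: the size is only known at run time, so the storage
// must come from a separate heap allocation.
struct DynamicBuffer {
    explicit DynamicBuffer(std::size_t n) : mydata(new double[n]), size(n) {}
    ~DynamicBuffer() { delete[] mydata; }
    double*     mydata;
    std::size_t size;
private:
    DynamicBuffer(const DynamicBuffer&);            // copying omitted in this sketch
    DynamicBuffer& operator=(const DynamicBuffer&);
};

int main() {
    FixedBuffer<double, 12> f;   // one block of memory: the object itself
    DynamicBuffer d(12);         // the object plus a separate heap block
    f.mydata[0] = d.mydata[0] = 0.0;
    return 0;
}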
...and that compile time would give you more flexibility...
Templates certainly do:
given the same template instantiated for different types, the same code can mean different things: for example, T::f(1) might call a void f(int) noexcept function in one instantiation, a virtual void f(double) in another, a T::f functor object's operator()(float) in yet another; looking at it from another perspective, different parameter types can provide what the templated code needs in whatever way suits them best
SFINAE lets your code adjust at compile time to use the most efficient interfaces objects support, without the objects actively having to make a recommendation (see the sketch after this list)
due to the instantiate-only-functions-called aspect mentioned above, you can "get away" with instantiating a class template with a type for which only some of the class template's functions would compile: in some ways that's bad because programmers may expect that their seemingly working Template<MyType> will support all the operations that the Template<> supports for other types, only to have it fail when they try a specific operation; in other ways it's good because you can still use Template<> if you're not interested in all the operations
if Concepts [Lite] make it into a future C++ Standard, programmers will have the option of putting stronger up-front constraints on the semantic operations that types used as template parameters must support, which will avoid nasty surprises as a user finds their Template<MyType>::operationX broken, and generally give simpler error messages earlier in the compile
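A minimal SFINAE sketch of the point above (the trait and member names are invented): the overload taking the fast path is only viable when the type actually provides one:
#include <iostream>
#include <type_traits>

// Detect at compile time whether T has a member function fast_path().
template <typename T>
class has_fast_path {
    template <typename U> static char test(decltype(&U::fast_path)); // viable only if U::fast_path exists
    template <typename U> static long test(...);                     // fallback
public:
    static const bool value = sizeof(test<T>(0)) == sizeof(char);
};

struct Optimized { void fast_path() { std::cout << "fast\n"; } };
struct Plain {};

// Selected when T has fast_path(); the other overload is SFINAE'd out.
template <typename T>
typename std::enable_if<has_fast_path<T>::value>::type
process(T& t) { t.fast_path(); }

template <typename T>
typename std::enable_if<!has_fast_path<T>::value>::type
process(T&) { std::cout << "generic\n"; }

int main() {
    Optimized o;
    Plain p;
    process(o);   // prints "fast"
    process(p);   // prints "generic"
    return 0;
}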
...while run time would give you "safer" polymorphism (i.e. harder to be used incorrectly by accident).
Arguably, as they're more rigid given the template flexibility above. The main "safety" problems with runtime polymorphism are:
it tends to encourage "fat" interfaces (in the sense Stroustrup mentions in The C++ Programming Language): APIs with functions that only work for some of the derived types, where algorithmic code needs to keep "asking" the derived types "should I do this for you", "can you do this", "did that work", etc.
you need virtual destructors: some classes don't have them (e.g. std::vector), making it harder to derive from them safely
the in-object pointers to virtual dispatch tables aren't valid across processes, making it hard to put runtime polymorphic objects in shared memory for access by multiple processes
Can anyone give a specific example where both would be viable options but one or the other would be a clearly better choice?
Sure. Say you're writing a quick-sort function: you could only support data types that derive from some Sortable base class with a virtual comparison function and a virtual swap function, or you could write a sort template that uses a Less policy parameter defaulting to std::less<T>, and std::swap<>. Given the performance of a sort is overwhelmingly dominated by the performance of these comparison and swap operations, a template is massively better suited to this. That's why C++ std::sort clearly outperforms the C library's generic qsort function, which uses function pointers for what's effectively a C implementation of virtual dispatch. See here for more about that.
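A much-simplified sketch of the shape of those two interfaces (invented names; the real std::sort is far more involved):
#include <functional>   // std::less
#include <iostream>
#include <utility>      // std::swap

// Template version: the comparison is a template parameter, so less(a, b)
// and std::swap can be inlined at every instantiation.
template <typename T, typename Less>
void sort2(T& a, T& b, Less less) {
    if (less(b, a))
        std::swap(a, b);
}

// Virtual version: every comparison and swap is an out-of-line,
// indirect call through the vtable.
struct Sortable {
    virtual bool less_than(const Sortable& other) const = 0;
    virtual void swap_with(Sortable& other) = 0;
    virtual ~Sortable() {}
};

void sort2(Sortable& a, Sortable& b) {
    if (b.less_than(a))
        a.swap_with(b);
}

int main() {
    int x = 2, y = 1;
    sort2(x, y, std::less<int>());   // typically compiles down to a compare and a swap
    std::cout << x << " " << y << "\n";
    return 0;
}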
Also, does compile time polymorphism produce faster code, since it is not necessary to call functions through vtable, or does this get optimized away by the compiler anyway?
It's very often faster, but very occasionally the sum impact of template code bloat may overwhelm the myriad ways compile time polymorphism is normally faster, such that on balance it's worse.

When to use template vs inheritance

I've been looking around for this one, and the common response to this seems to be along the lines of "they are unrelated, and one can't be substituted for the other". But say you're in an interview and get asked "When would you use a template instead of inheritance and vice versa?"
The way I see it is that templates and inheritance are literally orthogonal concepts: Inheritance is "vertical" and goes down, from the abstract to the more and more concrete. A shape, a triangle, an equilateral triangle.
Templates on the other hand are "horizontal" and define parallel instances of code that knows nothing of each other. Sorting integers is formally the same as sorting doubles and sorting strings, but those are three entirely different functions. They all "look" the same from afar, but they have nothing to do with each other.
Inheritance provides runtime abstraction. Templates are code generation tools.
Because the concepts are orthogonal, they may happily be used together to work towards a common goal. My favourite example of this is type erasure, in which the type-erasing container contains a virtual base pointer to an implementation class, but there are arbitrarily many concrete implementations that are generated by a template derived class. Template code generation serves to fill an inheritance hierarchy. Magic.
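A minimal sketch of that combination, roughly in the spirit of std::function (names invented, copying omitted):
#include <iostream>
#include <string>

class Printable {
    struct Concept {                 // the inheritance ("vertical") half
        virtual void print() const = 0;
        virtual ~Concept() {}
    };
    template <typename T>            // the template ("horizontal") half:
    struct Model : Concept {         // one generated derived class per T
        explicit Model(const T& v) : value(v) {}
        virtual void print() const { std::cout << value << "\n"; }
        T value;
    };
    Concept* impl;
    Printable(const Printable&);             // copying omitted in this sketch
    Printable& operator=(const Printable&);
public:
    template <typename T>
    Printable(const T& v) : impl(new Model<T>(v)) {}
    ~Printable() { delete impl; }
    void print() const { impl->print(); }
};

int main() {
    Printable a(42);                     // unrelated types, one runtime interface
    Printable b(std::string("hello"));
    a.print();
    b.print();
    return 0;
}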
The "common response" is wrong. In "Effective C++," Scott Meyers says in Item 41:
Item 41: Understand implicit interfaces and compile-time polymorphism.
Meyers goes on to summarize:
Both classes and templates support interfaces and polymorphism.
For classes, interfaces are explicit and centered on function
signatures. Polymorphism occurs at runtime through virtual functions.
For template parameters, interfaces are implicit and based on valid
expressions. Polymorphism occurs during compilation through template
instantiation and function overloading resolution.
Use a template in a base class (or in composition) when you want to retain type safety or would like to avoid virtual dispatch.
Templates are appropriate when defining an interface that works on multiple types of unrelated objects. Templates make perfect sense for container classes, where it's necessary to generalize over the objects in the container yet retain type information.
In the case of inheritance, all parameters must be of the defined parameter type, or extend from it. So when methods operate on objects that genuinely have a direct hierarchical relationship, inheritance is the best choice.
When inheritance is incorrectly applied, it requires creating overly complex class hierarchies of unrelated objects. The complexity of the code increases for a small gain. If this is the case, use templates instead.
A template describes an algorithm, such as deciding the result of a comparison between two objects of a class, or sorting them. The types (classes) of the objects being operated on vary, but the operation or logic is formally the same.
Inheritance, on the other hand, is used by the child class only to extend or make more specific the parent's functionality.
Hope that makes sense

C++ double dispatch "extensible" without RTTI

Does anyone know a way to have double dispatch handled correctly in C++ without using RTTI and dynamic_cast<>, and in which the class hierarchy remains extensible (that is, the base class can be derived from further, and its definition/implementation does not need to know about that)?
I suspect there is no way, but I'd be glad to be proven wrong :)
The first thing to realize is that double (or higher order) dispatch doesn't scale. With single
dispatch, and n types, you need n functions; for double dispatch n^2, and so on. How you
handle this problem partially determines how you handle double dispatch. One obvious solution is to
limit the number of derived types, by creating a closed hierarchy; in that case, double dispatch can
be implemented easily using a variant of the visitor pattern. If you don't close the hierarchy,
then you have several possible approaches.
If you insist that every pair corresponds to a function, then you basically need a:
std::map<std::pair<std::type_index, std::type_index>, void (*)(Base const& lhs, Base const& rhs)>
dispatchMap;
(Adjust the function signature as necessary.) You also have to implement the n^2 functions, and
insert them into the dispatchMap. (I'm assuming here that you use free functions; there's no
logical reason to put them in one of the classes rather than the other.) After that, you call:
(*dispatchMap[std::make_pair( std::type_index( typeid( obj1 ) ), std::type_index( typeid( obj2 ) ) )])( obj1, obj2 );
(You'll obviously want to wrap that into a function; it's not the sort of thing you want scattered
all over the code.)
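Putting the pieces together, a minimal working sketch of that dispatch map (the Foo/Bar types and the handlers are invented for illustration; note that this variant, like the answer above, does use typeid):
#include <iostream>
#include <map>
#include <typeindex>
#include <typeinfo>
#include <utility>

struct Base { virtual ~Base() {} };   // polymorphic, so typeid sees the dynamic type
struct Foo : Base {};
struct Bar : Base {};

typedef void (*Handler)(Base const&, Base const&);
typedef std::map<std::pair<std::type_index, std::type_index>, Handler> DispatchMap;

void fooFoo(Base const&, Base const&) { std::cout << "Foo/Foo\n"; }
void fooBar(Base const&, Base const&) { std::cout << "Foo/Bar\n"; }

DispatchMap makeDispatchMap() {
    DispatchMap m;
    m[std::make_pair(std::type_index(typeid(Foo)), std::type_index(typeid(Foo)))] = &fooFoo;
    m[std::make_pair(std::type_index(typeid(Foo)), std::type_index(typeid(Bar)))] = &fooBar;
    // ... one entry per (lhs, rhs) combination
    return m;
}

void dispatch(Base const& lhs, Base const& rhs) {
    static DispatchMap m = makeDispatchMap();
    DispatchMap::const_iterator it =
        m.find(std::make_pair(std::type_index(typeid(lhs)), std::type_index(typeid(rhs))));
    if (it != m.end())
        (*it->second)(lhs, rhs);
    else
        std::cout << "no handler for this combination\n";
}

int main() {
    Foo f; Bar b;
    dispatch(f, f);   // Foo/Foo
    dispatch(f, b);   // Foo/Bar
    dispatch(b, b);   // no handler
    return 0;
}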
A minor variant would be to say that only certain combinations are legal. In this case, you can use
find on the dispatchMap, and generate an error if you don't find what you're looking for.
(Expect a lot of errors.) The same solution could be used if you can define some sort of default
behavior.
If you want to do it 100% correctly, with some of the functions able to handle an intermediate class
and all of its derivatives, you then need some sort of more dynamic searching, and ordering to
control overload resolution. Consider for example:
             Base
            /    \
           /      \
         I1        I2
        /  \      /  \
       /    \    /    \
     D1a   D1b  D2a   D2b
If you have an f(I1, D2a) and an f(D1a, I2), which one should be chosen? The simplest solution
is just a linear search, selecting the first which can be called (as determined by dynamic_cast on
pointers to the objects), and manually managing the order of insertion to define the overload
resolution you wish. With n^2 functions, this could become slow fairly quickly, however. Since
there is an ordering, it should be possible to use std::map, but the ordering function is going to
be decidedly non-trivial to implement (and will still have to use dynamic_cast all over the
place).
All things considered, my suggestion would be to limit double dispatch to small, closed hierarchies,
and stick to some variant of the visitor pattern.
The "visitor pattern" in C++ is often equated with double dispatch. It uses no RTTI or dynamic_casts.
See also the answers to this question.
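For reference, a minimal sketch of visitor-based double dispatch over a closed hierarchy (the shape and visitor names are invented):
#include <iostream>

struct Circle;
struct Square;

struct Visitor {
    virtual void visit(Circle&) = 0;
    virtual void visit(Square&) = 0;
    virtual ~Visitor() {}
};

struct Shape {
    virtual void accept(Visitor& v) = 0;   // first dispatch: on the shape's dynamic type
    virtual ~Shape() {}
};

struct Circle : Shape {
    virtual void accept(Visitor& v) { v.visit(*this); }   // second dispatch: overload on Circle
};

struct Square : Shape {
    virtual void accept(Visitor& v) { v.visit(*this); }
};

struct Printer : Visitor {
    virtual void visit(Circle&) { std::cout << "circle\n"; }
    virtual void visit(Square&) { std::cout << "square\n"; }
};

int main() {
    Circle c;
    Square s;
    Shape* shapes[] = { &c, &s };
    Printer p;
    for (int i = 0; i < 2; ++i)
        shapes[i]->accept(p);   // no RTTI, no dynamic_cast
    return 0;
}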
The first problem is trivial. dynamic_cast involves two things: a run-time check and a type cast. The former requires RTTI, the latter does not. All you need to do to replace dynamic_cast with functionality that does the same without requiring RTTI is to have your own method of checking the type at run time. To do this, all you need is a simple virtual function that returns some sort of identification of what type it is, or what more-specific interface it complies with (that can be an enum, an integer ID, even a string).
For the cast, you can safely do a static_cast once you have done the run-time check yourself and you are sure that the type you are casting to is in the object's hierarchy. So, that solves the problem of emulating the "full" functionality of dynamic_cast without needing the built-in RTTI. Another, more involved solution is to create your own RTTI system (as is done in several pieces of software, like the LLVM that Matthieu mentioned).
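A sketch of that tag-based replacement (the Kind enum and node classes are invented); it compiles with RTTI disabled:
#include <iostream>

struct Node {
    enum Kind { IntKind, StrKind };
    virtual Kind kind() const = 0;   // hand-rolled replacement for RTTI
    virtual ~Node() {}
};

struct IntNode : Node {
    virtual Kind kind() const { return IntKind; }
    int value;
};

struct StrNode : Node {
    virtual Kind kind() const { return StrKind; }
    const char* value;
};

// dynamic_cast replacement: check the tag at run time, then static_cast.
IntNode* as_int(Node* n) {
    return n->kind() == Node::IntKind ? static_cast<IntNode*>(n) : 0;
}

int main() {
    IntNode i;
    i.value = 7;
    Node* n = &i;
    if (IntNode* p = as_int(n))   // safe: the tag was checked first
        std::cout << p->value << "\n";
    return 0;
}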
The second problem is a big one: how to create a double dispatch mechanism that scales well with an extensible class hierarchy. That's hard. At compile time (static polymorphism), this can be done quite nicely with function overloads (and/or template specializations). At run time, this is much harder. As far as I know, the only solution, as mentioned by Konrad, is to keep a dispatch table of function pointers (or something of that nature). With some use of static polymorphism and by splitting the dispatch functions into categories (by function signature and the like), you can avoid having to violate type safety, in my opinion.
But before implementing this, you should think very hard about your design to see if this double dispatch is really necessary, whether it really needs to be a run-time dispatch, and whether it really needs a separate function for each combination of the two classes involved (maybe you can come up with a reduced, fixed number of abstract classes that capture all the truly distinct methods you need to implement).
You may want to check how LLVM implements isa<>, dyn_cast<> and cast<> as a template system, since it's compiled without RTTI.
It is a bit cumbersome (requires tidbits of code in every class involved) but very lightweight.
LLVM Programmer's Manual has a nice example and a reference to the implementation.
(All 3 methods share the same tidbit of code)
You can fake the behaviour by implementing the compile-time logic of multiple dispatch yourself. However, this is extremely tedious. Bjarne Stroustrup has co-authored a paper describing how this could be implemented in a compiler.
The underlying mechanism – a dispatch table – could be dynamically generated. However, using this approach you would of course lose all syntactical support. You’d need to maintain a two-dimensional matrix of method pointers and manually look up the correct method depending on the argument types. This would render a simple (hypothetical) call
collision(foo, bar);
at least as complicated as
DynamicDispatchTable::lookup(collision_signature, FooClass, BarClass)(foo, bar);
since you didn’t want to use RTTI. And this is assuming that all your methods take only two arguments. As soon as more arguments are required (even if those aren’t part of the multiple dispatch) this becomes more complicated still, and would require circumventing type safety.

What kind of polymorphism is considered more idiomatic in C++? [closed]

C++, being a value-oriented language, doesn't seem to support OO (and thus sub-typing polymorphism) very well. As for parametric polymorphism, the lack of type inference on type parameters and the verbose syntax of templates make them challenging to use.
Please note that the only languages I know moderately well are Java (sub-typing polymorphism) and Haskell (parametric polymorphism). Each language leans towards one kind of polymorphism. C++, however, supports both (to some extent), but both seem to work in a manner that I find unintuitive. So when programming in C++ I have a pretty hard time deciding how exactly I should code.
So my question is what kind of polymorphism is considered more idiomatic in C++?
EDIT 1:
Explanation of my "C++ doesn't support OO well" point:
Dynamic method dispatch and LSP are very common in OO, aren't they? But when it comes to C++, applying these techniques without resorting to pointers (raw or smart) is not possible (or practical).
For example,consider a class Person with virtual method print which prints his name to the console. Let there be another class Student that extends Person and overrides print to print his name plus his school's name.
Now consider the following function:
void blah(const Person & p) {
p.print();
}
Here if I pass a Student object, print method would invoke print from Person, not from Student. Thus it defies the very basic idea of subtyping polymorphism.
Now I am aware that I can use dynamic allocation (i.e. pointers) to achieve subtyping polymorphism in this case. However, static allocation is more common in C++; pointers are used as a last resort (I remember having read that in some other thread here). So I find it difficult to reconcile the good practices that recommend static allocation over dynamic allocation (this is what I meant when I said C++ is value oriented) with subtyping polymorphism.
When using Java, I tend to use dynamic allocation everywhere, and thus subtyping polymorphism is quite natural there. This is not the case with C++, however.
Hope my point is clear now.
EDIT 2:
Okay, the example I gave in my edit 1 is wrong. But my point is still valid, and I have faced the problem many times. I am unable to recall all those cases off the top of my head.
Here's one case that comes to my mind.
In Java you can have references of a super type in your classes and then make them point to instances of any of its subtypes.
For example,
class A {
    B y1;
    B y2;
}

abstract class B {
    // yada yada
}

class B1 extends B {
    // yada yada
}

class B2 extends B {
    // yada yada
}
Here the references y1 and y2 in A can be made to point to instances of either B1, B2, or any other subclass of B. C++ references cannot be reassigned, so I would have to use pointers here. This proves that in C++ it's not possible to achieve all sorts of subtyping polymorphism without using pointers.
Having added the fifth vote to reopen gives me a chance at being the first to add another reply. Let's start with the claim that C++ doesn't support OO well. The example given is:
Now consider the following function:
void blah(const Person & p) {
p.print();
}
Here if I pass a Student object, print method would invoke print from Person, not from
Student. Thus it defies the very basic idea of subtyping polymorphism.
To make a long story short, this example is just plain wrong -- or more accurately, the claim made about the example is wrong. If you pass a Student object to this function, what will be invoked will be Student::print, not Person::print as claimed above. Thus, C++ implements polymorphism exactly as the OP apparently wishes.
The only part of this that isn't idiomatic C++ is that you normally use operator<< to print out objects, so instead of print (apparently) printing only to std::cout, you should probably have it take a parameter, and instead of blah, overload operator<<, something like:
std::ostream &operator<<(std::ostream &os, Person const &p) {
return p.print(os);
}
Now, it is possible to create a blah that would act as described, but to do so you'd have to have it take its parameter by value:
void blah(Person p) {
p.print();
}
So there is some degree of truth to the original claim -- specifically, when/if you want to use polymorphism, you do need to use pointers or references.
Note, however, that this isn't related (any more than peripherally) to how you allocate objects. You can pass by reference regardless of how the object in question was allocated. If a function takes a pointer, you can pass the address of an automatically or statically allocated object. If it takes a reference, you can pass a dynamically allocated object.
As far as type inference goes, C++ has it for function templates, but not class templates. C++0x adds decltype and a new meaning for auto (which had been a reserved word since the dawn of C, but essentially never used) that allow type inference in a wider variety of situations. It also adds lambdas (the lack of which really is a serious problem with the current C++), which can use auto. There are still situations where type inference isn't supported but would be nice -- but at least IMO, auto (in particular) reduces that quite a bit.
As far as verbosity goes, there's little question that it's at least partly true. Somewhat like Java, your degree of comfort in writing C++ tends to depend to at least some degree on an editor that includes various "tricks" (e.g., code completion) to help reduce the amount you type. Haskell excels in this respect -- Haskell lets you accomplish more per character typed than almost any other language around (APL being one of the few obvious exceptions). At the same time, it's worth noting that "generics" (in either Java or C#) are about as verbose, but much less versatile than C++ templates. In terms of verbosity, C++ stands somewhere between Haskell at (or close to) one extreme, and Java and C# at (or, again, close to) the opposite extreme.
Getting to the original question of which is used more often: there was a time when C++ didn't have templates, so essentially your only choice was subtyping. As you can probably guess, at that time it was used a lot, even when it wasn't really the best choice.
C++ has had templates for a long time now. Templates are now so common that they're essentially unavoidable. Just for example, IOStreams, which originally used only inheritance, now also use templates. The standard containers, iterators, and algorithms all use templates heavily (and eschew inheritance completely).
As such, older code (and new code from coders who are older or more conservative) tends to concentrate primarily or exclusively on subtyping. Newer and/or more liberally written code, tends to use templates more. At least in my experience, most reasonably recent code uses a mixture of both. Between the two, I'll normally use subtyping when I have to, but prefer templates when they can do the job.
Edit: demo code showing polymorphism:
#include <iostream>

class Person {
public:
    virtual void print() const { std::cout << "Person::print()\n"; }
};

class Student : public Person {
public:
    virtual void print() const { std::cout << "Student::print()\n"; }
};

void blah(const Person &p) {
    p.print();
}

int main() {
    Student s;
    blah(s);
    return 0;
}
result (cut and pasted from running code above on my computer, compiled with MS VC++):
Student::print()
So yes, it does polymorphism exactly as you'd want -- and note that in this example, the object in question is allocated on the stack, not using new.
Edit 2: (in response to edit of question):
It's true that you can't assign to a reference. That's orthogonal to questions of polymorphism though -- it doesn't matter (for example) whether what you want to assign is of the same or different type from what it was initialized with, you can't do an assignment either way.
At least to me, it would seem obvious that there must be some difference in capabilities between references and pointers, or there would have been no reason to add references to the language. If you want to reseat them to refer to different objects, you need to use pointers, not references. Generally speaking, I'd use a reference when you can, and a pointer if you have to. At least IMO, a reference as a class member is usually highly suspect at best (e.g., it means you can't assign objects of that type). Bottom line: if you want what a reference does, by all means use a reference -- but complaining because a reference isn't a pointer doesn't seem (at least to me) to make much sense.
Both options have their advantages. Java-style inheritance is far more common in real world C++ code. Since your code typically has to play well with others, I would focus on subtyping polymorphism first since that's what most people know well.
Also, you should consider whether polymorphism is really the right way to express the solution to your problems. Far too often people build elaborate inheritance trees when they aren't necessary.
Templates are evaluated at compile time, by essentially creating a copy of the templated function or object for each set of template arguments. If you need polymorphism during runtime (i.e., a std::vector<Base*> vec and vec.push_back(new Derived()) every now and then...) you're forced to use subtypes and virtual methods.
EDIT: I guess I should put forward the case where templates are better. Templates are "open", in that a templated function or object will work with classes that you haven't written yet... so long as those classes fit your interface. For example, auto_ptr<> works with any class I can make, even though the standard library designers never knew about my classes. Similarly, templated algorithms such as reverse work on any class that supports dereferencing and operator++.
When using subtyping, you have to write down the class hierarchy somewhere. That is, you need to say B extends A somewhere in your code... before you can use B like A. On the other hand, you DON'T have to say B implements RandomAccessIterator for it to work with templated code.
In the few situations where both satisfy your requirements, then use the one you're more comfortable with using. In my experience, this situation doesn't happen very often.
'Suck' is a pretty strong term. Perhaps we need to think about what Stroustrup was aiming for with C++.
C++ was designed with certain principles in mind. Amongst others:
It had to be backwards compatible with C.
It shouldn't restrict the programmer from doing what they wanted to.
You shouldn't pay for things you don't use.
So, that first one makes a pretty stiff standard to stick to - everything that was legal in C had to work (and work with the same effect) when compiled in C++. Because of this, a lot of necessary compromises were included.
On the other hand, C++ also gives you a lot of power (or, as it has been described for C, 'enough rope to hang yourself.') With power comes responsibility, and the compiler won't argue with you if you choose to do something stupid.
I'll admit now, it's been about 15 years since I last looked at Haskell, so I'm a bit rusty on that - but parametric polymorphism (full type safety) can always be overridden in C++.
Subtyping polymorphism can be, too. Essentially, anything can be overridden - the compiler won't argue with you if you insist on casting one pointer type to another (no matter how insane).
So, having got that out of the way, C++ does give lots of options with polymorphism.
The classic public inheritance models 'is-a' - sub-classing. It's very common.
Protected inheritance models 'is-implemented-in-terms-of' (inheriting implementation, but not interface)
Private inheritance models 'is-implemented-using' (containing the implementation)
The latter two are much less common. Aggregation (creating a class instance inside the class) is much more common, and often more flexible.
But C++ also supports multiple inheritance (true implementation and interface multiple inheritance), with all the inherent complexity and risks of repeated inheritance that brings (the dreaded diamond pattern) - and also ways of dealing with that.
(Scott Meyers' 'Effective C++' and 'More Effective C++' will help untangle the complexities if you're interested.)
I'm not convinced that C++ is a 'value oriented language' to the exclusion of other things. C++ can be what you want it to be, pretty much. You just need to know what it is you want, and how to make it do it.
It's not that C++ sucks, so much as C++ is very sharp, and you can easily cut yourself.

compile time polymorphism and runtime polymorphism

I noticed that in some places polymorphism refers just to virtual functions, while in others it includes function overloading and templates. Later, I found there are two terms: compile-time polymorphism and run-time polymorphism. Is that true?
My question is when we talked about polymorphism generally, what's the widely accepted meaning?
Yes, you're right: in C++ there are two recognized "types" of polymorphism. And they mean pretty much what you think they mean.
Dynamic polymorphism
is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing: either deriving from a base class and overriding one or more virtual functions, or implementing an interface (which in C++ is done by overriding the virtual functions belonging to an abstract base class).
Static polymorphism
takes place at compile time, and could be considered a variation of duck typing. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this:
template <typename T>
T add(const T& lhs, const T& rhs) { return lhs + rhs; }
If this had been dynamic polymorphism, then we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implements that interface (or derives from that base class) can be used despite their different implementations, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.
However, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism.
Here, though, we don't require you to derive from anything; the type T just has to define the + operator. The call is then resolved statically. So at compile time we can switch between any valid types, as long as they behave the same (meaning that they define the members we need).
This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept.
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading.
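For the overloading half of that, a quick sketch (the function names are invented):
#include <iostream>
#include <string>

// The same conceptual operation, different code per type,
// selected entirely at compile time.
std::string classify(int)    { return "an int"; }
std::string classify(double) { return "a double"; }

int main() {
    std::cout << classify(1) << "\n";     // picks the int overload
    std::cout << classify(1.0) << "\n";   // picks the double overload
    return 0;
}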
However, when a C++ programmer just says polymorphism, they generally mean dynamic/runtime polymorphism.
(Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates)
"Polymorphism" literally means "many forms". The term is unfortunately a bit overloaded in computer science (excuse the pun).
According to FOLDOC, polymorphism is "a concept first identified by Christopher Strachey (1967) and developed by Hindley and Milner, allowing types such as list of anything."
In general, it's "a programming language feature that allows values of different data types to be handled using a uniform interface", to quote Wikipedia, which goes on to describe two main types of polymorphism:
Parametric polymorphism is when the same code can be applied to multiple data types. Most people in the object-oriented programming community refer to this as "generic programming" rather than polymorphism. Generics (and to some extent templates) fit into this category.
Ad-hoc polymorphism is when different code is used for different data-types. Overloading falls into this category, as does overriding. This is what people in the object-oriented community are generally referring to when they say "polymorphism". (and in fact, many mean overriding, not overloading, when they use the term "polymorphism")
For ad-hoc polymorphism there's also the question of whether the resolution of implementation code happens at run-time (dynamic) or compile-time (static). Method overloading is generally static, and method overriding is dynamic. This is where the terms static/compile-time polymorphism and dynamic/run-time polymorphism come from.
Usually people are referring to run-time polymorphism, in my experience...
When a C++ programmer says "polymorphism" he most likely means subtype polymorphism, which means "late binding" or "dynamic binding" with virtual functions. Function overloading and generic programming are both instances of polymorphism and they do involve static binding at compile time, so they can be referred to collectively as compile-time polymorphism. Subtype polymorphism is almost always referred to as just polymorphism, but the term could also refer to all of the above.
In its most succinct form, polymorphism means the ability of one type to appear as if it is another type.
There are two main types of polymorphism.
Subtype polymorphism: if D derives from B then D is a B.
Interface polymorphism: if C implements an interface I, then a C can be used as an I.
The first is what you are thinking of as runtime polymorphism. The second does not really apply to C++ and is really a concept that applies to Java and C#.
Some people do think of overloading in the special case of operators (+, -, /, *) as a type of polymorphism because it allows you to think of types that have overloaded these operators as replaceable for each other (i.e., + for string and + for int). This concept of polymorphism most often applies to dynamic languages. I consider this an abuse of the terminology.
As for template programming, you will see some use the term "polymorphism" but this is really a very different thing than what we usually mean by polymorphism. A better name for this concept is "generic programming" or "genericity."
Various types of function overloading (compile-time polymorphism):
Polymorphism means the same entity behaving differently at different times. Compile-time polymorphism is also called static binding.
http://churmura.com/technology/programming/various-types-of-function-overloading-compile-time-polymorphism-static-binding/39886/
A simple explanation of compile-time polymorphism and run-time polymorphism, from questionscompiled.com:
Compile-time polymorphism:
C++ supports polymorphism. One function, multiple purposes; in short, many functions having the same name but different function bodies. For every function call, the compiler binds the call to one function definition at compile time. This decision among several functions is made by considering the formal arguments of the call: their data types and their sequence.
Run-time polymorphism:
C++ allows binding to be delayed until run time. When you have a function with the same name, the same number of arguments, and the same parameter types in the same order in a base class as well as a derived class, a call of the form base_class_type_ptr->member_function(args); will always call the base class member function. The keyword virtual on a member function in the base class tells the compiler to delay the binding until run time.
Every class with at least one virtual function has a vtable that helps with this binding at run time. By looking at the object the base class pointer actually points to, the call correctly invokes the member function of one of the possible derived (or base) classes.
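For instance, a minimal sketch of that delayed binding (names invented):
#include <iostream>

struct Instrument {
    virtual void play() const { std::cout << "generic sound\n"; }   // virtual: bind at run time
    virtual ~Instrument() {}
};

struct Guitar : Instrument {
    virtual void play() const { std::cout << "strum\n"; }
};

int main() {
    Guitar g;
    Instrument* base_ptr = &g;   // base class type pointer
    base_ptr->play();            // vtable lookup at run time: prints "strum"
    return 0;
}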
Yes, you are basically right. Compile-time polymorphism is the use of templates (whose instantiated types vary, but are fixed at compile time), whereas run-time polymorphism refers to the use of inheritance and virtual functions (whose instance types vary, and are only fixed at run time).