According to the wikipedia page for Parametric Polymorphism:
Some implementations of type polymorphism are superficially similar to parametric polymorphism while also introducing ad hoc aspects. One example is C++ template specialization.
Question: Why is C++ said to only implement something superficially similar to parameterized polymorphism? In particular, aren't templates an example of full-on parametric polymorphism?
The article you linked to explains that. The very text you quoted actually gives one example of something that sets C++'s templates apart from pure parametric polymorphism: C++ template specialisation.
It continues on this theme:
Following Christopher Strachey,[2] parametric polymorphism may be contrasted with ad hoc polymorphism, in which a single polymorphic function can have a number of distinct and potentially heterogeneous implementations depending on the type of argument(s) to which it is applied. Thus, ad hoc polymorphism can generally only support a limited number of such distinct types, since a separate implementation has to be provided for each type.
Thus, as described, C++ templates come close to — but are not exactly — parametric polymorphism.
Templated functions in C++ work on the basis of "substitution" of the template parameters, which essentially means that the compiler generates yet another version of the function in which the template arguments are hardcoded.
Suppose you have this in C++:
template <typename T>
T add(T a, T b) {
    return a + b;
}

int main() {
    int i = add(2, 3);
    double d = add(2.7, 3.8);
    return i + (int)d;
}
During compilation, that will result in two functions: int add(int a, int b) { return a + b; } and double add(double a, double b) { return a + b; }
One function will ONLY handle ints, and the other will ONLY handle doubles. No polymorphism.
So really, you end up with as many implementations as the number of argument variations.
"But why isn't this parametric polymorphism?" you might ask?
You need the full source code of the add function in order to call it with your own particular type that overloads the binary + operator. That's the detail that makes the difference.
If C++ had proper parametric polymorphism, like C# for instance, your final compiled implementation of add would contain enough logic to determine at runtime what the + overload would be for any given parameter acceptable to add. And you wouldn't need the source code of that function to call it with new types you invented.
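To make this concrete, here is a minimal sketch (Money is a made-up type for illustration): the call compiles only because the full source of the template is visible at the point where the new type is used.

// The template's definition must be visible to every caller:
template <typename T>
T add(T a, T b) { return a + b; }

// A brand-new type the template author never heard of:
struct Money { long cents; };
Money operator+(Money a, Money b) { return Money{a.cents + b.cents}; }

int main() {
    Money m = add(Money{150}, Money{75});  // the compiler instantiates add<Money> here
    return static_cast<int>(m.cents);      // 225
}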
What does this mean in reality?
But don't take this to mean that C++ is less powerful or C# more powerful. It's simply one of many language feature details.
If you have the full source available for your templated functions, then C++'s semantics are far superior. If you only have a static or dynamic library at your disposal, then a parametric polymorphic implementation (e.g. C#) is superior.
Sorry for the newbie question, but I have a feeling I am missing something here:
If I have a certain class template which looks like this (basically the only way to pass a lambda to a function in C++, unless I am mistaken):
template<typename V, typename F>
class Something
{
public:
    int some_method(V val, F func) {
        double intermediate = val.do_something();
        return func(intermediate);
    }
};
By reading the implementation of this class, I can see that the V class must implement double do_something(), and that F must be a function/functor with the signature int F(double).
However, in languages like Java or C#, the constraints for the generic parameters are explicitly stated in the generic class signature, so they are obvious without having to look at the source code, e.g.
class Something<V> where V : IDoesSomething // interface with the DoSomething() method
{
    // func delegate signature is explicit
    public int SomeMethod(V val, Func<double, int> func)
    {
        double intermediate = val.DoSomething();
        return func(intermediate);
    }
}
My question is: how do I know how to implement more complex input arguments in practice? Can this somehow be documented using code only, when writing a library with template classes in C++, or is the only way to parse the code manually and look for parameter usage?
(or third possibility, add methods to the class until the compiler stops failing)
C# and Java Generics have similar syntax and some common uses with C++ templates, but they are very different beasts.
Here is a good overview.
In C++, template checking is traditionally done by instantiating the code, and the requirements live in documentation.
Note that many of the requirements of C++ templates are semantic, not syntactic; iterators need not only have the proper operations, those operations need to have the proper meaning.
You can check syntactic properties of types in C++ templates. Off the top of my head, there are 6 basic ways.
You can have a traits class requirement, like std::iterator_traits.
You can do SFINAE, an accidentally Turing-complete template metaprogramming technique.
You can use concepts and/or requires clauses if your compiler is modern enough.
You can generate static_asserts to check properties.
You can use traits ADL functions, like begin.
You can just duck type, and use it as if it had the properties you want. If it quacks like a duck, it is a duck.
All of these have pluses and minuses.
The downside to all of them is that they can be harder to set up than "this parameter must inherit from the type Foo". Concepts can handle that requirement too, only a bit more verbosely than Java.
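For instance, here is a sketch of the earlier Something class constrained with C++20 concepts; the concept names DoesSomething and ReturnsIntFromDouble are made up for illustration:

#include <concepts>

template <typename V>
concept DoesSomething = requires(V v) {
    { v.do_something() } -> std::convertible_to<double>;
};

template <typename F>
concept ReturnsIntFromDouble = requires(F f, double d) {
    { f(d) } -> std::convertible_to<int>;
};

template <DoesSomething V, ReturnsIntFromDouble F>
class Something
{
public:
    int some_method(V val, F func) {
        double intermediate = val.do_something();
        return func(intermediate);
    }
};

Now the constraints are visible in the class signature, much like the C# version, and violations are reported at the point of instantiation rather than deep inside the body.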
Java style type erasure can be dominated using C++ templates. std::function is an example of a duck typed type eraser that allows unrelated types to be stored as values; doing something as restricted as Java is rarely worthwhile, as the machinery to do it is harder than making something more powerful.
C# reification cannot be fully duplicated by C++, because C#'s runtime environment ships with a compiler, and can effectively compile a new type when you instantiate at runtime. I have seen people ship compilers with C++ programs, compile dynamic libraries, then load and execute them, but that isn't something I'd advise.
Using modern c++20, you can:
template<Number N>
struct Polynomial;
where Number is a concept which checks N (a type) against its properties. This is a bit like the Java signature stuff on steroids.
And once compile-time reflection arrives in a later standard, you'll be able to do things that make templates look like preprocessor macros.
What exactly is new in C++ concepts? In my understanding they are functionally equivalent to using static_assert, but in a 'nice' manner, meaning that compiler errors will be more readable (as Bjarne Stroustrup said, you won't get 10 pages of errors, but just one).
Basically, is it true that everything you can do with concepts you can also achieve using static_assert?
Is there something I am missing?
tl;dr
Compared to static_asserts, concepts are more powerful because:
they give you good diagnostics that you wouldn't easily achieve with static_asserts
they let you easily overload template functions without std::enable_if (which is impossible with static_asserts alone)
they let you define static interfaces and reuse them without losing diagnostics (otherwise multiple static_asserts would be needed in each function)
they let you express your intents better and improve readability (which is a big issue with templates)
This can ease the worlds of:
templates
static polymorphism
overloading
and be the building block for interesting paradigms.
What are concepts?
Concepts express "classes" (not in the C++ sense, but rather "groups") of types that satisfy certain requirements. As an example, you can see that the Swappable concept expresses the set of types that:
allows calls to std::swap
And you can easily see that, for example, std::string, std::vector, std::deque, int etc... satisfy this requirement and can therefore be used interchangeably in a function like:
template<typename Swappable>
void func(Swappable& a, Swappable& b) {  // non-const: std::swap mutates its arguments
    std::swap(a, b);
}
Concepts have always existed in C++ informally; the actual language feature (standardized in C++20) just lets you express and enforce them in the language.
Better diagnostic
As far as better diagnostics go, we will just have to trust the committee for now. But the output they "guarantee":
error: no matching function for call to 'sort(list<int>&)'
sort(l);
^
note: template constraints not satisfied because
note: `T' is not a/an `Sortable' type [with T = list<int>] since
note: `declval<T>()[n]' is not valid syntax
is very promising.
It's true that you can achieve a similar output using static_asserts but that would require different static_asserts per function and that could get tedious very fast.
As an example, imagine you have to enforce the set of requirements given by the Container concept in 2 functions taking a template parameter; you would need to replicate them in both functions:
template<typename C>
void func_a(...) {
    static_assert(...);
    static_assert(...);
    // ...
}

template<typename C>
void func_b(...) {
    static_assert(...);
    static_assert(...);
    // ...
}
Otherwise you would lose the ability to distinguish which requirement was not satisfied.
With concepts instead, you can just define the concept and enforce it by simply writing:
template<Container C>
void func_a(...);
template<Container C>
void func_b(...);
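For illustration, here is a minimal sketch of how such a concept could be written in the C++20 syntax that was eventually standardized; the requirements listed are a small, made-up subset of the real Container requirements:

#include <concepts>
#include <cstddef>

template <typename C>
concept Container = requires(C c) {
    typename C::value_type;                                       // must expose a value_type
    typename C::iterator;                                         // and an iterator type
    { c.begin() } -> std::convertible_to<typename C::iterator>;  // iterable
    { c.end() }   -> std::convertible_to<typename C::iterator>;
    { c.size() }  -> std::convertible_to<std::size_t>;           // sized
};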
Concepts overloading
Another great feature that is introduced is the ability to overload template functions on template constraints. Yes, this is also possible with std::enable_if, but we all know how ugly that can become.
As an example you could have a function that works on Containers and overload it with a version that happens to work better with SequenceContainers:
template<Container C>
int func(C& c);
template<SequenceContainer C>
int func(C& c);
The alternative, without concepts, would be this:
template<typename T>
std::enable_if_t<
    Container<T>::value && !SequenceContainer<T>::value,  // must exclude sequence containers by hand to avoid ambiguity
    int
> func(T& c);

template<typename T>
std::enable_if_t<
    SequenceContainer<T>::value,
    int
> func(T& c);
Definitely uglier and possibly more error prone.
Cleaner syntax
As you have seen in the examples above the syntax is definitely cleaner and more intuitive with concepts. This can reduce the amount of code required to express constraints and can improve readability.
As seen before you can actually get to an acceptable level with something like:
static_assert(Concept<T>::value);
but at that point you would lose the great diagnostics of distinct static_asserts. With concepts you don't need this tradeoff.
Static polymorphism
And finally concepts have interesting similarities to other functional paradigms like type classes in Haskell. For example they can be used to define static interfaces.
For example, let's consider the classical approach for an (infamous) game object interface:
struct Object {
    // …
    virtual void update() = 0;
    virtual void draw() = 0;
    virtual ~Object() = default;
};
Then, assuming you have a std::vector of pointers to derived objects (dynamic dispatch requires indirection through a pointer or reference), you can do:
for (auto& o : objects) {
    o->update();
    o->draw();
}
Great, but unless you want to use multiple inheritance or entity-component-based systems, you are pretty much stuck with only one possible interface per class.
But if you actually want static polymorphism (polymorphism that is not that dynamic after all) you could define an Object concept that requires update and draw member functions (and possibly others).
At that point you can just create a free function:
template<Object O>
void process(O& o) {
o.update();
o.draw();
}
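For completeness, a minimal sketch of what the Object concept used above might look like in C++20 syntax (only the two members from the example are required):

template <typename T>
concept Object = requires(T t) {
    t.update();  // must have a callable update()
    t.draw();    // and a callable draw()
};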
And after that you could define another interface for your game objects with other requirements. The beauty of this approach is that you can develop as many interfaces as you want without modifying your classes or requiring a base class.
And they are all checked and enforced at compile time.
This is just a stupid example (and a very simplistic one), but concepts really open up a whole new world for templates in C++.
If you want more information you can read this nice article on C++ concepts vs Haskell type classes.
I created this .h file
#pragma once

namespace Core
{
    class IComparableObject
    {
    public:
        virtual int CompareTo(IComparableObject obj)=0;
    };
}
But the compiler doesn't like the IComparableObject obj parameter if the method is pure virtual, while
virtual int CompareTo(IComparableObject obj) {}
is OK. However, I want it to be pure virtual. How can I manage that? Is it possible?
You are trying to pass obj by value. You cannot pass an abstract class instance by value, because no abstract class can ever be instantiated (directly). To do what you want, you have to pass obj by reference, for example like so:
virtual int CompareTo(IComparableObject const &obj)=0;
It works when you give an implementation for CompareTo because then the class is not abstract any longer. But be aware that slicing occurs! You don't want to pass obj by value.
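Putting that together, the header might look like this (a minimal sketch; the virtual destructor is an addition here, but any base class meant for polymorphic use should have one):

#pragma once

namespace Core
{
    class IComparableObject
    {
    public:
        virtual ~IComparableObject() = default;                   // polymorphic bases need a virtual destructor
        virtual int CompareTo(IComparableObject const& obj) = 0;  // pass by reference, not by value
    };
}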
Well I have to give an unexpected answer here! Dennycrane said you can do this:
virtual int CompareTo(IComparableObject const &obj)=0;
but this is not correct either. Oh yes, it compiles, but it is useless because it can never be implemented correctly.
This issue is fundamental to the collapse of (statically typed) Object Orientation, so it is vital that programmers using OO recognize the issue. The problem has a name, it is called the covariance problem and it destroys OO utterly as a general programming paradigm; that is, a way of representing and independently implementing general abstractions.
This explanation will be a bit long and sloppy so bear with me and try to read between the lines.
First, an abstract class with a pure virtual method taking no arguments can be easily implemented in any derived class, since the method has access to the non-static data variables of the derived class via the this pointer. The this pointer has the type of a pointer to the derived class, and so we can say it varies along with the class, in fact it is covariant with the derived class type.
Let me call this kind of polymorphism first order, it clearly supports dispatching predicates on the object type. Indeed, the return type of such a method may also vary down with the object and class type, that is, the return type is covariant.
Now, I will generalise the idea of a method with no arguments to allow arbitrary scalar arguments (such as ints) claiming this changes nothing: this is merely a family of methods indexed by the scalar type. The important property here is that the scalar type is closed. In a derived class exactly the same scalar type must be used. in other words, the type is invariant.
General introduction of invariant parameters to a virtual function still permits polymorphism, but the result is still first order.
Unfortunately, such functions have limited utility, although they are very useful when the abstraction is only first order: a good example is device drivers.
But what if you want to model something which is actually interesting, that is, it is at least a relation?
The answer to this is: you cannot do it. This is a mathematical fact and has nothing to do with the programming language involved. Let's suppose you have an abstraction for, say, numbers, and you want to add one number to another number, or compare them (as in the OP's example). Ignoring symmetry, if you have N implementations, you will have to write N^2 functions to perform the operations. If you add a new implementation of the abstraction, you have to write 2N+1 new functions.
Now, I have the first proof that OO is screwed: you cannot fit N^2 methods into a virtual dispatch schema because such a schema is linear. N classes gives you N methods you can implement and for N>1, N^2 > N, so OO is screwed, QED.
In a C++ context you can see the problem; consider:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) { .. }
};
Arggg! We're screwed! We can't fill in the .. part here because we only have a reference to an abstraction, which has no data in it to compare to. Of course this must be the case, because there are an open/indeterminate/infinite number of possible implementations. There's no possible way to write a single comparison routine as an axiom.
Of course, if you have various property routines, or a common universal representation you can do it, but this does not count, because then the mapping to the universal representation is parameterless and thus the abstraction is only first order. For example if you have various integer representations and you add them by converting both to GNU gmp's data type mpz, then you are using two covariant projection functions and a single global non-polymorphic comparison or addition function.
This is not a counter example, it is a non-solution of the problem, which is to represent a relation or method which is covariant in at least two variables (at least self and other).
You may think you could solve this with:
struct MyComparable : IComparableObject {
    int CompareTo(MyComparable &other) { .. }
};
After all you can implement this interface because you know the representation of other now, since it is MyComparable.
Do not laugh at this solution, because it is exactly what Bertrand Meyer did in Eiffel, and it is what many people do in C++ with a small change to try to work around the fact that it isn't type safe and doesn't actually override the base-class function:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) {
        try {
            MyComparable &sibling = dynamic_cast<MyComparable&>(other);
            ...
        }
        catch (std::bad_cast&) { return 0; }
    }
};
This isn't a solution. It says that two things aren't equal just because they have different representations. That does not meet the requirement, which is to compare two things in the abstract. Two numbers, for example, cannot fail to be equal just because the representation used is different: zero equals zero, even if one is an mpz and the other an int. Remember: the idea is to properly represent an abstraction, and that means the behaviour must depend only on the abstract value, not the details of a particular implementation.
Some people have tried double dispatch. Clearly, that cannot work either. There is no possible escape from the basic issue here: you cannot stuff a square into a line.
virtual function dispatch is linear, second order problems are quadratic, so OO cannot represent second order problems.
Now I want to be very clear here that C++ and other statically typed OO languages are broken, not because they can't solve this problem, because it cannot be solved, and it isn't a problem: it's a simple fact. The reason these languages and the OO paradigm in general are broken is because they promise to deliver general abstractions and then fail to do so. In the case of C++ this is the promise:
struct IComparableObject { virtual int CompareTo(IComparableObject obj)=0; };
and here is where the implicit contract is broken:
struct MyComparable : IComparableObject {
    int CompareTo(IComparableObject &other) { throw 00; }
};
because the implementation I gave there is effectively the only possible one.
Well, before leaving, you may ask: what is the Right Way (TM)?
The answer is: use functional programming. In C++ that means templates.
template<class T, class U> int compare(T,U);
So if you have N types to compare, and you actually compare all combinations, then yes indeed you have to provide N^2 specialisations, which shows templates deliver on the promise, at least in this respect. It's a fact: you can't dispatch at run time over an open set of types if the function is variant in more than one parameter.
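As a sketch of what that looks like in practice (the types and the fallback strategy are made up for illustration), each pair of representations gets its own implementation, resolved at compile time:

#include <string>

// Generic fallback for any pair of types that can be compared with <:
template <class T, class U>
int compare(const T& a, const U& b) {
    if (a < b) return -1;
    if (b < a) return 1;
    return 0;
}

// One of the N^2 dedicated implementations, for one specific pair:
inline int compare(const std::string& a, const char* b) {
    return a.compare(b);
}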
BTW: in case you aren't convinced by theory, just go look at the ISO C++ Standard library and see how much virtual function polymorphism is used there, compared to functional programming with templates.
Finally please note carefully that I am not saying classes and such like are useless, I use virtual function polymorphism myself: I'm saying that this is limited to particular problems and not a general way to represent abstractions, and therefore not worthy of being called a paradigm.
From C++03, §10.4/3:
An abstract class shall not be used as a parameter type, as a function return type, or as the type of an explicit conversion. Pointers and references to an abstract class can be declared.
Passing obj as a const reference is allowed.
When the CompareTo member function is pure virtual, IComparableObject is an abstract class.
You can't directly copy an object of an abstract class.
When you pass an object by value you're directly copying that object.
Instead of passing by value, you can pass by reference to const.
That is, formal argument type IComparableObject const&.
By the way, the function should probably be declared const so that it can be called on a const object.
Also, instead of #pragma once, which is non-standard (but supported by most compilers), consider an ordinary include guard.
Also, when posting code that illustrates a problem, be sure to post exact code. In this case, there's a missing semicolon at the end, indicating manual typing of the code (and so that there could be other typos not so easily identified as such, but instead misidentified as part of your problem). Simply copy and paste real code.
Cheers & hth.,
I noticed that in some places polymorphism refers just to virtual functions, while in others it also includes function overloading and templates. Later, I found there are two terms: compile-time polymorphism and run-time polymorphism. Is that true?
My question is: when we talk about polymorphism generally, what is the widely accepted meaning?
Yes, you're right: in C++ there are two recognized "types" of polymorphism. And they mean pretty much what you think they mean.
Dynamic polymorphism
is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing: either deriving from a base class and overriding one or more virtual functions, or implementing an interface (which in C++ is done by overriding the virtual functions belonging to the abstract base class).
Static polymorphism
takes place at compile time, and could be considered a variation of duck typing. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this:
template <typename T>
T add(const T& lhs, const T& rhs) { return lhs + rhs; }
If this had been dynamic polymorphism, then we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implement that interface (or derive from that base class) can be used despite their different implementations, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.
However, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism.
Here, though, we don't require you to derive from anything; the type T just has to define the + operator. The call is then resolved statically. So at compile time, we can switch between any valid types as long as they behave the same (meaning that they define the members we need).
This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept.
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading.
However, when a C++ programmer just says polymorphism, they generally mean dynamic/runtime polymorphism.
(Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates)
"Polymorphism" literally means "many forms". The term is unfortunately a bit overloaded in computer science (excuse the pun).
According to FOLDOC, polymorphism is "a concept first identified by Christopher Strachey (1967) and developed by Hindley and Milner, allowing types such as list of anything."
In general, it's "a programming language feature that allows values of different data types to be handled using a uniform interface", to quote Wikipedia, which goes on to describe two main types of polymorphism:
Parametric polymorphism is when the same code can be applied to multiple data types. Most people in the object-oriented programming community refer to this as "generic programming" rather than polymorphism. Generics (and to some extent templates) fit into this category.
Ad-hoc polymorphism is when different code is used for different data-types. Overloading falls into this category, as does overriding. This is what people in the object-oriented community are generally referring to when they say "polymorphism". (and in fact, many mean overriding, not overloading, when they use the term "polymorphism")
For ad-hoc polymorphism there's also the question of whether the resolution of implementation code happens at run-time (dynamic) or compile-time (static). Method overloading is generally static, and method overriding is dynamic. This is where the terms static/compile-time polymorphism and dynamic/run-time polymorphism come from.
Usually people are referring to run-time polymorphism, in my experience.
When a C++ programmer says "polymorphism" he most likely means subtype polymorphism, which means "late binding" or "dynamic binding" with virtual functions. Function overloading and generic programming are both instances of polymorphism and they do involve static binding at compile time, so they can be referred to collectively as compile-time polymorphism. Subtype polymorphism is almost always referred to as just polymorphism, but the term could also refer to all of the above.
In its most succinct form, polymorphism means the ability of one type to appear as if it is another type.
There are two main types of polymorphism.
Subtype polymorphism: if D derives from B then D is a B.
Interface polymorphism: if C implements an interface I.
The first is what you are thinking of as runtime polymorphism. The second does not really apply to C++; it is really a concept that applies to Java and C#.
Some people do think of overloading in the special case of operators (+, -, /, *) as a type of polymorphism because it allows you to think of types that have overloaded these operators as replaceable for each other (i.e., + for string and + for int). This concept of polymorphism most often applies to dynamic languages. I consider this an abuse of the terminology.
As for template programming, you will see some use the term "polymorphism" but this is really a very different thing than what we usually mean by polymorphism. A better name for this concept is "generic programming" or "genericity."
See also this short piece on the various types of function overloading (compile-time polymorphism, static binding): "Polymorphism means the same entity behaving differently at different times. Compile-time polymorphism is also called static binding."
http://churmura.com/technology/programming/various-types-of-function-overloading-compile-time-polymorphism-static-binding/39886/
A simple explanation of compile-time polymorphism and run-time polymorphism, from questionscompiled.com:
Compile time Polymorphism:
C++ supports polymorphism: one function, multiple purposes; or in short, many functions having the same name but different function bodies.
For every function call, the compiler binds the call to one function definition at compile time. This decision among several functions is made by considering the formal arguments of the function, their data types, and their sequence.
Run time polymorphism:
C++ allows binding to be delayed until run time. When you have a function with the same name, the same number of arguments, and the same data types in the same sequence in both the base class and a derived class, a call of the form base_class_type_ptr->member_function(args); will always call the base class member function. The keyword virtual on a member function in the base class tells the compiler to delay the binding until run time.
Every class with at least one virtual function has a vtable that helps with binding at run time. Based on what the base-class-typed pointer actually points to, the call is correctly dispatched to the member function of one of the possible derived or base classes.
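A minimal, self-contained sketch of the difference described above (the class and function names are made up):

#include <iostream>

struct Base {
    void greet() { std::cout << "Base::greet\n"; }          // non-virtual: static binding
    virtual void speak() { std::cout << "Base::speak\n"; }  // virtual: run-time binding
};

struct Derived : Base {
    void greet() { std::cout << "Derived::greet\n"; }  // merely hides Base::greet
    void speak() override { std::cout << "Derived::speak\n"; }
};

int main() {
    Derived d;
    Base* p = &d;
    p->greet();  // prints Base::greet    (bound at compile time via the pointer's type)
    p->speak();  // prints Derived::speak (bound at run time via the vtable)
}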
Yes, you are basically right. Compile-time polymorphism is the use of templates (where the concrete types vary, but are fixed at compile time), whereas run-time polymorphism refers to the use of inheritance and virtual functions (where the concrete types vary, and are fixed only at run time).
The CRTP is suggested in this question about dynamic polymorphism. However, this pattern is allegedly only useful for static polymorphism. The design I am looking at seems to be hampered speedwise by virtual function calls, as hinted at here. A speedup of even 2.5x would be fantastic.
The classes in question are simple and can be coded completely inline, however it is not known until runtime which classes will be used. Furthermore, they may be chained, in any order, heaping performance insult onto injury.
Any suggestions (including how the CRTP can be used in this case) welcome.
Edit: Googling turns up a mention of function templates. These look promising.
Polymorphism literally means multiple (poly) forms (morphs). In statically typed languages (such as C++) there are three types of polymorphism.
Ad hoc polymorphism: This is best seen in C++ as function and method overloading. The same function name will bind to different methods based on matching the compile-time types of the arguments at the call site against the function or method signatures.
Parametric polymorphism: In C++ this is templates and all the fun things you can do with them, such as CRTP, specialization, partial specialization, meta-programming, etc. Again, this sort of polymorphism, where the same template name can do different things based on the template parameters, is a compile-time polymorphism.
Subtype polymorphism: Finally, this is what we think of when we hear the word polymorphism in C++. This is where derived classes override virtual functions to specialize behavior. The same type of pointer to a base class can exhibit different behavior based on the concrete derived type it points to. This is the way to get run-time polymorphism in C++.
If it is not known until runtime which classes will be used, you must use Subtype Polymorphism which will involve virtual function calls.
Virtual method calls have a very small performance overhead over statically bound calls. I'd urge you to look at the answers to this SO question.
I agree with m-sharp that you're not going to avoid runtime polymorphism.
If you value optimisation over elegance, try replacing say
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
        v[i]->trivial_virtual_method();
}
with something like
void invoke_trivial_on_all(const std::vector<Base*>& v)
{
    for (std::size_t i = 0; i < v.size(); i++)
    {
        if (v[i]->tag == FooTag)
            static_cast<Foo*>(v[i])->Foo::trivial_virtual_method();  // qualified call: bypasses the vtable
        else if (v[i]->tag == BarTag)
            static_cast<Bar*>(v[i])->Bar::trivial_virtual_method();
        else ...
    }
}
it's not pretty, certainly not OOP (more a reversion to what you might do in good old 'C') but if the virtual methods are trivial enough you should get a function with no calls (subject to good enough compiler & optimisation options). A variant using dynamic_cast or typeid might be slightly more elegant/safe but beware that those features have their own overhead which is probably comparable to a virtual call anyway.
Where you'll most likely see an improvement from the above is if some classes methods are no-ops, and it saved you from calling them, or if the functions contain common loop-invariant code and the optimiser manages to hoist it out of the loop.
You can go the old C route and use unions, although that too can be messy.
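For completeness, here is a sketch of the same tag-dispatch idea using std::variant (C++17), which is essentially a type-safe union that generates the dispatch for you; Foo and Bar are hypothetical stand-ins for the real classes:

#include <variant>
#include <vector>

struct Foo { void trivial_method() { /* ... */ } };
struct Bar { void trivial_method() { /* ... */ } };

using Object = std::variant<Foo, Bar>;  // values, no pointers, no vtables

void invoke_trivial_on_all(std::vector<Object>& v)
{
    for (auto& obj : v)
        std::visit([](auto& o) { o.trivial_method(); }, obj);  // dispatches on the stored tag
}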