A function to return different derived-type object/reference not a pointer [duplicate] - c++

I did find some questions on Stack Overflow with similar titles, but when I read the answers, they focused on different and quite specific parts of the question (e.g. STL containers).
Could someone please show me why you must use pointers or references to get polymorphism? I can understand that pointers may help, but surely references only differentiate between pass-by-value and pass-by-reference?
Surely as long as you allocate memory on the heap, so that you can have dynamic binding, that would be enough. Obviously not.

"Surely so long as you allocate memory on the heap" - where the memory is allocated has nothing to do with it. It's all about the semantics. Take, for instance:
Derived d;
Base* b = &d;
d is on the stack (automatic memory), but polymorphism will still work on b.
If you don't have a base class pointer or reference to a derived class, polymorphism doesn't work because you no longer have a derived class. Take
Base c = Derived();
The c object isn't a Derived, but a Base, because of slicing. So, technically, polymorphism still works, it's just that you no longer have a Derived object to talk about.
Now take
Base* c = new Derived();
c just points to some place in memory, and you don't really care whether that's actually a Base or a Derived, but the call to a virtual method will be resolved dynamically.
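Here is a minimal, complete sketch of the three cases above (the Base/Derived names and the name() virtual function are assumed for illustration):
#include <iostream>

struct Base    { virtual const char* name() const { return "Base"; } virtual ~Base() = default; };
struct Derived : Base { const char* name() const override { return "Derived"; } };

int main()
{
    Derived d;                      // automatic storage, not the heap
    Base* b = &d;
    std::cout << b->name() << '\n'; // "Derived": dispatched at run time

    Base c = Derived();
    std::cout << c.name() << '\n';  // "Base": c was sliced to a Base

    Base* p = new Derived();
    std::cout << p->name() << '\n'; // "Derived": again resolved dynamically
    delete p;
}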

In C++, an object always has a fixed type and size known at compile time and (if it can and does have its address taken) always exists at a fixed address for the duration of its lifetime. These are features inherited from C which help make both languages suitable for low-level systems programming. (All of this is subject to the as-if rule, though: a conforming compiler is free to do whatever it pleases with code as long as it can be proven to have no detectable effect on any behavior of a conforming program that is guaranteed by the standard.)
A virtual function in C++ is defined (more or less, no need for extreme language lawyering) as executing based on the run-time type of an object; when called directly on an object this will always be the compile-time type of the object, so there is no polymorphism when a virtual function is called this way.
Note that this didn't necessarily have to be the case: object types with virtual functions are usually implemented in C++ with a per-object pointer to a table of virtual functions which is unique to each type. If so inclined, a compiler for some hypothetical variant of C++ could implement assignment on objects (such as Base b; b = Derived()) as copying both the contents of the object and the virtual table pointer along with it, which would easily work if both Base and Derived were the same size. In the case that the two were not the same size, the compiler could even insert code that pauses the program for an arbitrary amount of time in order to rearrange memory in the program and update all possible references to that memory in a way that could be proven to have no detectable effect on the semantics of the program, terminating the program if no such rearrangement could be found: this would be very inefficient, though, and could not be guaranteed to ever halt, obviously not desirable features for an assignment operator to have.
So in lieu of the above, polymorphism in C++ is accomplished by allowing references and pointers to objects to reference and point to objects of their declared compile-time types and any subtypes thereof. When a virtual function is called through a reference or pointer, and the compiler cannot prove that the object referenced or pointed to is of a run-time type with a specific known implementation of that virtual function, the compiler inserts code which looks up the correct virtual function to call at run time. It did not have to be this way, either: references and pointers could have been defined as being non-polymorphic (disallowing them to reference or point to subtypes of their declared types), forcing the programmer to come up with alternative ways of implementing polymorphism. The latter is clearly possible, since it's done all the time in C, but at that point there's not much reason to have a new language at all.
In sum, the semantics of C++ are designed in such a way to allow the high-level abstraction and encapsulation of object-oriented polymorphism while still retaining features (like low-level access and explicit management of memory) which allow it to be suitable for low-level development. You could easily design a language that had some other semantics, but it would not be C++ and would have different benefits and drawbacks.

I found it helpful to understand that a copy constructor is invoked when assigning like this:
class Base { };
class Derived : public Base { };

Derived x;  /* Derived object created */
Base y = x; /* Copy made using Base's copy constructor, so y really is of type Base.
               The copy "slices" off the Derived part. */
Since y is an actual object of class Base, rather than the original one, functions called on it are Base's functions.
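A minimal sketch of that behaviour, with an assumed virtual name() function added so the effect is visible:
#include <iostream>

struct Base    { virtual const char* name() const { return "Base"; } virtual ~Base() = default; };
struct Derived : Base { const char* name() const override { return "Derived"; } };

int main()
{
    Derived x;
    Base y = x;                    // Base's copy constructor runs; y is a Base
    std::cout << y.name() << '\n'; // prints "Base": the Derived part was sliced off
}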

Consider little-endian architectures: values are stored low-order bytes first. So, for any given unsigned integer, the values 0-255 are stored in the first byte of the value. Accessing the low 8 bits of any value simply requires a pointer to its address.
So we could implement uint8 as a class. We know that an instance of uint8 is ... one byte. If we derive from it and produce uint16, uint32, etc., the interface remains the same for purposes of abstraction, but the most important change is the size of the concrete instances of the object.
Of course, if we implemented uint8 and char, the sizes may be the same; likewise sint8.
However, operator= of uint8 and uint16 are going to move different quantities of data.
In order to create a polymorphic function we must either be able to:
a/ receive the argument by value, copying the data into a new location of the correct size and layout,
b/ take a pointer to the object's location, or
c/ take a reference to the object instance.
We can use templates to achieve a/, so polymorphism can work without pointers and references, but if we are not counting templates, let's consider what happens if we implement uint128 and pass it to a function expecting uint8. Answer: 8 bits get copied instead of 128.
So what if we made our polymorphic function accept uint128 and we passed it a uint8? If the uint8 we were copying was unfortunately located, our function would attempt to copy 16 bytes (128 bits), 15 of which lie beyond our one-byte object and possibly outside our accessible memory -> crash.
Consider the following:
#include <cstdint>

struct A { int x; };

A fn(A a)        // takes and returns A by value
{
    return a;
}

struct B : public A {
    uint64_t a, b, c;
    B(int x_, uint64_t a_, uint64_t b_, uint64_t c_)
        : A{x_}, a(a_), b(b_), c(c_) {}
};

B b1{ 10, 1, 2, 3 };
A a2 = fn(b1);
// a2.x == 10, but what happened to a, b and c?
At the time fn was compiled, there was no knowledge of B. However, B is derived from A, so inheritance allows us to call fn with a B. The object it returns, though, is an A comprising a single int.
If we pass an instance of B to this function, what we get back is just a { int x; } with no a, b or c.
This is "slicing".
Even with pointers and references we don't avoid this for free. Consider:
std::vector<A*> vec;
Elements of this vector could be pointers to A or something derived from A. The language generally solves this through the use of the "vtable", a small addition to the object's instance which identifies the type and provides function pointers for virtual functions. You can think of it as something like:
// Illustrative only; real object layouts are implementation-defined.
template<class T>
struct PolymorphicObject {
    typename T::vtable* __vtptr; // points to the vtable shared by all Ts
    T __instance;                // the object's actual data members
};
Rather than every object having its own distinct vtable, classes have them, and object instances merely point to the relevant vtable.
The problem now is not slicing but type correctness:
#include <iostream>
#include <cstring>

struct A { virtual const char* fn() { return "A"; } };
struct B : public A { virtual const char* fn() { return "B"; } };

int main()
{
    A* a = new A();
    B* b = new B();
    std::memcpy(a, b, sizeof(A)); // undefined behaviour: A is not trivially copyable
    std::cout << "sizeof A = " << sizeof(A)
              << " a->fn(): " << a->fn() << '\n';
}
http://ideone.com/G62Cn0
sizeof A = 4 a->fn(): B
What we should have done is use the assignment operator, *a = *b (i.e. a->operator=(*b))
http://ideone.com/Vym3Lp
but again, this is copying an A to an A and so slicing would occur:
#include <iostream>

struct A {
    int i;
    A(int i_) : i(i_) {}
    virtual const char* fn() { return "A"; }
};

struct B : public A {
    int j;
    B(int i_) : A(i_), j(i_ + 10) {}
    virtual const char* fn() { return "B"; }
};

int main()
{
    A* a = new A(1);
    B* b = new B(2);
    *a = *b; // i.e. a->operator=(static_cast<A&>(*b))
    std::cout << "sizeof A = " << sizeof(A)
              << ", a->i = " << a->i << ", a->fn(): " << a->fn() << '\n';
}
http://ideone.com/DHGwun
(i is copied, but B's j is lost)
The conclusion here is that pointers/references are required because the original instance carries type and member information with it that copying can discard.
But also, polymorphism is not perfectly solved within C++: one must be aware of the obligation to provide or block operations that could produce slicing.

You need pointers or references because, for the kind of polymorphism you are interested in (*), you need the dynamic type to be able to differ from the static type; in other words, the true type of the object can differ from its declared type. In C++ that happens only with pointers or references.
(*) Genericity, the kind of polymorphism provided by templates, doesn't need pointers or references.
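A small sketch of that kind of genericity (the Duck/Robot types and speak() are made up for illustration): the call is resolved at compile time, and no common base class, pointer or reference is involved:
#include <iostream>

struct Duck  { const char* speak() const { return "quack"; } };
struct Robot { const char* speak() const { return "beep"; } };

template <class T>
void greet(T t) // accepts any type with a speak() member, by value
{
    std::cout << t.speak() << '\n';
}

int main()
{
    greet(Duck{});  // prints "quack"
    greet(Robot{}); // prints "beep"
}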

When an object is passed by value, it's typically put on the stack. Putting something on the stack requires knowledge of just how big it is. When using polymorphism, you know that the incoming object implements a particular set of features, but you usually have no idea of the size of the object (nor should you, necessarily; that's part of the benefit). Thus, you can't put it on the stack. You do, however, always know the size of a pointer.
Now, not everything goes on the stack, and there are other complicating circumstances. In the case of virtual methods, the object pointed to carries a pointer to its vtable(s), which indicates where the methods are. This allows the compiler to find and call the functions, regardless of what object it's working with.
Another consideration is that very often the object is implemented outside of the calling library, and allocated with a completely different (and possibly incompatible) memory manager. It could also have members that can't be copied, or that would cause problems if they were copied with a different manager. There could be side effects to copying and all sorts of other complications.
The result is that the pointer is the only bit of information about the object that you really properly understand, and it provides enough information to figure out where the other bits you need are.
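To illustrate (with assumed Base/Derived names): a function taking Base* compiles knowing only the size of a pointer, never sizeof(Derived):
#include <iostream>

struct Base    { virtual void hello() const { std::cout << "Base\n"; } virtual ~Base() = default; };
struct Derived : Base { double extra[16]; void hello() const override { std::cout << "Derived\n"; } };

void greet(const Base* b) { b->hello(); } // needs no knowledge of Derived's size

int main()
{
    Derived d;
    greet(&d); // prints "Derived"
    std::cout << sizeof(Base) << " vs " << sizeof(Derived) << '\n';
}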

Related

Check if list of abstract elements contains an element of a certain derived type in C++? [duplicate]

Call derived class method when upcasted [duplicate]

Initialisation of objects with/without vtable

Say I have a pool that allocates some buffer.
int size = 10;
T* buffer = (T*) new char[size * sizeof(T)];
If I now want to assign some data to the buffer, I do the following.
buffer[0] = data;
My question is now: what is the difference in initialization between objects that have a vtable and those that don't?
From what I can see, I can without a problem assign classes to this buffer, and as long as I don't call any virtual functions, function calls work just fine.
e.g.
class A {
public:
    void function() {}
};

A a;
buffer[0] = a;
a.function(); // works
But:
class B {
public:
    void function() {}
    virtual void virtual_function() {}
};

B b;
buffer[0] = b;
b.function();         // does work
b.virtual_function(); // does not work.
Why does the non-virtual function work?
Is it because the function is statically resolved, due to it being a normal class function, and therefore effectively "copied" when we do the assignment?
But then it doesn't make sense that I need to call the constructor on the buffer I created in order to make sure the virtual function works as well: new (&buffer[0]) T(); to call the constructor on the object created.
Both examples first create an appropriately sized buffer and then do an assignment; view this as a pool where I pre-allocate memory depending on the number of objects I want to fit in the pool.
Maybe I just looked at this to long and confused my self :)
Your non-virtual functions "work" (a relative term) because they need no vtable lookup. What happens under the hood is implementation-dependent, but consider what is needed to execute a non-virtual member.
You need a function pointer and a this. The latter is obvious, but where does the fn-ptr come from? It's just a plain function call (expecting a this, then any supplied arguments). There is no polymorphic potential here. No vtable lookup required means the compiler can (and often does) simply take the address of what we think is an object, push it, push any supplied args, and invoke the member function as a plain old call. The compiler knows which function to call and needs no vtable intermediary.
It is not uncommon for this to cause headaches when invoking a non-static, non-virtual member function on an invalid pointer. If the function is virtual, you'll generally (if you're fortunate) blow up on the call. If the function is non-virtual, you'll generally (if you're fortunate) blow up somewhere in the body of the function as it tries to access member data that isn't there (including a vtable-directed execution if your non-virtual function calls a virtual one).
To demonstrate this, consider this (obviously UB) example. Try it.
#include <iostream>

class NullClass
{
public:
    void call_me()
    {
        std::cout << static_cast<void*>(this) << '\n';
        std::cout << "How did I get *here* ???" << '\n';
    }
};

int main()
{
    NullClass *noObject = NULL;
    noObject->call_me(); // undefined behaviour, but typically "runs": no vtable, no member access
}
Output (OSX 10.10.1 x64, clang 3.5)
0x0
How did I get *here* ???
The bottom line is no vtable is bound to the object when you allocate raw memory and assign a pointer via a cast as you are. If you want to do this, you need to construct the object via placement-new. And in so doing, do not forget you must also destroy the object (which has nothing to do with the memory it occupies, as you're managing that separately) by calling its destructor manually.
Finally, the assignment you're invoking does not copy the vtable. Frankly, there is no reason to. The vtable of a properly constructed object is already properly built and referenced by the vtable pointer of the given object instance. Said pointer does not participate in object copying, which has its own set of mandated requirements from the language standard.
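A minimal sketch of that placement-new approach (the B type here is assumed for illustration):
#include <new>    // placement new
#include <cstdio>

struct B {
    virtual void virtual_function() { std::puts("B"); }
    virtual ~B() = default;
};

int main()
{
    alignas(B) char storage[sizeof(B)]; // raw, suitably aligned memory; no B exists yet

    B* p = new (storage) B(); // construct in place: the vtable pointer is set here
    p->virtual_function();    // works: the object was properly constructed

    p->~B(); // manual destruction; the storage itself is reclaimed separately
}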
new char[...]
This does not construct a T object (it does not call the constructor).
The virtual table pointer is set up during construction.
The problem is not specifically with virtual functions but more generally with inheritance. As buffer is an array of A, when you write:
B b;
buffer[0] = b;
you first construct a B object (first line), and then copy it into an A object (second line), so only the A part of b is kept.
So when you later call buffer[0].virtual_function(), you actually apply the virtual function to an A object, not to a B one.
By the way, a direct call to b.virtual_function() should still correctly call the B version, since it is applied to a real B object:
B b;
buffer[0] = b;
b.virtual_function(); // calls B version
If you do not need to take a copy of the object, you could use an array of pointers.
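For example, a sketch of that pointer-based pool (the A/B names are assumed), using std::unique_ptr so destruction is handled automatically:
#include <iostream>
#include <memory>
#include <vector>

struct A { virtual const char* name() const { return "A"; } virtual ~A() = default; };
struct B : A { const char* name() const override { return "B"; } };

int main()
{
    std::vector<std::unique_ptr<A>> pool;
    pool.push_back(std::make_unique<A>());
    pool.push_back(std::make_unique<B>());

    for (const auto& p : pool)
        std::cout << p->name() << '\n'; // prints A, then B: dispatch is dynamic
}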

Why doesn't polymorphism work without pointers/references?

Memory structure of a function-only object?

Let's say we have a class that looks like this:
class A
{
public:
    int FuncA( int x );
    int FuncB( int y );
    int a;
    int b;
};
Now, I know that objects of this class will be laid out in memory with just the two ints. That is, if I make a vector of instances of class A, there will be two ints for one instance, then followed by two ints for the second instance etc. The objects are POD.
BUT let's say the class looks like this:
class B
{
public:
    int FuncA( int x );
    int FuncB( int y );
};
What do objects of this class look like in memory? If I fill a vector with instances of B... what's in the vector? I've been told that non-virtual member functions are in the end compiled as free functions somewhere completely unrelated to the instances of the class in which they're declared (virtual functions are too, but the objects store a vtable with function pointers), and that the access restrictions are merely at the semantic, "human" level. Only the data members of a class (and the vtable etc.) actually make up the memory structure of objects.
So again, what do objects of class B look like in memory? Is it some kind of placeholder value? Something has to be there, I can take the object's address. It has to point to something. Whatever it is, is the compiler allowed to inline/optimize out these objects and treat the method calls as just normal free function calls? If I create a vector of these and call the same method on every object, can the compiler eliminate the vector and replace it with just a bunch of normal calls?
I'm just curious.
All objects in C++ are guaranteed to have a sizeof >= 1 so that each object will have a unique address.
I haven't tried it, but I would guess that in your example, the compiler would allocate but not initialize 1 byte for each function object in the array/vector.
As Ferruccio said, all objects in C++ are guaranteed to have a size of at least 1. Most likely it's exactly 1 byte, possibly padded out to the alignment requirement.
However, when used as a base class, it does not need to fill any space, so that:
class A {} a; // a is 1 byte.
class B {} b; // b is 1 byte.
class C { A a; B b;} c; // c is 2 bytes.
class D : public A, B { } d; // d is 1 byte.
class E : public A, B { char ee; } e; // e is only 1 byte
What do objects of this class look like in memory?
It's entirely up to the compiler. An instance of an empty class must have non-zero size, so that distinct objects have distinct addresses (unless it's instantiated as a base class of another class, in which case it can take up no space at all). Typically, it will consist of a single uninitialised byte.
Whatever it is, is the compiler allowed to inline/optimize out these objects and treat the method calls as just normal free function calls?
Yes; the compiler doesn't have to create the object unless you do something like taking its address. Empty function objects are used quite a lot in the Standard Library, so it's important that they don't introduce any unnecessary overhead.
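A small demonstration of both points (the exact numbers are implementation-dependent, and the Empty/Holder names are made up): an empty class has sizeof >= 1, but as a base class it can occupy no space at all, which is why empty function objects are cheap:
#include <iostream>

struct Empty { int twice(int x) const { return 2 * x; } };
struct Holder : Empty { int value; }; // empty base optimisation: Empty adds no size

int main()
{
    std::cout << sizeof(Empty)  << '\n';    // typically 1
    std::cout << sizeof(Holder) << '\n';    // typically sizeof(int), e.g. 4
    std::cout << Empty{}.twice(21) << '\n'; // 42: the call needs no object state
}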
I performed the following experiment:
#include <iostream>

class B
{
public:
    int FuncA( int x );
    int FuncB( int y );
};

int main()
{
    std::cout << sizeof( B );
}
The result was 1 (VC++ 2010)
It seems to me that the class actually requires no memory whatsoever, but an object cannot be zero-sized, since that would make no semantic sense if, for example, you took its address. This is borne out by Ferruccio's answer.
Everything I say from here on out is implementation dependent - but most implementations will conform.
If the class has any virtual methods, there will be an invisible vtable pointer member. That isn't the case with your example however.
Yes, the compiler will treat a member function call the same as a free function call, again unless it's a virtual function. Even if it is a virtual function, the compiler can bypass the vtable if it knows the concrete type at the time of the call. Each call will still depend on the object, because there's an invisible this parameter with the object's pointer that gets added to the call.
I would think they just look like any other objects in C++:
Each instance of the class occupies space. Because objects in C++ must have a size of at least 1 (so they have unique addresses, as Ferruccio said), objects that don't declare any data members don't receive special treatment.
Non-virtual functions do not occupy any space at all in a class. Rather, they can be thought of as free functions like this (conceptually; this is a keyword, so the code below isn't literally valid C++):
int B_FuncA(B* this, int x);
int B_FuncB(B* this, int y);
If this class can be used by other .cpp files, I think these will have to exist as actual functions rather than being optimised away entirely.
If you just want your functions to be free rather than bound to objects, you could either make them static or use a namespace.
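For instance, a trivial sketch of that suggestion (with made-up names):
#include <iostream>

namespace maths {
    int func_a(int x) { return x + 1; }
    int func_b(int y) { return y * 2; }
}

int main()
{
    std::cout << maths::func_a(1) << ' ' << maths::func_b(2) << '\n'; // prints 2 4
}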
I've been told that non-virtual member functions are in the end compiled as free functions somewhere completely unrelated to the instances of the class in which they're declared (virtual functions are too, but the objects store a vtable with function pointers), and that the access restrictions are merely at the semantic, "human" level. Only the data members of a class (and the vtable etc.) actually make up the memory structure of objects.
Yep, that is usually how it works. It might be worth pointing out the distinction that this isn't specified in the standard, and it's not required -- it just makes sense to implement classes like this in the compiler.
So again, what do objects of class B look like in memory? Is it some kind of placeholder value? Something has to be there; I can take the object's address.
Exactly. :)
The C++ standard requires that objects take up at least one byte, for exactly the reason you say. It must have an address, and if I put these objects into an array, I must be able to increment a pointer in order to get "the next" object, so every object must have a unique address and take up at least 1 byte. (Of course, empty objects don't have to take exactly 1 byte. Some compilers may choose to make them 4 bytes, or any other size, for performance reasons)
A sensible compiler won't even make it a placeholder value though. Why bother writing any specific value into this one byte? We can just let it contain whatever garbage it held when the object was created. It'll never be accessed anyway. A single byte is just allocated, and never read or written to.