I've been wondering why constexpr and virtual are mutually exclusive, and someone commented:
... constexpr is all about execution at compile time; if we're executing the function at compile time, we obviously know the type of data upon which it's acting at compile time as well, so late binding clearly isn't relevant.
However, the dynamic type can differ from the static type even at compile time, and there might be cases where the dynamic type is needed:
class A {
public:
/* virtual */ constexpr int foo() {
return 1;
}
};
class B : public A {
public:
constexpr int foo() {
return 2;
}
};
constexpr int foo(A &a) {
// The static type is fixed here.
// What if we want to call B::foo() ?
return a.foo();
}
int main() {
B b;
constexpr int c = foo(b);
return 0;
}
That is, my question is
what's the (possible) rationale behind the standard prohibiting the combination of the two?
This restriction exists since constexpr was introduced in C++11:
10.1.5 The constexpr specifier [dcl.constexpr]
3 The definition of a constexpr function shall satisfy the following requirements:
(3.1) - it shall not be virtual;
But you're asking about rationale for this restriction, and not about the restriction itself.
The fact is, it may just be an oversight. Since in a constant expression the dynamic type of the object is required to be known, this restriction is unnecessary and artificial: This is what Peter Dimov and Vassil Vassilev assert in P1064R0, where they propose to remove it.
In fact, the wording for it no longer exists in the current draft.
Because virtual calls are normally resolved through a vtable (and RTTI, RunTime Type Information) located in the object's memory layout at runtime, the "true type" behind the object handle you use is not known at compile time.
In your example:
constexpr int foo(A &a) {
// The static type is fixed here.
// What if we want to call B::foo() ?
return a.foo();
}
If the foo member function is virtual there is no way for that foo function to be executed at compile time. And there is no point in marking a function as constexpr if it can never be executed at compile time. Hence there is no point in ever marking something both constexpr and virtual.
Now technically, depending on code complexity, some cases could be resolved by adding a completely new mechanism (other than RTTI) for resolving virtual calls that is compatible with compile-time evaluation, e.g. some form of compiler metadata that, when everything is constexpr, remembers what type of instance you put behind a pointer/reference; but that simply is not a thing right now.
If you want a runtime indirection (which is what virtual does) it doesn't make much sense to also want that to be executed at compile time.
P.S: sorry for the "compile time" and "runtime" spam.
Indeed, it is possible for compilers to know at compile time the dynamic type of an object in a constant expression. Optimizers sometimes use this ability to devirtualize calls at compile time.
But C++ language evolution also takes into account how difficult a feature is to implement in compilers. If that consideration were ignored, the C++ standard would drift too far from the C++ that actually gets compiled to be useful.
Maybe it was judged that keeping track of the dynamic type of every reference during constant expression evaluation would be too expensive to implement. This is where reality hurts!
I can easily say that by declaring a function constexpr, we evaluate it at compile time, which saves time at run time since the result was already produced.
On the other hand, virtual functions need to be resolved at run time. Hence, I guess we cannot get rid of the resolution process; only the result can be fetched quickly thanks to the mechanism of constexpr functions.
Is there any other benefit of constexpr virtual functions?
Well the obvious benefit is that you can even do virtual function calls at compile time now.
struct Base {
constexpr virtual int get() { return 1; }
virtual ~Base() = default;
};
struct Child : Base {
constexpr int get() override { return 2; }
};
constexpr int foo(bool b) {
Base* ptr = b ? new Base() : new Child();
auto res = ptr->get(); // this call is not possible prior to C++20
delete ptr;
return res;
}
constexpr auto BaseVal = foo(true);
constexpr auto ChildVal = foo(false);
You can't use the get function via a base pointer in a constant expression prior to C++20. If you make it constexpr, you can.
Now thinking about what benefit we could get from virtual function calls at compile time: maybe compile times. C++ has basically two mechanisms to deal with polymorphism:
templates, and
virtual functions.
Both solve essentially the same problems but at different stages in your program's life time. Of course it's nice to do as much computation as possible at compile time and therefore have the best performance at run time. However, this is not always a feasible approach because compile time can explode quickly due to how templates work.
Speculations start here. Now what if we broaden the stages at which virtual functions can be called and also allow them to be called at compile time? This would allow us, in some cases, to replace heavily recursive or nested templates with virtual function calls. Assuming that the constexpr interpreter is faster than the compiler recursively resolving templates, you could see some compile time reductions.
Of course this benefit is overshadowed by the performance increases you'll get from concepts and modules.
Another benefit lies in the nature of constexpr in general: UB is forbidden during constant evaluation. This means you could check if your virtual functions are UB free with a few static asserts.
As of C++2a, virtual functions can now be constexpr. But as far as I know, you still cannot call arbitrary function pointers in constexpr context.
Dynamic polymorphism is usually implemented using a vtable, which contains the function pointer to call.
Also, dynamic polymorphism with virtual is useful to call overriding functions of a type you don't know which one it is at compile time. For example:
#include <cstdlib>
#include <iostream>

struct A {
virtual void fn() const {
std::cout << 'A' << std::endl;
}
};
void a_or_b(A const& a) {
// The compiler has no idea `B` exists
// it must be deferred at runtime
a.fn();
}
struct B : A {
void fn() const override {
std::cout << 'B' << std::endl;
}
};
int main() {
// We choose which class is sent; note that a ternary like
// `rand() % 2 ? A{} : B{}` would slice the B down to an A
if (rand() % 2) a_or_b(A{});
else a_or_b(B{});
}
So, considering that function pointers cannot be called at compile time, and that virtual polymorphism is used precisely when the compiler doesn't have enough information to statically infer which function to call, how are constexpr virtual functions possible?
Please keep in mind that constexpr virtual functions are called at compile time only when the dynamic type is already known to the compiler, so obviously they are not resolved through a runtime vtable lookup.
The corresponding proposal provides a similar explanation:
Virtual function calls are currently prohibited in constant
expressions. Since in a constant expression the dynamic type of the
object is required to be known (in order to, for example, diagnose
undefined behavior in casts), the restriction is unnecessary and
artificial. We propose the restriction be removed.
It also has a very nice motivating example.
Can virtual functions like X::f() in the following code
struct X
{
constexpr virtual int f() const
{
return 0;
}
};
be constexpr?
This answer is no longer correct as of C++20.
No. From [dcl.constexpr]/3 (7.1.5, "The constexpr specifier"):
The definition of a constexpr function shall satisfy the following requirements:
— it shall not be virtual
Up through C++17, virtual functions could not be declared constexpr. The general reasoning was that, in constexpr code, everything can happen at compile time. So there really isn't much point in having a function that takes a reference to a base class and calls virtual functions on it; you may as well make it a template function and pass the real type, since you know the real type.
Of course, this thinking doesn't really work as constexpr code becomes more complex, or if you want to share interfaces between compile-time and runtime code. In both cases, losing track of the original type is easy to do. It would also allow std::error_code to be more constexpr-friendly.
Also, the fact that C++20 will allow us to do (limited) dynamic allocation of objects means that it is very easy to lose track of the original type. You can now create a vector<Base*> in constexpr code, insert some Derived class instances into it, and pass that to a constexpr function to operate on.
So C++20 allows virtual functions to be declared constexpr.
Can virtual functions be constexpr?
Yes, but only since C++20 can virtual functions be constexpr.
Referring to this question, stackoverflow.com/q/14188612, are there situations when the compiler folds the method instantiation of two objects?
Let's say we have the following class with a private "stateless" method add, that does not modify the class members:
class Element
{
public:
Element(int a, int b) : a_(a), b_(b)
{
c_ = add(a, b);
}
private:
int add(int a, int b)
{
return a + b;
}
private:
int a_;
int b_;
int c_;
};
int main(void)
{
Element a(1, 2);
Element b(3, 4);
}
Can we sometimes expect that add will actually be compiled as a single, static-like method? Or, to be clearer, for the address of a.add to equal the address of b.add (add stored only once)?
This is merely a question related to understanding compiler optimizations.
The compiler will always generate one binary function for add, independent of how many objects you have. Anything else isn't just pointless, but impossible: the compiler can't possibly know or calculate how many objects will exist at runtime just from the code. While that is possible in your example, more complicated programs will instantiate variables (or not) based on input given at runtime (keyboard, files, ...).
Note that templates can lead to more than one generated function, one for each template argument used in the code (but for that, the code alone is enough to know everything, and it has nothing to do with the object count).
When you define a method inside the class definition it usually means that the method should be inlined into each caller. The compiler can choose not to, but quite often you might find that the method doesn't actually exist in your output program at all (not true in debug builds, of course).
For non-inline member functions, the standard says
There shall be at most one definition of a non-inline member function in a program
There is no entity `a.add` or `b.add` in C++ whose address you can take. The address-of operator needs a qualified-id of the form C::m to get the address of a member function. In your case, the address of add is
auto ptr = &Element::add;
and it is independent of any instance. This yields a pointer to member function, which can only be used to call the function together with an object, e.g. (a.*ptr)(0,1) or (b.*ptr)(2,3), if add were a public method.
Comparing virtual functions in C++ and virtual tables in C, do compilers in general (and for sufficiently large projects) do as good a job at devirtualization?
Naively, it seems like virtual functions in C++ have slightly more semantics, thus may be easier to devirtualize.
Update: Mooing Duck mentioned inlining devirtualized functions. A quick check shows missed optimizations with virtual tables:
#include <stdio.h>

struct vtab {
int (*f)();
};
struct obj {
struct vtab *vtab;
int data;
};
int f()
{
return 5;
}
int main()
{
struct vtab vtab = {f};
struct obj obj = {&vtab, 10};
printf("%d\n", obj.vtab->f());
}
My GCC will not inline f, although it is called directly, i.e., devirtualized. The equivalent in C++,
#include <cstdio>

class A
{
public:
virtual int f() = 0;
};
class B : public A
{
public:
int f() override {return 5;}
};
int main()
{
B b;
printf("%d\n", b.f());
}
does even inline f. So there's a first difference between C and C++, although I don't think that the added semantics in the C++ version are relevant in this case.
Update 2: In order to devirtualize in C, the compiler has to prove that the function pointer in the virtual table has a certain value. In order to devirtualize in C++, the compiler has to prove that the object is an instance of a particular class. It would seem that the proof is harder in the first case. However, virtual tables are typically modified in only very few places, and most importantly: just because it looks harder, doesn't mean that compilers aren't as good in it (for otherwise you might argue that xoring is generally faster than adding two integers).
The difference is that in C++, the compiler can guarantee that the virtual table address never changes. In C then it's just another pointer and you could wreak any kind of havoc with it.
However, virtual tables are typically modified in only very few places
The compiler doesn't know that in C. In C++, it can assume that it never changes.
I tried to summarize at http://hubicka.blogspot.ca/2014/01/devirtualization-in-c-part-2-low-level.html why generic optimizations have a hard time devirtualizing. Your testcase gets inlined for me with GCC 4.8.1, but in a slightly less trivial testcase, where you pass the pointer to your "object" out of main, it will not be.
The reason is that to prove that the virtual table pointer in obj and the virtual table itself did not change, the alias analysis module has to track all possible places that can point to them. In non-trivial code, where you pass things outside of the current compilation unit, this is often a lost game.
C++ gives you more information on when the type of an object may change and when it is known. GCC makes use of it, and will make a lot more use of it in the next release. (I will write on that soon, too.)
Yes, if it is possible for the compiler to deduce the exact type of a virtualized type, it can "devirtualize" (or even inline!) the call. A compiler can only do this if it can guarantee that no matter what, this is the function needed.
The major concern is basically threading. In the C++ example, the guarantees hold even in a threaded environment. In C, that can't be guaranteed, because the object could be grabbed by another thread/process, and overwritten (deliberately or otherwise), so the function is never "devirtualized" or called directly. In C the lookup will always be there.
#include <iostream>

struct A {
virtual void func() { std::cout << "A"; }
};
struct B : A {
void func() override { std::cout << "B"; }
};
int main() {
B b;
b.func(); // this will inline in optimized builds
}
It depends on what you are comparing compiler inlining to. Compared to link time or profile guided or just in time optimizations, compilers have less information to use. With less information, the compile time optimizations will be more conservative (and do less inlining overall).
A compiler will still generally be pretty decent at inlining virtual functions as it is equivalent to inlining function pointer calls (say, when you pass a free function to an STL algorithm function like sort or for_each).