Discussion
I'm aware that all the implementations (i.e., C++ compilers) that I know of implement the dynamic dispatch mechanism via virtual dispatch tables and virtual table pointers (i.e., the well-known vtable and vptr).
However, on reading the C++ standard I found that it does not mandate exactly how dynamic dispatch must be implemented. This means that a vendor could use an alternative method, provided its behaviour complies with what the standard demands of dynamic dispatch.
Questions
Q1. Are there any other valid methods, besides vtables and vptrs, with which dynamic dispatch could be implemented?
Q2. If the answer to Q1 is yes: what are the reasons, if any, that made implementers decide to use vtables and vptrs to implement dynamic dispatch instead of some other valid method?
Q1: Dynamic compilers can implement virtual functions faster than using a vtable. Say a method is virtual, but all objects created so far use implementation X. A dynamic compiler will produce a direct call to implementation X or even inline it. When an object using a different implementation is created, all the code that might now be wrong will be recompiled.
Even if there are two implementations, the dynamic compiler may produce code like "if (object uses implementation X) { inlined_code_for_x(); } else { recompile_this_code(); }".
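A rough sketch in C++ source of the kind of guarded call such a compiler might emit (the class names and the typeid guard are my own illustration; a real dynamic compiler works on generated machine code, not on source like this):

#include <cstdio>
#include <typeinfo>

struct Animal {
    virtual void speak() const { std::printf("...\n"); }
    virtual ~Animal() = default;
};

struct Dog : Animal {
    void speak() const override { std::printf("woof\n"); }
};

// Stand-in for "recompile and take the slow path": falls back to ordinary virtual dispatch.
void slow_path(const Animal& a) { a.speak(); }

void guarded_call(const Animal& a) {
    // Fast path: if the object uses the only implementation seen so far (Dog),
    // the "inlined" body runs directly; otherwise fall back.
    if (typeid(a) == typeid(Dog)) {
        std::printf("woof\n");      // inlined body of Dog::speak
    } else {
        slow_path(a);               // stand-in for recompilation / generic dispatch
    }
}

int main() {
    Dog d;
    Animal a;
    guarded_call(d);   // takes the fast, devirtualized path
    guarded_call(a);   // takes the slow path
}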
Q2: A potential reason: if you have a base class with many virtual functions and a huge vtable, and many derived classes which rarely override any of those virtual functions, then having a separate, nearly identical vtable for each class is inefficient. Inefficient both from a memory point of view and potentially from an execution point of view, because certain processor optimisations don't work when pointers to the same function are stored in different memory locations.
Related
For those compiler implementations that use vtables: are there any cases where virtual function tables are changed at run time? Or are vtables filled only at compile time, with nothing modifying them at run time?
I am not aware of any C++ ABI with a polymorphism implementation that employs virtual tables changing at runtime.
It wouldn't be very useful, anyway, since virtual tables typically describe a property of the code (the relationship of member functions to each other with respect to their position in the class hierarchy), and C++ code doesn't change at runtime.
And because it wouldn't be useful, it would be wasteful.
The short answer is no.
A slightly longer (and probably implementation-specific) answer is that an object's pointer to the actual vtable changes during the execution of the constructors and destructors of a derived polymorphic class, so that methods overridden in a derived class are not executed by the base class's constructor/destructor while the derived part is not yet constructed or has already been destroyed.
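A minimal example of that behaviour (the class names are mine): while Base's constructor runs, the object's vptr still refers to Base's vtable, so the virtual call resolves to Base::name() even though a Derived is being constructed.

#include <cstdio>

struct Base {
    Base() { std::printf("in Base(): %s\n", name()); }  // prints "in Base(): Base"
    virtual const char* name() const { return "Base"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    const char* name() const override { return "Derived"; }
};

int main() {
    Derived d;                              // constructor prints "in Base(): Base"
    std::printf("after: %s\n", d.name());   // prints "after: Derived"
}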
If you want objects to change class during run time then you have a number of options:
Objective-C(++)
hand-code your own dispatch mechanism
Python/JavaScript et al.
(the best option) reconsider your design.
I hope this question is not too vague, but coming from Java, I cannot think of any reason why I would use non-virtual functions in C++.
Is there a nice example which demonstrates the benefit of non-virtual functions in C++?
Virtual functions have a runtime cost associated with them. They are dispatched at runtime and thus are slower to call. They are similar to calling regular functions through a function pointer, where the address is determined at runtime according to the actual type of the object. This incurs overhead.
One of the C++ design decisions has always been that you should not pay for things you don't need. In contrast, Java does not concern itself much with this kind of low-level optimization.
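A small sketch of that analogy, with made-up class names: the virtual call's target depends on the dynamic type of the object, much like an indirect call through a function pointer, while the non-virtual call is resolved at compile time and can be inlined.

#include <cstdio>

struct Shape {
    virtual double area() const { return 0.0; }    // dispatched at runtime via the vptr
    double perimeter_hint() const { return 1.0; }   // non-virtual: resolved at compile time
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r = 2.0;
    double area() const override { return 3.14159 * r * r; }
};

int main() {
    Circle c;
    Shape* p = &c;
    // Virtual call: the target depends on the dynamic type of *p,
    // much like calling through a function pointer chosen at runtime.
    std::printf("%f\n", p->area());
    // Non-virtual call: the target is known at compile time and can be inlined.
    std::printf("%f\n", p->perimeter_hint());
}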
It is true that calling a virtual function can be slower, but not nearly as slow as most C++ programmers think.
Modern CPUs have gotten pretty good at branch prediction. If, every time you execute a particular call to a virtual function, you are actually calling the same implementation, the CPU will figure that out and start "guessing" (speculatively executing) the call before it even computes the address. This can often hide the cost of the virtual call completely, making it exactly as fast as a non-virtual call. (If you doubt this, try it for yourself on a current-generation processor.)
If you are not calling the same implementation, then you are actually relying on the virtual dispatch, so you could not directly replace it with a non-virtual function anyway.
The only common exception to this is inlining, where the compiler can perform constant propagation, common subexpression elimination, etc. across the caller and callee. Obviously it cannot do this if it does not know the destination of the call at compile time.
But as a rule of thumb, your instinct that you always want to use virtual functions is not all that bad. The times when the performance difference is noticeable are rare.
Very few member functions in the standard library are virtual.
Offhand I can only remember the destructors and the what() function of the standard exceptions.
As of 2012 the only good reason to have a virtual member function is to support overriding of that member function in a derived class, i.e. a customization point, and that can often be achieved in other ways (e.g. parameterization, templating).
However, I can remember, at one time some 15 years ago, being very frustrated with the design of Microsoft's MFC class framework. I wanted every member function to be virtual, so that I could override the functionality and more easily debug things, as an alternative to the non-existent or very low-quality documentation. Thus, I argued that virtual should be the default in other software as well.
I have since understood that MFC was not, and is not, representative of C++ software in general, so the MFC-specific reasons do not apply in general. :-)
The efficiency cost of a virtual function is, like, virtually non-existent. :-) See for example the international standardization committee's Technical Report on C++ Performance. However, there is a real cost in providing this freedom to derived classes, because freedom implies responsibility: any derived class then has to ensure that overriding the member function respects the contract of the base class.
Well, one of the principles on which the C++ language is based is that you should not pay for something you don't use.
A virtual function call is more expensive than a non-virtual function call, since in a typical implementation it goes through two (or three) additional levels of indirection. A virtual call generally cannot be inlined, meaning that the cost can grow even higher due to the fact that we have to call a full-fledged function.
Adding virtual functions to a class makes it polymorphic, thus creating some invisible internal structures inside objects of that class (the vptr). These structures incur additional overhead and preclude low-level, memcpy-style processing of class objects.
Finally, separating functions into virtual and non-virtual ones (i.e., into overridable and non-overridable ones) is a matter of design. It simply makes no sense to unconditionally make every function in a class overridable in derived classes.
C++ is meant both to be fast like C and to support OO and generic programming (templates). To achieve both goals, C++ member functions by default cannot be overridden polymorphically; you must mark them as virtual, in which case the virtual table comes into play. So you can build classes that don't involve virtual functions when they are not needed.
Although not as efficient as non-virtual calls, virtual function calls using the virtual table are very fast. You might notice the difference only within tight loops that do nothing but call a member function. So the Java way - all member functions are "virtual" - is indeed more practical IMO.
If I call an inherited method on a derived class instance, does the code require the use of a vtable? Or can the method call be 'static' (not sure if that is the correct usage of the word)?
For example:
Derived derived_instance;
derived_instance.virtual_method_from_base_class();
I am using MSVC, but I guess that most major compilers implement this roughly the same way.
I am (now) aware that the behavior is implementation-specific; I'm curious about the implementation.
EDIT:
I should probably add that the reason we are interested is that the function is called many times and is very simple, and I am not allowed to edit the function itself in any way. I was just wondering if it would be possible, and whether there would be any benefit to eliminating the dynamic dispatch anyway.
I have profiled and counted functions, etc., before you all get on my back about optimization.
Both of your examples would require that Derived has a constructor accepting a Base, and would create a new instance of Derived. Assuming that you have such a constructor and that this is what you want, the compiler would "probably" be able to determine the dynamic object type statically and avoid the virtual call (if it decides to make such an optimization).
Note that the behavior is not undefined, it's just implementation-specific. There's a huge difference between the two.
If you want to avoid creating a new instance (or, more likely, if creating one is not what you want), then you could use a reference cast, static_cast<Derived&>(base_instance).virtual_method_from_base_class(); but while that avoids creating a new object, it won't let you avoid the virtual call.
If you really want the call resolved at compile time, what you're looking for is most likely the CRTP (http://en.wikipedia.org/wiki/Curiously_recurring_template_pattern), which allows you to fix the types at compile time, avoiding virtual calls.
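A minimal CRTP sketch with invented names, just to show that the call resolves statically and involves no vtable:

#include <cstdio>

// Static polymorphism via CRTP: the base template knows the derived type at compile time.
template <typename Derived>
struct BaseCRTP {
    void run() {
        // Resolved statically; no vptr, no vtable, fully inlinable.
        static_cast<Derived*>(this)->step();
    }
};

struct Worker : BaseCRTP<Worker> {
    void step() { std::printf("Worker::step\n"); }
};

int main() {
    Worker w;
    w.run();   // calls Worker::step with no dynamic dispatch
}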
EDIT for updated question: in the case you've shown now, I would expect many compilers to be capable of statically determining the dynamic type and avoiding the virtual call.
Vtables only come into play when you use pointers or references. For calls on an object of known type, it's always that class's method which is invoked.
You can simply qualify the call, then there is no virtual function dispatch:
Derived derived_instance;
derived_instance.Derived::virtual_method_from_base_class();
However, I suspect that that would be premature optimization.
Do measure.
There is one slightly related question, but the topic there is entirely different.
Now, one concept is about function resolution and the other is about class resolution, right? I am wondering how this is possible if they both use the same vtable (at least in gcc 4.5). Is this compiler-dependent terminology?
I know that it might appear to be a basic, silly question, but I had never thought about it.
A good reference for this sort of thing is the Itanium ABI; see e.g. http://mentorembedded.github.com/cxx-abi/abi.html#vtable. Despite the name, it's a widely used ABI for C++, and it describes a good, working implementation (although obviously other implementations are possible).
You can solve both problems (virtual function calls and virtual inheritance) if you know the dynamic type of an object given just a pointer to it. Every (polymorphic) object in C++ has precisely one dynamic type, which is determined at the moment when it's constructed. E.g. when you write new Foo, that object has the dynamic type Foo even if you store just a void*.
A vtable is a mechanism to store information about the dynamic type of an object in such a way that it can be retrieved via a base pointer. You can store quite a few things in a vtable: function pointers, cast offsets, even pointers to std::type_info objects.
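Purely as a conceptual sketch (this is not the exact Itanium ABI layout, and user code never writes such a table), one can picture the kind of data that sits side by side in a vtable:

#include <cstddef>
#include <typeinfo>

// Conceptual picture only: real compilers generate this data themselves,
// and the precise layout is defined by the ABI, not by user code.
struct ConceptualVTable {
    std::ptrdiff_t offset_to_top;    // cast offset back to the complete object
    const std::type_info* type;      // used by typeid / dynamic_cast
    void (*virtual_slots[4])();      // one entry per virtual function
};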
Is using a vtable the only way to implement the virtual member function mechanism in C++? What other ways exist?
Technically, all that's required for dynamic dispatch is the ability to identify the dynamic type of an object, given a pointer to it. Thus, any sort of hidden (or not-so-hidden) typeid field would work.
Dynamic dispatch would use that typeid to find the associated functions. The association could be a hash table, an array where the typeid is the index, or any other suitable relationship. The vptr just happens to be the way to achieve this in the smallest number of steps.
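Here is a toy sketch of that idea, with the hidden typeid field and the dispatch table written out by hand (names are invented; a compiler would generate both):

#include <cstdio>

// Hand-rolled dynamic dispatch: each object carries a small type id,
// and the id indexes into a per-function array of implementations.
enum TypeId { kCircle = 0, kSquare = 1 };

struct ShapeNV {           // "NV" = no C++ virtual functions involved
    TypeId id;             // plays the role of the hidden vptr/typeid field
    double size;
};

double area_circle(const ShapeNV& s) { return 3.14159 * s.size * s.size; }
double area_square(const ShapeNV& s) { return s.size * s.size; }

// The "vtable" for the area operation, indexed by TypeId.
double (*const area_table[])(const ShapeNV&) = { area_circle, area_square };

double area(const ShapeNV& s) { return area_table[s.id](s); }

int main() {
    ShapeNV c{kCircle, 2.0}, q{kSquare, 3.0};
    std::printf("%f %f\n", area(c), area(q));
}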
Another known mechanism is type dispatch functions. Effectively, you replace the vtable pointer with a typeid (a small enum). The (dynamic) linker collects all overrides of a given virtual function and wraps them in one big switch statement on the typeid field.
The theoretical justification is that this replaces an indirect jump (hard to predict) with lots of predictable jumps. With some smarts in choosing the enum values, the switch statement can be fairly efficient too (i.e., better than linear).
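A hand-written sketch of what such a generated dispatch function might look like (names invented): a switch over the type id instead of an indirect jump through a vtable:

#include <cstdio>

enum class Kind { Base = 0, DerivedA = 1, DerivedB = 2 };

struct Obj {
    Kind kind;     // replaces the vtable pointer
    int value;
};

// The (dynamic) linker would collect every override of f() and emit one
// dispatch function: a switch over the type id instead of an indirect jump.
void dispatch_f(const Obj& o) {
    switch (o.kind) {
        case Kind::Base:     std::printf("Base::f %d\n", o.value); break;
        case Kind::DerivedA: std::printf("DerivedA::f %d\n", o.value); break;
        case Kind::DerivedB: std::printf("DerivedB::f %d\n", o.value); break;
    }
}

int main() {
    Obj a{Kind::DerivedA, 1};
    dispatch_f(a);
}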
Another possible implementation would be to store the pointers to the virtual functions directly in the objects. Of course, this solution is never used in practice (at least in no language I'm aware of), since it would lead to a dramatic increase in memory footprint. However, it is interesting to note that code using this implementation could actually run faster, since it removes a level of indirection by eliminating the need for the vptr.
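A hand-written sketch of that layout (names invented): every object carries its own function pointers, so a call needs only one indirection, at the price of a larger object:

#include <cstdio>

// Each object stores its own "virtual" function pointers instead of a single vptr.
struct Widget {
    void (*draw)(const Widget&);     // per-object slot, set at construction
    int x;
};

void draw_button(const Widget& w) { std::printf("button at %d\n", w.x); }
void draw_label(const Widget& w)  { std::printf("label at %d\n", w.x); }

int main() {
    Widget b{draw_button, 1};
    Widget l{draw_label, 2};
    b.draw(b);   // one indirection: no vptr -> vtable -> slot chain
    l.draw(l);
}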
I'm not aware of any compiler which implements virtual functions without using the vtable approach.
Theoretically, however, one could create an internal map of object pointers and a table of pointers to virtual functions, something like map<objPtr, functionTable*>, to implement dynamic polymorphism through virtual functions. But then dynamic dispatch would be slower than with the vtable approach.
It seems the vtable approach is probably the fastest mechanism for implementing dynamic polymorphism. Maybe that is why all compilers employ it!