Vtable modifications at run time - C++

For those compiler implementations that use vtables: are there any cases when virtual functions tables are changed at run time? Or are vtables only filled at compile time, and no actions are performed to modify them at run time?

I am not aware of any C++ ABI with a polymorphism implementation that employs virtual tables changing at runtime.
It wouldn't be very useful, anyway, since virtual tables typically describe a property of the code (the relationship of member functions to each other w.r.t. position in class hierarchy) and C++ code doesn't change at runtime.
And because it wouldn't be useful, it would be wasteful.

The short answer is no.
A slightly longer (and probably implementation-specific) answer is that an object's pointer to its vtable changes during the execution of the constructors and destructors of a derived polymorphic class, so that virtual calls made from a base class constructor or destructor do not reach overrides in a derived class that has not yet been constructed or has already been destroyed.
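As a hedged illustration of that implementation detail (the class names here are made up): a virtual call made while the base subobject is being constructed dispatches to the base version, because the vtable pointer is only switched to the derived class's table once the derived constructor begins.

#include <iostream>

struct Base {
    Base() { whoami(); }               // the vptr still points at Base's vtable here
    virtual ~Base() = default;
    virtual void whoami() const { std::cout << "Base\n"; }
};

struct Derived : Base {
    void whoami() const override { std::cout << "Derived\n"; }
};

int main() {
    Derived d;      // prints "Base": during Base's constructor the object is not yet a Derived
    d.whoami();     // prints "Derived": the vptr now points at Derived's vtable
}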
If you want objects to change class during run time then you have a number of options:
objective-c(++)
hand-code your own dispatch mechanism (see the sketch after this list)
Python/JavaScript et al.
(the best option) reconsider your design.
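For the hand-coded dispatch option, here is a minimal sketch (all names are made up for the example): the object carries a pointer to a table of function pointers that can be repointed at run time, which is exactly the kind of mutation C++ vtables themselves do not undergo.

#include <cstdio>

struct Monster;   // forward declaration

struct StateVTable {                      // a hand-rolled "vtable"
    void (*update)(Monster&);
};

struct Monster {
    const StateVTable* behavior;          // can be repointed at run time
};

void update_calm(Monster&)  { std::puts("wandering"); }
void update_angry(Monster&) { std::puts("attacking"); }

const StateVTable calm_table  = { &update_calm };
const StateVTable angry_table = { &update_angry };

int main() {
    Monster m{ &calm_table };
    m.behavior->update(m);       // prints "wandering"
    m.behavior = &angry_table;   // the object effectively "changes class" at run time
    m.behavior->update(m);       // prints "attacking"
}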

Related

Motivation for not using virtual functions

I hope this question is not too vague, but coming from Java, I cannot think of any reason why I would use non-virtual functions in C++.
Is there a nice example which demonstrates the benefit of non-virtual functions in C++?
Virtual functions have a runtime cost associated with them. They are dispatched at runtime and thus are slower to call. They are similar to calling regular functions through a function pointer, where the address is determined at runtime according to the actual type of the object. This incurs overhead.
One of the C++ design decisions has always been that you should not pay for things you don't need. In contrast, Java does not concern itself much with this kind of low-level optimization.
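To make the function-pointer analogy above concrete, here is a small hedged sketch (names invented): the virtual call and the call through the pointer are both resolved at run time, while the plain call is bound at compile time and can be inlined.

struct Widget {
    void draw_plain() { /* ... */ }             // bound at compile time; inlinable
    virtual ~Widget() = default;
    virtual void draw_virtual() { /* ... */ }   // bound at run time via the vtable
};

void render(Widget& w, void (*hook)(Widget&)) {
    w.draw_plain();     // direct call
    hook(w);            // indirect call through a function pointer
    w.draw_virtual();   // virtual call: roughly comparable in cost to the line above
}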
It is true that calling a virtual function can be slower, but not nearly as slow as most C++ programmers think.
Modern CPUs have gotten pretty good at branch prediction. If, every time you execute a particular call to a virtual function, you are actually calling the same implementation, the CPU will figure that out and start "guessing" (speculatively executing) the call before it even computes the address. This can often hide the cost of the virtual call completely, making it exactly as fast as a non-virtual call. (If you doubt this, try it for yourself on a current-generation processor.)
If you are not calling the same implementation, then you are actually relying on the virtual dispatch, so you could not directly replace it with a non-virtual function anyway.
The only common exception to this is inlined functions, where the compiler can perform constant propagation, CSE, etc. between the caller and callee. Obviously it cannot do this if it does not know the destination of the call at compile time.
But as a rule of thumb, your instinct that you always want to use virtual functions is not all that bad. The times when the performance difference is noticeable are rare.
Very few member functions in the standard library are virtual.
Offhand I can only remember the destructor and the what() function of standard exceptions.
As of 2012 the only good reason to have a virtual member function is to support overriding of that member function in a derived class, i.e. a customization point, and that can often be achieved in other ways (e.g. parameterization, templating).
However, I can remember at one time, like 15 years ago, being very frustrated with the design of Microsoft's MFC class framework. I wanted every member function to be virtual so as to be able to override the functionality and in order to be able to more easily debug things, as an alternative to non-existent or very low-quality documentation. Thus, I argued that virtual should be the default, also in other software.
I have since understood that MFC was not representative and is not representative of C++ software in general, so the MFC-specific reasons do not apply in general. :-)
The efficiency cost of a virtual function is, like, virtually non-existent. :-) See for example the international standardization committee's Technical Report on C++ Performance. However, there is a real cost in providing this freedom for derived classes, because freedom implies responsibility: any derived class then has to ensure that overriding the member function respects the contract of the base class.
Well, one of the principles on which the C++ language is based is that you should not pay for something you don't use.
A virtual function call is more expensive than a non-virtual function call, since in a typical implementation it comes through two (or three) additional levels of indirection. A virtual call cannot be inlined, meaning that the expense can grow even higher due to the fact that we have to call a full-fledged function.
Adding virtual functions to a class makes it polymorphic, thus creating some invisible internal structures inside objects of that class (typically a vtable pointer). These structures incur additional overhead and preclude treating objects of the class as plain, low-level data.
Finally, separating functions into virtual and non-virtual ones (i.e. into overridable and non-overridable ones) is a matter of your design. It simply makes no sense whatsoever to unconditionally make all functions in your class overridable in derived classes.
C++ is meant to be both fast like C and to support OO and generic programming (templates). To achieve both goals, C++ member functions are not overridable by default; only when you mark them as virtual does the virtual table get into business. So you can build classes that don't involve virtual functions when they are not needed.
Although not as efficient as non-virtual calls, virtual function calls using the virtual table are very fast. You might notice the difference only within tight loops that do nothing but call a member function. So the Java way - all members are "virtual" - is indeed more practical IMO.

Do derived objects cast from base need to use a vtable

If I call an inherited method on a derived class instance, does the code require the use of a vtable? Or can the method call be 'static' (not sure if that is the correct usage of the word)?
For example:
Derived derived_instance;
derived_instance.virtual_method_from_base_class();
I am using MSVC, but I guess that most major compilers implement this roughly the same way.
I am (now) aware that the behavior is implementation-specific; I'm curious about the implementation.
EDIT:
I should probably add that the reason we are interested is that the function is called a lot of times and is very simple, and I am not allowed to edit the function itself in any way; I was just wondering if it would be possible, and whether there would be any benefit, to eliminate the dynamic dispatch anyway.
I have profiled and counted function calls etc. before you all get on my back about optimization.
Both of your examples will require that Derived has a constructor accepting a Base, and will create a new instance of Derived. Assuming that you have such a constructor and that this is what you want, then the compiler would "probably" be able to determine the dynamic object type statically and avoid the virtual call (if it decides to make such optimizations).
Note that the behavior is not undefined, it's just implementation-specific. There's a huge difference between the two.
If you want to avoid creating a new instance (or, more likely, that's not what you want) then you could use a reference cast static_cast<Derived&>(base_instance).virtual_method_from_base_class(); but while that avoids creating a new object it won't allow you to avoid the virtual call.
If you really want everything resolved at compile time, what you're looking for is most likely the CRTP (http://en.wikipedia.org/wiki/Curiously_recurring_template_pattern), which allows you to pin everything down at compile time, avoiding virtual calls.
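A minimal CRTP sketch (the class names here are invented for illustration), showing how the base template knows the derived type at compile time so no virtual dispatch is needed:

template <typename DerivedT>
struct CrtpBase {
    void interface() {
        // The derived type is known statically, so this call is resolved at compile
        // time (and can be inlined); there is no vtable involved.
        static_cast<DerivedT*>(this)->implementation();
    }
};

struct Widget : CrtpBase<Widget> {
    void implementation() { /* the customized behaviour */ }
};

int main() {
    Widget w;
    w.interface();   // calls Widget::implementation with no virtual call
}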
EDIT for updated question: In the case you've shown now, I would suspect many compilers capable of statically determining the dynamic type and avoiding the virtual call.
Vtables only come into play when you use pointers or references. For objects, it's always the specific class's method which is invoked.
You can simply qualify the call, then there is no virtual function dispatch:
Derived derived_instance;
derived_instance.Derived::virtual_method_from_base_class();
However, I suspect that that would be premature optimization.
Do measure.

performance implications of deep inheritance tree in c++

Is there any efficiency disadvantage associated with deep inheritance trees (in C++), i.e., a large set of classes A, B, C, and so on, such that B extends A, C extends B, and so on? One efficiency implication that I can think of is that when we instantiate the bottom-most class, say C, then the constructors of B and A are also called, which will have performance implications.
Let's enumerate the operations we should consider:
Construction/destruction
Each constructor/destructor will call its base class equivalents. However, as James McNellis pointed out, you were obviously going to do that work anyway. You didn't derive from A just because it was there. So the work is going to get done one way or another.
Yes, it will involve a few more function calls. But function call overhead will be nothing compared to the actual work any significantly deep class hierarchy will have to actually do. If you're at the point where function call overhead is actually important for performance, I would strongly suggest that calling constructors at all is probably not what you want to be doing in that code.
Object Size
In general, the overhead for merely being a derived class is nothing. The overhead for virtual members is one vtable pointer per object, with a little more for virtual inheritance.
Member Function Calls, Static
By this, I mean calling non-virtual member functions, or calling virtual member functions with class names (ClassName::FunctionName syntax). Both of these allow the compiler to know at compile time which function to call.
The performance of this is invariant with the size of the hierarchy, since it's compile-time determined.
Member Function Calls, Dynamic
This is calling virtual functions with the full and complete expectation of runtime calls.
Under most sane C++ implementations, this is invariant with the size of the object hierarchy. Most implementations use a v-table for each class. Each object has a v-table pointer as a member. For any particular dynamic call, the compiler accesses the v-table pointer, picks out the method, and calls it. Since the v-table is the same for each class, it won't be any slower for a class that has a deep hierarchy than one with a shallow one.
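As a hedged illustration (class names invented), the call site below compiles to the same load/load/indirect-call sequence on typical implementations whether the dynamic type is one or many levels below the static type:

struct A { virtual ~A() = default; virtual int f() const { return 0; } };
struct B : A { int f() const override { return 1; } };
struct C : B { int f() const override { return 2; } };
struct D : C { int f() const override { return 3; } };

int call(const A* p) {
    // Typical implementation: load p's vtable pointer, load the slot assigned to f,
    // make one indirect call. The depth of the hierarchy below A does not change this.
    return p->f();
}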
Virtual inheritance plays a bit with this.
Pointer Casts, Static
This refers to static_cast or any equivalent operation. This means the implicit cast from a derived class to a base class, the explicit use of static_cast or C-style casts, etc.
Note that this technically includes reference casting.
The performance of static casts between classes (up or down) is invariant with the size of the hierarchy. Any pointer offsets will be compile-time generated. This should be true for virtual inheritance as well as non-virtual inheritance, but I'm not 100% certain of that.
Pointer Casts, Dynamic
This obviously refers to the explicit use of dynamic_cast. This is typically used when casting from a base class to a derived one.
The performance of dynamic_cast will likely change for a large hierarchy. But sane implementations should only check the classes between the current class and the requested one. So it's simply linear in the number of classes between the two, not linear in the number of classes in the hierarchy.
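For example, a downcast like this (hypothetical classes) is checked at run time, and sane implementations only walk the chain between the two classes involved:

struct Base    { virtual ~Base() = default; };
struct Derived : Base { void derived_only() {} };

void use(Base* b) {
    // Run-time check: cost is related to the distance between Base and Derived,
    // not to the total number of classes in the hierarchy.
    if (Derived* d = dynamic_cast<Derived*>(b))
        d->derived_only();
}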
Typeid
This means the use of the typeid operator to fetch the std::type_info object associated with an object.
The performance of this will be invariant with the size of the hierarchy. If the class is a virtual one (has virtual functions or virtual base classes), then it will simply pull it out of the vtable. If it's not virtual, then it's compile-time defined.
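A small sketch of both cases (names invented): for a polymorphic type the std::type_info is fetched through the vtable at run time, while for a non-polymorphic type it is determined at compile time.

#include <iostream>
#include <typeinfo>

struct Base    { virtual ~Base() = default; };
struct Derived : Base {};

int main() {
    Derived d;
    Base&   b = d;
    std::cout << typeid(b).name()   << "\n";  // polymorphic: looked up via the vtable, names Derived
    std::cout << typeid(int).name() << "\n";  // non-polymorphic: resolved at compile time
}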
Conclusion
In short, most operations are invariant with the size of the hierarchy. But even in the cases where it has an impact, it's not a problem.
I'd be more concerned with some design ethic where you felt the need to build such a hierarchy. In my experience, hierarchies like this come from two lines of design.
The Java/C# ideal of having everything derived from a common base class. This is a horrible idea in C++ and should never be used. Each object should derive from what it needs to, and only that. C++ was built on the "pay for what you use" principle, and deriving from a common base works against that. In general, anything you could do with such a common base class is either something you shouldn't be doing period, or something that could be done with function overloading (using operator<< to convert to strings, for example).
Misuse of inheritance. Using inheritance when you should be using containment. Inheritance creates an "is a" relationship between objects. More often than not, "has a" relationships (one object having another as a member) are far more useful and flexible. They make it easier to hide data, and you don't allow the user to pretend one class is another.
Make sure that your design does not fall afoul of one of these principles.
There will be some impact, but not as bad as the programmer-performance implications.
As #Nicol points out, it may be doing a number of things.
If those are things that you require to be done, regardless of design, because they are all precisely necessary steps in getting the program from the call to main to exit within the fewest possible cycles, then your design is simply a matter of coding clarity (or maybe lack of it :).
In my experience performance tuning, as in this example, what I often see as a huge source of wasted time is over-design of data (i.e. class) structures.
Weirdly enough, the justification for the data structures is often (guess what?) - performance!
In my experience, the thing to do with a data structure is to keep it as simple as possible and as normalized as possible. If it is completely normalized, then any single change to it can't make it inconsistent. You can't always achieve complete normalization, in which case you have to deal with the possibility that the data can be temporarily inconsistent.
This is why people write notification handlers, and this is encouraged in OOP.
The idea is, if you change something in one place, that can trigger notifications that "automatically" propagate the change to other places, trying to maintain consistency.
The problem with notifications is they can run away. Simply changing some boolean property from true to false can cause a fire-storm of notifications ripping through the data structure in ways no one programmer understands, updating databases, painting windows, zipping files, etc. etc. I often find this is where most clock cycles go.
I think it is simpler and far more efficient to temporarily tolerate inconsistency, and periodically repair it with some kind of sweeping process.
Another way data structures bring huge inefficiency is when the data is effectively being interpreted by some process to produce some output.
This is very common in graphics.
If the data changes at a very slow rate, it may make sense to "compile" it rather than "interpret" it.
In other words, translate it into a simpler instruction set, or source code which is compiled "on the fly", which can then execute far more quickly to produce the desired output.

When is it appropriate to use virtual methods?

I understand that virtual methods allow a derived class to override methods inherited from a base class. When is it appropriate/inappropriate to use virtual methods? It's not always known whether or not a class will be subclassed. Should everything be made virtual, just "in case"? Or will that cause significant overhead?
First a slightly pedantic remark - in C++ standardese we call them member functions, not methods, though the two terms are equivalent.
I see two reasons NOT to make a member function virtual.
"YAGNI" - "You Ain't Gonna Need It". If you are not sure a class will be derived from, assume it won't be and don't make member functions virtual. Nothing says "don't derive from me" like a non-virtual destructor, by the way (edit: in C++11 and up, you have the final keyword, which is even better; see the sketch at the end of this answer). It's also about intent. If it's not your intent to use the class polymorphically, don't make anything virtual. If you arbitrarily make members virtual you are inviting abuses of the Liskov Substitution Principle and those classes of bugs are painful to track down and solve.
Performance / memory footprint. A class that has no virtual member functions does not require a VTable (virtual table, used to redirect polymorphic calls through a base class pointer) and thus (potentially) takes up less space in memory. Also, a straight member function call is (potentially) faster than a virtual member function call.
Don't prematurely pessimize your class by pre-emptively making member functions virtual.
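A tiny sketch of the C++11 option mentioned in the first reason above (class name made up):

class Logger final {             // 'final': deriving from Logger is a compile-time error
public:
    void log(const char* msg);   // non-virtual: bound at compile time, inlinable
};

// class FancyLogger : public Logger {};   // error: cannot derive from a 'final' class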
When you design a class you should have a pretty good idea as to whether it represents an interface (in which case you mark the appropriate overridable methods and destructor virtual) or whether it's intended to be used as-is, possibly composing or composed with other objects.
In other words your intent for the class should be your guide. Making everything virtual is often overkill and sometimes misleading regarding which methods are intended to support runtime polymorphism.
It's a tricky question. But there are some guidelines / rules of thumb to follow.
As long as you do not need to derive from a class, then don't write any virtual method, once you need to derive, only make virtual those methods you need to customize in the child class.
If a class has a virtual method, then the destructor shall be virtual (end of discussion).
Try to follow the NVI (Non-Virtual Interface) idiom: make virtual methods non-public and provide public wrappers in charge of checking pre- and post-conditions, so that derived classes cannot accidentally break them (a sketch follows at the end of this answer).
I think those are simple enough. I deliberately left the ABI part of the discussion aside; it's only relevant when delivering DLLs.
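A minimal sketch of the NVI idiom from the third guideline above (names invented): the public non-virtual wrapper checks the contract once, and derived classes customize only the private virtual hook.

#include <cassert>

class Connection {
public:
    void send(const char* data, int size) {    // public and non-virtual
        assert(data != nullptr && size > 0);   // precondition enforced in one place
        do_send(data, size);                   // the customization point
        // post-conditions could be checked here as well
    }
    virtual ~Connection() = default;
private:
    virtual void do_send(const char* data, int size) = 0;  // overridable hook
};

class TcpConnection : public Connection {
private:
    void do_send(const char* data, int size) override { /* ... */ }
};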
If your code is following a particular design pattern, then your choice should reflect the DP's own principles. For example, if you are coding a Decorator pattern, the functions that should be virtual are the ones that belong to the Component interface.
Otherwise, I'd like to follow an evolutionary approach; in other words, I don't have virtual methods until I see that a hierarchy is trying to emerge from the code.
In Java, for example, instance member functions are 100% virtual. In C++ it is considered a code-size/function-call-time penalty. Additionally, a non-virtual function guarantees that the function implementation will always be the same (when using the base class object/reference). Scott Meyers discusses it in more detail in "Effective C++".
A sanity test I mostly use is: if the class I'm defining is derived from in the future, would this behavior (function) remain the same, or would it need to be redefined? If it would need to be redefined, the function is a strong contender for being virtual; if not, then no; if I do not know, I probably need to look into the problem domain for a better understanding of the behavior I'm planning to implement. Mostly the problem domain gives me the answer; in cases where it does not, the behavior is generally not critical.
I guess one possible way to determine quickly would be to consider if you are going to deal with a bunch of similar classes that you are going to use to perform the same tasks, with the change being the way you go about doing those tasks.
One trivial example would be the problem of computing areas for various geometric figures. You need the areas of squares, circles, rectangles, triangles etc. and the only thing that changes here is the math formulae (the way) you use to compute the area. Therefore it would be a good decision to have each of these shapes inherit from a common base class and to add a virtual method in the base class that returns the area (which you can then implement in each of the child with the respective math formula).
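A hedged sketch of that shapes example (types and formulae are the obvious ones):

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;            // the customization point
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.141592653589793 * r * r; }
};

struct Rectangle : Shape {
    double w, h;
    Rectangle(double width, double height) : w(width), h(height) {}
    double area() const override { return w * h; }
};

double total_area(const Shape* const* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i]->area();               // same call site, different formula per shape
    return sum;
}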
Making everything virtual "just in case" will make your objects take up more memory. Additionally, there is a small (but non-zero) overhead when calling virtual functions. So, IMHO, making everything virtual "just in case" would be a bad idea when performance/memory constraints are important (which basically means in almost every real-world program that you write).
However, this again is debatable based on how clearly the requirements are spelled out and how often code-changes expected. For example, in a quick-and-dirty tool or an initial prototype where a few extra bytes of memory and a few milliseconds of lost time do not mean much, it would be OK to have a bunch (unnecessarily) virtual functions for the sake of flexibility.
My point is: if you want to use a parent class pointer to point at a child class instance and use its methods, then you should use virtual methods.
Virtual methods are a way to achieve polymorphism. They are used when you want to define some action at a more abstract level, such that it is impossible to actually implement it because it is too general. Only in derived classes can you tell how to perform that action. But with the definition of a virtual method you create a requirement, which adds rigidity to the hierarchy of classes. This can be advisable or not; it depends on what you are trying to obtain, and on your own taste.
Have a look at Design Patterns. If your code/design is one of these or similar, use virtual functions. Otherwise, try this

Does several levels of base classes slow down a class/struct in c++?

Does having several levels of base classes slow down a class? A derives B derives C derives D derives F derives G, ...
Does multiple inheritance slow down a class?
Non-virtual function calls have absolutely no extra performance hit at run time, in accordance with the C++ mantra that you shouldn't pay for what you don't use.
In a virtual function call, you generally pay for an extra pointer lookup, no matter how many levels of inheritance, or number of base classes you have.
Of course this is all implementation defined.
Edit: As noted elsewhere, in some multiple inheritance scenarios, an adjustment to the 'this' pointer is required before making the call. Raymond Chen describes how this works for COM objects. Basically, calling a virtual function on an object that inherits from multiple bases can require an extra subtraction and a jmp instruction on top of the extra pointer lookup required for a virtual call.
[Deep inheritance hierarchies] greatly increases the maintenance burden by adding unnecessary complexity, forcing users to learn the interfaces of many classes even when all they want to do is use a specific derived class. It can also have an impact on memory use and program performance by adding unnecessary vtables and indirection to classes that do not really need them. If you find yourself frequently creating deep inheritance hierarchies, you should review your design style to see if you've picked up this bad habit. Deep hierarchies are rarely needed and almost never good. And if you don't believe that but think that "OO just isn't OO without lots of inheritance," then a good counter-example to consider is the [C++] standard library itself. -- Herb Sutter
Does multiple inheritance slow down a class?
As mentioned several times, a deeply nested single inheritance hierarchy should impose no additional overhead for a virtual call (above the overhead imposed for any virtual call).
However, when multiple inheritance is involved, there is sometimes a very slight additional overhead when calling the virtual function through a base class pointer. In this case some implementations have the virtual function go through a small thunk that adjusts the 'this' pointer since
(static_cast<Base*>(this) == this)
is not necessarily true, depending on the object layout.
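A hedged sketch of why that equality can fail (classes invented): the second base subobject does not start at the beginning of the complete object, so converting the pointer adjusts its value, and a thunk performs the reverse adjustment before entering an overrider.

struct Left  { int l; virtual ~Left()  = default; };
struct Right { int r; virtual ~Right() = default; };
struct Both : Left, Right { };

void demo(Both* p) {
    Left*  a = p;   // usually the same address as p
    Right* b = p;   // usually p plus the size of the Left subobject: the pointer is adjusted
    (void)a; (void)b;
}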
Note that all of this is very, very implementation dependent.
See Lippman's "Inside the C++ Object Model" Chapter 4.2 - Virtual Member Functions/Virtual Functions under MI
There is no speed difference between virtual calls at different levels since they all get flattened out into the vtable (pointing to the most derived versions of the overridden methods). So, calling ((A*)inst)->Method() when inst is an instance of B is the same overhead as when inst is an instance of D.
Now, a virtual call is more expensive than a non-virtual call, but this is because of the pointer dereference and not a function of how deep the class hierarchy actually is.
Virtual calls themselves are more time-consuming than normal calls because the address of the actual function to call has to be looked up in the vtable.
Additionally, compiler optimizations like inlining might be hard to perform due to the lookup requirement. Situations where inlining is not possible can themselves lead to quite a high overhead due to stack push/pop and jump operations.
Here is a proper study which says the overhead can be as high as 50% http://www.cs.ucsb.edu/~urs/oocsb/papers/oopsla96.pdf
Here is another resource that looks at a side effect of having a large library of virtual classes http://keycorner.org/pub/text/doc/kde-slow.txt
The dispatching of virtual calls with multiple inheritance is compiler-specific, so the implementation will also have an effect in this case.
Regarding your specific question of having a large number of base classes, the memory layout of a class object usually contains the vtable pointers for all of its constituent base classes.
Check this page for a sample vtable layout - http://www.codesourcery.com/public/cxx-abi/cxx-vtable-ex.html
So a call to a method implemented by a class deeper in the hierarchy would still only be a single indirection, and not multiple indirections as you seem to think. The call does not have to navigate from class to class to finally find the exact function to call.
However, if you are using composition instead of inheritance, each call through such a pointer would be a virtual call, and that overhead would be present; and if, within that virtual call, that class uses more composition, more virtual calls would be made. That kind of design would be slower depending on the number of calls you made.
Almost all answers point toward whether or not virtual methods would be slower in the OP's example, but I think the OP is simply asking if having several level of inheritance in and of itself is slow. The answer is of course no since this all happens at compile-time in C++. I suspect the question is driven from experience with script languages where such inheritance graphs can be dynamic, and in that case, it potentially could be slower.
If there are no virtual functions, then it shouldn't. If there are then there is a performance impact in calling the virtual functions as these are called via function pointers or other indirect methods (depends on the situation). However, I do not think that the impact is related to the depth of the inheritance hierarchy.
Brian, to be clear and answer your comment. If there are no virtual functions anywhere in your inheritance tree, then there is no performance impact.
Having non-trivial constructors in a deep inheritance tree can slow down object creation, when every creation of a child results in function calls to all the parent constructors all the way up to the base.
Yes, if you're referencing it like this:
// F is-a E,
// E is-a D and so on
A* aObject = new F();
aObject->CallAVirtual();
Then you're working with a pointer to an A type object. Given you're calling a function that is virtual, it has to look up the function table (vtable) to get the correct pointers. There is some overhead to that, yes.
Calling a virtual function is slightly slower than calling a nonvirtual function. However, I don't think it matters how deep your inheritance tree is.
But this is not a difference that you should normally be worried about.
While I'm not completely sure, I think that unless you're using virtual methods, the compiler should be able to optimize it well enough that inheritance shouldn't matter too much.
However, if you're frequently calling up to functions in the base class, which call functions in their base class, and so on, it could impact performance.
In essence, it highly depends on how you've structured your inheritance tree.
As pointed out by Corey Ross the vtable is known at compile time for any leaf derived class, and so the cost of the virtual call really should be the same irrespective of the structure of the hierarchy.
This, however, cannot be said for dynamic_cast. If you consider how you might implement dynamic_cast, a basic approach will be to have an O(n) search through your hierarchy!
In the case of a multiple inheritance hierarchy, you are also paying a small cost to convert between different classes in the hierarchy:
struct A { int i; };
struct B { int j; };
struct C : public A, public B { int k; };
// Let's assume that the layout of C is: { [ int i ] [ int j ] [ int k ] }
void foo (C * c) {
    A * a = c;               // Probably has zero cost
    B * b = c;               // Compiler needs to add sizeof(A) to 'c'
    c = static_cast<C*>(b);  // Compiler needs to subtract sizeof(A) from 'b'
}