I always mark my classes with a virtual destructor even when it is not needed. Other than maybe a small performance hit could there be a situation where having a virtual destructor when you do not need one cause memory errors or something terrible?
Thanks
It’s a fundamental flaw to make all classes extensible. Most classes are simply not suitable to be inherited from and it makes no sense to facilitate this if you don’t design classes for extension up front.
This is just misleading the users of your API who will take this as a hint that the class is meaningfully inheritable. In reality, this is rarely the case and will bring no benefit, or break code in the worst case.
Once you make a class inheritable, you're saddled with it for the rest of your life: you cannot change its interface, and you must never break the (implicit!) semantics it has. Essentially, the class is no longer yours.
On the other hand, inheritance is way overrated anyway. Apart from its use for public interfaces (pure virtual classes), you should generally prefer composition over inheritance.
Another, more fundamental case where a virtual destructor is undesirable is when the code you have requires a POD to work – such as when using it in a union, when interfacing with C code or when performing POD-specific optimisations (only PODs are blittable, meaning they can be copied very efficiently).
[Hat tip to Andy]
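As a concrete illustration of that last point - a minimal sketch, assuming C++11 and <type_traits> (the struct names are made up) - adding nothing but a virtual destructor is enough to make a type non-trivially-copyable, so memcpy-style "blitting" and similar POD-only tricks no longer apply:

#include <type_traits>

struct Plain {            // trivially copyable: safe to copy with memcpy ("blittable")
    int x;
    double y;
};

struct WithVirtual {      // identical data, but the virtual destructor breaks triviality
    int x;
    double y;
    virtual ~WithVirtual() = default;
};

static_assert(std::is_trivially_copyable<Plain>::value, "can be memcpy'd");
static_assert(!std::is_trivially_copyable<WithVirtual>::value, "cannot be memcpy'd safely");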
A word about performance overhead: there are situations in which lots of small objects are created, or objects are created in tight loops. In those cases, the performance overhead of virtual destructors can be a crucial mistake.
Classes which have virtual function tables are also larger than classes without, which can also lead to unnecessary performance impact.
All in all, there are no compelling reasons to make destructors virtual, and some compelling reasons not to.
There's no point in declaring it virtual if you don't plan on inheriting from the class (or if the class is not meant to be inherited from).
On the other hand, if you want to access this class polymorphically, then yes, a virtual destructor is a good thing to have.
But to answer your question precisely: it cannot cause any "terrible memory errors", and marking it virtual all the time can't really hurt you.
But I see no reason to use a virtual destructor all the time. It's up to you.
Also, this post by Herb sheds some light on the matter.
No, AFAIK. The virtual destructor either behaves exactly the same way as the non-virtual one (i.e. the virtual and direct calls invoke the same function) or you get undefined behavior. So you cannot "do something terrible" by changing a non-virtual destructor to a virtual one.
It can, however, expose errors caused by other parts of the code, e.g. when you accidentally overwrite the virtual table pointer of an object.
When I define a class in C++ I always define the dtor as virtual.
This is my way to protect myself in case I later write an inheriting class.
I wonder whether I pay the performance overhead even when I end up not inheriting from the class.
For example:
#include <cstdio>

class A final
{
public:
    A();
    virtual ~A() { printf("dtor"); }
};
When I use this class, will the dtor actually get called through the vtable, or will the call be resolved statically as a direct call?
When I define a class in C++ I always define the dtor as virtual.
This is very bad practice. Classes should either be designed to be polymorphic... or not. It's not just an issue of design either - polymorphism adds overhead.
Now, when good compilers see delete a; and can prove that a will only ever point to an A, they will remove the virtual call and call ~A() directly. This is called devirtualization. But what they won't do is remove the vtable. Adding unnecessary polymorphism means all your types now have vtables, which means they're all using extra space. In your simple example, the presence of virtual increases sizeof(A) from 1 to 8. If you have a lot of As, you're now messing with cache effects. This is bad.
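A rough sketch of the size effect (the exact numbers are implementation-specific; 1 and 8 are what you would typically see on a 64-bit platform):

#include <cstdio>

struct Plain {                 // no virtual members: no hidden vtable pointer
    void f();
};

struct Polymorphic {           // one virtual member is enough to add a vptr
    virtual ~Polymorphic() {}
};

int main() {
    std::printf("%zu %zu\n", sizeof(Plain), sizeof(Polymorphic));
    // typically prints "1 8" on a 64-bit implementation
}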
In short, design your classes according to their use. Not according to some problems that you may or may not eventually have if they are misused.
This is my way to protect myself in case I will write an inheriting class.
Note also that not all inheritance must be polymorphic - not even classes that are intended to be inherited from need to have a virtual destructor. That's only necessary if the usage is to hold onto a Base* and then delete it. It's perfectly safe for me to inherit from something like std::vector<> to provide a different interface - as long as I'm not trying to delete my inherited type through std::vector<>.
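A sketch of that kind of non-polymorphic reuse (IntStack is just an illustrative name); the only rule is never to delete through the base pointer:

#include <vector>

// Extends std::vector<int> with a different interface; nothing here is virtual.
struct IntStack : std::vector<int> {
    void push(int v) { push_back(v); }
    int  pop()       { int v = back(); pop_back(); return v; }
};

int main() {
    IntStack s;          // automatic storage: the right destructor always runs
    s.push(1);
    s.push(2);
    int top = s.pop();
    (void)top;

    // What must be avoided:
    //   std::vector<int>* p = new IntStack;
    //   delete p;        // undefined behavior: ~vector() is not virtual
}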
On the other hand, this
class A final { ... };
is good practice! If A isn't intended to be inherited from, explicitly make it ill-formed to inherit from it. Now, when you need to inherit from A, you have to make a conscious effort to think about the consequences of doing so.
As soon as you declare the class as final, it cannot be used as a base class for any other one. So the virtual does not make sense.
Because of the as-if rule, the compiler is then free to ignore the virtual keyword, but it is not required to do so. BTW, the mere existence of a vtable is an implementation detail and is not required by the standard.
TL;DR: it depends on the compiler implementation.
Consider
class A
{
public:
    virtual void foo() = 0;
};
At this point it is absolutely obvious that A is an abstract class and will never be instantiated on its own. So why doesn't the standard demand that the automatically generated destructor must be virtual as well?
I ask myself this question every time I need to define a dummy virtual destructor in my interface classes, and I can't see why the committee didn't do this.
So the question: why is the generated destructor in an abstract class not virtual?
Because in C++ you don't pay for what you don't need, and a virtual destructor adds overhead (even in already polymorphic classes) that isn't needed in many cases. For example you might not need polymorphic destruction and choose to have a protected destructor instead.
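For illustration, a minimal sketch of that alternative, assuming C++11 (the class names are hypothetical): the class is used polymorphically but never deleted through a base pointer, so the destructor is protected and non-virtual.

class Callback {
public:
    virtual void run() = 0;
protected:
    ~Callback() = default;          // non-virtual: 'delete' via Callback* won't compile
};

class PrintCallback : public Callback {
public:
    void run() override { /* ... */ }
};

void invoke(Callback& cb) { cb.run(); }   // polymorphic use without owning the object

int main() {
    PrintCallback cb;
    invoke(cb);                     // destruction happens via the concrete type
    // delete static_cast<Callback*>(new PrintCallback);   // error: ~Callback is protected
}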
Further, as an alternative scenario, imagine that you have a class with a virtual method that does desire polymorphic destruction. Now imagine that the other virtual method is no longer needed and removed but polymorphic destruction is still needed. Now you have to remember to go back and add a virtual destructor or suffer undefined behavior.
Finally, I think it would be hard to justify changing the default virtualness of the destructor (and it alone) based on whether a class is polymorphic or not, rather than always and consistently making a destructor non-virtual unless requested otherwise.
A virtual destructor would cause an indirect call through the vtable every time an object of this class is destroyed. It's a rather small overhead, but C++ wants to save as much time as possible. Anyway, being explicit is always better than trusting implicit compiler magic.
C++'s motto: "Trust the programmer".
Best regards, ntor
When the C++ standard was written, it was written keeping in mind that it would be used on various platforms, some of which might have memory constraints. By adding virtualness we increase the overhead. That is why every method/dtor needs to be explicitly made virtual by the programmer, whenever we do require polymorphism.
Now the question becomes: why can't the standard make the default destructor of an abstract class virtual? Don't you think it would be strange to have a different rule there, and wouldn't it cause confusion? And what about the case (however small it is) when you don't need the destructor to be virtual (so as to save memory)? Why waste the memory?
Is there any efficiency disadvantage associated with deep inheritance trees (in C++), i.e., a large set of classes A, B, C, and so on, such that B extends A, C extends B, and so forth? One efficiency implication that I can think of is that when we instantiate the bottom-most class, say C, the constructors of B and A are also called, which will have performance implications.
Let's enumerate the operations we should consider:
Construction/destruction
Each constructor/destructor will call its base class equivalents. However, as James McNellis pointed out, you were obviously going to do that work anyway. You didn't derive from A just because it was there. So the work is going to get done one way or another.
Yes, it will involve a few more function calls. But function call overhead will be nothing compared to the actual work any significantly deep class hierarchy will have to actually do. If you're at the point where function call overhead is actually important for performance, I would strongly suggest that calling constructors at all is probably not what you want to be doing in that code.
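For what it's worth, here is a minimal sketch of that chaining (class names are arbitrary): constructing the most derived object simply runs each constructor once, and the destructors run in reverse order.

#include <cstdio>

struct A     { A() { std::puts("A()"); }  ~A() { std::puts("~A()"); } };
struct B : A { B() { std::puts("B()"); }  ~B() { std::puts("~B()"); } };
struct C : B { C() { std::puts("C()"); }  ~C() { std::puts("~C()"); } };

int main() {
    C c;    // prints A() B() C(); at end of scope, ~C() ~B() ~A()
}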
Object Size
In general, the overhead for a derived class is nothing. The overhead for virtual members or virtual inheritance is a pointer.
Member Function Calls, Static
By this, I mean calling non-virtual member functions, or calling virtual member functions with class names (ClassName::FunctionName syntax). Both of these allow the compiler to know at compile time which function to call.
The performance of this is invariant with the size of the hierarchy, since it's compile-time determined.
Member Function Calls, Dynamic
This is calling virtual functions with the full and complete expectation of runtime calls.
Under most sane C++ implementations, this is invariant with the size of the object hierarchy. Most implementations use a v-table for each class. Each object has a v-table pointer as a member. For any particular dynamic call, the compiler accesses the v-table pointer, picks out the method, and calls it. Since the v-table is the same for each class, it won't be any slower for a class that has a deep hierarchy than one with a shallow one.
Virtual inheritance plays a bit with this.
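A small sketch of why depth doesn't matter for the call itself (the names are made up): the call site below is the same single vtable lookup whether the dynamic type is one or ten levels below Base.

struct Base            { virtual void f() {} virtual ~Base() = default; };
struct Middle : Base   { void f() override {} };
struct Deep   : Middle { void f() override {} };

void call(Base& b) {
    b.f();   // one vptr load + one indirect call, regardless of how deep b's real type is
}

int main() {
    Deep d;
    call(d);
}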
Pointer Casts, Static
This refers to static_cast or any equivalent operation. This means the implicit cast from a derived class to a base class, the explicit use of static_cast or C-style casts, etc.
Note that this technically includes reference casting.
The performance of static casts between classes (up or down) is invariant with the size of the hierarchy. Any pointer offsets will be compile-time generated. This should be true for virtual inheritance as well as non-virtual inheritance, but I'm not 100% certain of that.
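As a sketch of what "compile-time generated offsets" means (illustrative types only): with multiple non-virtual inheritance the conversion may add or subtract a fixed offset, but that offset is baked in at compile time.

struct A { int a; };
struct B { int b; };
struct D : A, B { int d; };

int main() {
    D  d{};
    A* pa = &d;                     // usually no adjustment at all
    B* pb = &d;                     // usually a fixed offset, known at compile time
    D* back = static_cast<D*>(pb);  // the same fixed offset, subtracted at compile time
    (void)pa; (void)back;
}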
Pointer Casts, Dynamic
This obviously refers to the explicit use of dynamic_cast. This is typically used when casting from a base class to a derived one.
The performance of dynamic_cast will likely change for a large hierarchy. But sane implementations should only check the classes between the current class and the requested one. So it's simply linear in the number of classes between the two, not linear in the number of classes in the hierarchy.
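A sketch of that (hypothetical class names): the dynamic_cast below only has to consider the path Base -> Mid -> Leaf, not every other class in the program.

struct Base { virtual ~Base() = default; };   // must be polymorphic for dynamic_cast
struct Mid  : Base {};
struct Leaf : Mid  {};

void inspect(Base* b) {
    if (Leaf* l = dynamic_cast<Leaf*>(b)) {   // walks the Base..Leaf chain only
        (void)l;                              // b really points to a Leaf
    }
}

int main() {
    Leaf leaf;
    inspect(&leaf);
}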
Typeid
This means the use of the typeid operator to fetch the std::type_info object associated with an object.
The performance of this will be invariant with the size of the hierarchy. If the class is a virtual one (has virtual functions or virtual base classes), then it will simply pull it out of the vtable. If it's not virtual, then it's compile-time defined.
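A minimal illustration (the printed name is implementation-defined, often mangled):

#include <cstdio>
#include <typeinfo>

struct Base    { virtual ~Base() = default; };
struct Derived : Base {};

int main() {
    Derived d;
    Base&   b = d;
    std::printf("%s\n", typeid(b).name());    // polymorphic operand: looked up via the vtable, yields the dynamic type
    std::printf("%s\n", typeid(int).name());  // non-polymorphic operand: resolved entirely at compile time
}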
Conclusion
In short, most operations are invariant with the size of the hierarchy. But even in the cases where it has an impact, it's not a problem.
I'd be more concerned with some design ethic where you felt the need to build such a hierarchy. In my experience, hierarchies like this come from two lines of design.
The Java/C# ideal of having everything derived from a common base class. This is a horrible idea in C++ and should never be used. Each object should derive from what it needs to, and only that. C++ was built on the "pay for what you use" principle, and deriving from a common base works against that. In general, anything you could do with such a common base class is either something you shouldn't be doing period, or something that could be done with function overloading (using operator<< to convert to strings, for example).
Misuse of inheritance. Using inheritance when you should be using containment. Inheritance creates an "is a" relationship between objects. More often than not, "has a" relationships (one object having another as a member) are far more useful and flexible. They make it easier to hide data, and you don't allow the user to pretend one class is another.
Make sure that your design does not fall afoul of one of these principles.
There will be, but not as bad as the programmer-performance implications.
As @Nicol points out, it may be doing a number of things.
If those are things that you require to be done, regardless of design, because they are all precisely necessary steps in getting the program from the call to main to exit within the fewest possible cycles, then your design is simply a matter of coding clarity (or maybe lack of it :).
In my experience performance tuning, as in this example, what I often see as a huge source of wasted time is over-design of data (i.e. class) structures.
Weirdly enough, the justification for the data structures is often (guess what?) - performance!
In my experience, the thing to do with data structure is keep it as simple as possible and as normalized as possible. If it is completely normalized, then any single change to it can't make it inconsistent. You can't always achieve complete normalization, in which case you have to deal with the possibility that the data can be temporarily inconsistent.
This is why people write notification handlers, and this is encouraged in OOP.
The idea is, if you change something in one place, that can trigger notifications that "automatically" propagate the change to other places, trying to maintain consistency.
The problem with notifications is they can run away. Simply changing some boolean property from true to false can cause a fire-storm of notifications ripping through the data structure in ways no one programmer understands, updating databases, painting windows, zipping files, etc. etc. I often find this is where most clock cycles go.
I think it is simpler and far more efficient to temporarily tolerate inconsistency, and periodically repair it with some kind of sweeping process.
Another way data structures lead to huge inefficiency is when the data is effectively being interpreted by some process to produce some output.
This is very common in graphics.
If the data changes at a very slow rate, it may make sense to "compile" it rather than "interpret" it.
In other words, translate it into a simpler instruction set, or source code which is compiled "on the fly", which can then execute far more quickly to produce the desired output.
I understand that virtual methods allow a derived class to override methods inherited from a base class. When is it appropriate/inappropriate to use virtual methods? It's not always known whether or not a class will be sub classed. Should everything be made virtual, just "in case?" Or will that cause significant overhead?
First a slightly pedantic remark - in C++ standardese we call them member functions, not methods, though the two terms are equivalent.
I see two reasons NOT to make a member function virtual.
"YAGNI" - "You Ain't Gonna Need It". If you are not sure a class will be derived from, assume it won't be and don't make member functions virtual. Nothing says "don't derive from me" like a non-virtual destructor by the way (edit: In C++11 and up, you have the final keyword] which is even better). It's also about intent. If it's not your intent to use the class polymorphically, don't make anything virtual. If you arbitrarily make members virtual you are inviting abuses of the Liskov Substitution Principle and those classes of bugs are painful to track down and solve.
Performance / memory footprint. A class that has no virtual member functions does not require a VTable (virtual table, used to redirect polymorphic calls through a base class pointer) and thus (potentially) takes up less space in memory. Also, a straight member function call is (potentially) faster than a virtual member function call.
Don't prematurely pessimize your class by pre-emptively making member functions virtual.
When you design a class you should have a pretty good idea as to whether it represents an interface (in which case you mark the appropriate overrideable methods and destructor virtual) OR it's intended to be used as-is, possibly composing or composed with other objects.
In other words your intent for the class should be your guide. Making everything virtual is often overkill and sometimes misleading regarding which methods are intended to support runtime polymorphism.
It's a tricky question. But there are some guidelines / rule of thumbs to follow.
As long as you do not need to derive from a class, don't write any virtual method; once you need to derive, only make virtual those methods you need to customize in the child class.
If a class has a virtual method, then the destructor shall be virtual (end of discussion).
Try to follow the NVI (Non-Virtual Interface) idiom: make virtual methods non-public and provide public wrappers in charge of checking pre- and post-conditions, so that derived classes cannot accidentally break them (a sketch follows after this list).
I think those are simple enough. I deliberately left the ABI part of the discussion aside; it's only useful when delivering DLLs.
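Here is the NVI sketch promised above - the class names and the pre/post-conditions are made up purely for illustration, assuming C++11:

#include <cassert>

class Serializer {
public:
    // Public non-virtual wrapper: it alone owns the pre/post-conditions.
    int serialize(const char* data, int size) {
        assert(data != nullptr && size >= 0);    // pre-condition
        int written = do_serialize(data, size);  // customization point
        assert(written <= size);                 // post-condition
        return written;
    }
    virtual ~Serializer() = default;

private:
    // Private virtual: derived classes customize behaviour but cannot bypass the checks.
    virtual int do_serialize(const char* data, int size) = 0;
};

class NullSerializer : public Serializer {
private:
    int do_serialize(const char*, int size) override { return size; }
};

int main() {
    NullSerializer s;
    return s.serialize("abc", 3) == 3 ? 0 : 1;
}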
If your code is following a particular design pattern, then your choice should reflect the DP's own principles. For example, if you are coding a Decorator pattern, the function that should be virtual are the ones that belong to the Component interface.
Otherwise, I'd like to follow an evolutionary approach; IOW, I don't add virtual methods until I see that a hierarchy is trying to emerge from the code.
For instance, member functions in Java are 100% virtual. In C++ it is considered a code-size/function-call-time penalty. Additionally, a non-virtual function guarantees that the function implementation will always be the same (when using a base class object/reference). Scott Meyers discusses this in more detail in "Effective C++".
A sanity test I mostly use is: if a class I'm defining is derived from in the future, would the behavior (function) remain the same, or would it need to be redefined? If it would need to be redefined, the function is a strong contender for being virtual; if not, then no; if I do not know, I probably need to look into the problem domain for a better understanding of the behavior I'm planning to implement. Mostly the problem domain gives me the answer - in cases where it does not, the behavior is generally not critical.
I guess one possible way to determine quickly would be to consider if you are going to deal with a bunch of similar classes that you are going to use to perform the same tasks, with the change being the way you go about doing those tasks.
One trivial example would be the problem of computing areas for various geometric figures. You need the areas of squares, circles, rectangles, triangles, etc., and the only thing that changes here is the math formula (the way) you use to compute the area. Therefore it would be a good decision to have each of these shapes inherit from a common base class and to add a virtual method in the base class that returns the area (which you then implement in each of the children with the respective math formula).
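A rough sketch of that design (the names and the particular shapes are just for illustration), assuming C++11:

#include <vector>

struct Shape {
    virtual double area() const = 0;        // "the way of doing the task" varies per shape
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.141592653589793 * r * r; }
};

struct Rectangle : Shape {
    double w, h;
    Rectangle(double width, double height) : w(width), h(height) {}
    double area() const override { return w * h; }
};

double total_area(const std::vector<const Shape*>& shapes) {
    double sum = 0;
    for (const Shape* s : shapes)
        sum += s->area();                   // same call site, different formula per shape
    return sum;
}

int main() {
    Circle c(1.0);
    Rectangle r(2.0, 3.0);
    const std::vector<const Shape*> shapes{ &c, &r };
    return total_area(shapes) > 0 ? 0 : 1;
}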
Making everything virtual "just in case" will make your objects take up more memory. Additionally, there is a small (but non-zero) overhead when calling virtual functions. So, IMHO, making everything virtual "just in case" would be a bad idea when performance/memory constraints are important (which basically means in essentially every real-world program that you write).
However, this again is debatable based on how clearly the requirements are spelled out and how often code-changes expected. For example, in a quick-and-dirty tool or an initial prototype where a few extra bytes of memory and a few milliseconds of lost time do not mean much, it would be OK to have a bunch (unnecessarily) virtual functions for the sake of flexibility.
My point is: if you want to use a parent class pointer to point at a child class instance and use its methods, then you should use virtual methods.
Virtual methods are a way to achieve polymorphism. They are used when you want to define some action at a more abstract level, such that it is impossible to actually implement it because it is too general. Only in derived classes can you tell how to perform that action. But with the definition of a virtual method you create a requirement that adds rigidity to the hierarchy of classes. This can be advisable or not; it depends on what you are trying to obtain, and on your own taste.
Have a look at Design Patterns. If your code/design is one of these or similar, go ahead and use virtual functions. Otherwise, try this.
I see from this entry that virtual inheritance adds sizeof(pointer) to an object's memory footprint. Other than that, are there any drawbacks to me just using virtual inheritance by default, and conventional inheritance only when needed? It seems like it'd lead to more future-proof class design, but maybe I'm missing some pitfall.
The drawbacks are that
All classes will have to initialize all their virtual bases all the time (e.g. if A is a virtual base of B, and C derives from B, C also has to initialize A itself; see the sketch below).
You have to use the more expensive dynamic_cast where a static_cast would otherwise suffice (this may or may not be an issue, depending on your system and whether your design requires such casts).
Point 1 alone makes it not worth it, since you can't hide your virtual bases. There is almost always a better way.
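A minimal sketch of point 1 (types named after the example above): every further-derived class has to initialize the virtual base itself, so the base can never be an encapsulated detail of B.

struct A {
    explicit A(int v) : value(v) {}
    int value;
};

struct B : virtual A {
    B() : A(1) {}          // B initializes its virtual base...
};

struct C : B {
    C() : A(2), B() {}     // ...but so must C: the most derived class constructs the
                           // virtual base, and B's A(1) is ignored when building a C
};

int main() {
    C c;
    return c.value == 2 ? 0 : 1;   // 2, not 1
}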
In my experience, virtual inheritance (as opposed to virtual methods) is almost never needed. In C++ it's used to address the "diamond inheritance problem", which cannot actually happen if you avoid multiple inheritance.
I'm pretty sure that I've never encountered virtual inheritance outside C++ books, which includes both code I write and million+ line systems I maintain.