Looking at the following simple code, does it make sense to introduce a virtual destructor if I know that we are not deleting through a base pointer? It seems we should avoid vtable lookups where possible for performance reasons. I understand the arguments about premature optimization, but this is just a question in general. I was wondering your thoughts on the following:
using a protected destructor if we are not deleting items through a base pointer
the overhead associated with introducing a single virtual method
Also, if my class has only the destructor as a virtual method, would the lookup overhead apply only to the destructor, or would other methods incur a penalty too once a vptr is introduced? I am assuming that each instance would carry an extra vptr, but that vptr lookups would only be performed for the destructor.
#include <vector>
// Card and Card::CollectionType are assumed to be defined elsewhere.

class CardPlayer
{
public:
    typedef std::vector<CardPlayer> CollectionType;

    CardPlayer() = default;
    explicit CardPlayer(const Card::CollectionType& cards);
    explicit CardPlayer(Card::CollectionType&& cards);

    void receiveCard(const Card& card);
    bool discardCard(Card&& card);
    void foldCards();
    const Card::CollectionType& getCards() const { return cards_; }
    // virtual ~CardPlayer() = default; // should we introduce a vtable if not really needed?

protected:
    ~CardPlayer() = default;
    Card::CollectionType cards_;
};
--------------------------------------------------------------------
#include "CardPlayer.h"
#include <functional>
class BlackJackPlayer : public CardPlayer
{
public:
typedef std::vector<BlackJackPlayer> CollectionType;
typedef std::function<bool(const Card::CollectionType&)> hitFnType;
BlackJackPlayer(hitFnType fn) : hitFn_(fn) {}
bool wantHit()
{
return hitFn_(getCards());
}
hitFnType hitFn_;
};
I'd avoid the virtual destructor, and hence avoid adding a vtbl to the class, in your case. You can protect the class from being deleted through a base class pointer, so without any other virtual methods a virtual destructor would be a premature pessimisation :)
Also, having one more pointer per instance (the vptr) can add up in large projects. Performance often depends on memory access, so you should keep the object size as small as possible and your memory access patterns as local as possible. The vtbl will be in a different memory location, and in the worst case you ask the processor to read another cache line just to delete an object.
To answer your other question: only virtual methods are routed via the vtbl; all non-virtual calls are unaffected.
To look at it from the other direction: if you are assuming you will never call delete on a base class pointer, there is no need to make the destructor virtual, and hiding the base class destructor is a good way to enforce your assumption.
Regarding simply making your destructor virtual: as soon as you declare any method virtual, the class gets a vtable and every instance gains a vptr, which increases the memory footprint. Only the virtual functions incur a vtable lookup. However, both the extra memory (a single pointer per instance) and the lookup have minimal performance costs, so you typically would not have to worry about it except in the most performance-critical applications.
Making the destructor protected means that it can't be invoked via a base class pointer or reference, which means there's no strong need for it to be virtual.
In general, only methods that can be invoked from outside the class need to be virtual; protected methods need to be virtual only if they're called polymorphically from other methods in the base class.
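As a minimal sketch of that idiom (class names here are illustrative, not from the question's code), a protected non-virtual destructor lets the compiler enforce the no-delete-through-base assumption while keeping the hierarchy vtable-free:

class Base {
public:
    void play() {}             // non-virtual: no vtable anywhere in this hierarchy
protected:
    ~Base() = default;         // non-virtual, protected: blocks delete via Base*
};

class Derived : public Base {};

int main() {
    Derived d;                 // stack use is fine
    Base* b = &d;
    b->play();
    // delete b;               // error: ~Base() is protected here, won't compile
    Derived* dp = new Derived; // heap use through the most-derived type is fine
    delete dp;                 // OK: ~Derived() is public (implicitly declared)
}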
Related
I read that there is a minor performance hit when calling virtual functions from derived classes if they are called repeatedly. One thing I am not clear about is whether this affects function calls from the base class itself. If I call a method in the base class with the virtual keyword, does this performance hit still occur?
If I call a method in the base class with the virtual keyword, does this performance hit still occur?
The fact that the virtual function is being called from the base class will not prevent the virtual lookup.
Consider this trivial example:
#include <iostream>

class Base
{
public:
    virtual int get_int() { return 1; }

    void print_int()
    {
        // Calling a virtual function from the base class
        std::cout << get_int();
    }
};

class Derived : public Base
{
public:
    int get_int() override { return 2; }
};

int main()
{
    Base().print_int();    // prints 1
    Derived().print_int(); // prints 2
}
Is print_int() guaranteed to print 1? It is not.
The fact that print_int() is defined in the base class does not guarantee that the object it's called on is not a derived object.
Yes, there will be a performance overhead.
This is due to the fact that virtual functions in an inheritance hierarchy may or may not be overridden by a derived class. This requires a lookup in a v-table, because the base class cannot know at compile time which class dynamically implements the function.
Edit: As mentioned, there may be some optimization, but it shouldn't be relied on.
Virtual functions are implemented by a virtual function table. Each class has a table of the virtual functions' addresses. An instance of a class with a virtual function table has a pointer to the table, which is set by the constructor.
When the code calls a regular function, its address is hard-coded into the call. When it calls a virtual function, it must first load the object's vptr and then load the function's address from the table at a fixed offset (roughly *(*vptr + offset)), which means two dependent memory accesses. That's the overhead, which can be avoided in the cases mentioned by others above.
The point is, if the same function is called repeatedly, much of the overhead might be avoided. The first call brings the virtual function table from RAM into the CPU's cache, so fetching it again costs as little as 1-2 CPU cycles, and the shift-and-add to index the table is cheap. If the compiler knows it's the same function of the same class, it could compute the address once and reuse it.
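To make the mechanics concrete, here is a hand-rolled sketch of roughly what the compiler generates for a virtual call; the real layout is ABI-specific, so treat this only as a model:

#include <cstdio>

struct Shape;                          // forward declaration for the table entry

struct VTable {
    void (*draw)(const Shape*);        // one slot per virtual function
};

struct Shape {
    const VTable* vptr;                // what the compiler hides in each object
};

void draw_circle(const Shape*) { std::puts("circle"); }

const VTable circle_vtable = { &draw_circle };

int main() {
    Shape s{ &circle_vtable };         // the "constructor" sets the vptr
    // A virtual call: load s.vptr, load the slot, then call indirectly.
    s.vptr->draw(&s);
}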
Since the question is "Is there a performance hit?" and not "Can there be a performance hit?", it is surprisingly tricky to answer accurately. This is because compilers are given a lot of leeway when it comes time to optimize your code, and they often make use of it. The process of eliminating that specific overhead has a particular name: devirtualization.
Because of this, whether a cost is incurred or not will depend on:
Which compiler is being used
The compiler version
The compiler settings
The linker settings
How the class is used
Whether there are any subclasses that override the method.
So what should you do with so much uncertainty? Since the overhead is minor in the first place, the first thing is to not worry about it unless you need to improve performance in that specific area. Writing structurally sound and maintainable code trumps premature optimisation every single time.
But if you have to worry about it, then a good rule of thumb is that if you are calling a non-final virtual function from a pointer or reference (which includes this) to a non-final class, then you should write code with the assumption that the tiny overhead associated with an indirect lookup through a vtable will be paid.
That doesn't mean that the overhead will necessarily occur, just that there is a non-zero chance that it will be there.
So in your scenario, given:
#include <iostream>

class Base {
public:
    virtual void foo() { std::cout << "foo is called\n"; }
    void bar() { foo(); }
};

int main() {
    Base b;
    b.bar();
}
Base is not final
Base::foo is not final.
this is a pointer.
So you should operate under the assumption that the overhead might be present, regardless of whether or not it ends up being there in the final built application.
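As an illustrative sketch (not from the question), marking the class or method final is one way to hand the compiler a devirtualization opportunity, though whether a given compiler takes it is not guaranteed:

#include <iostream>

class Base {
public:
    virtual void foo() { std::cout << "Base::foo\n"; }
    void bar() { foo(); }     // may or may not be devirtualized
};

class Leaf final : public Base {   // 'final': no class can derive from Leaf
public:
    void foo() override { std::cout << "Leaf::foo\n"; }
};

void call(Leaf& l) {
    l.foo();   // the dynamic type can only be Leaf, so the compiler
               // is free to call Leaf::foo directly, or even inline it
}

int main() {
    Leaf l;
    call(l);
}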
I work in an environment where I know that child classes of an abstract base class will never be deleted via a pointer to that abstract base class. So I don't see the need for this base class to provide a virtual destructor. Instead, I make the destructor protected, which seems to do what I want.
Here is a simplified example of this:
#include <iostream>

struct Base
{
    virtual void x() = 0;
protected:
    ~Base() = default;
};

struct Child : Base
{
    void x() override
    {
        std::cout << "Child\n";
    }
};

int main()
{
    // new and delete are here to make a simple example;
    // the platform does not provide them
    Child *c = new Child{};
    Base *b = c;
    b->x();
    // delete b; // does not compile, as requested
    delete c;
    return 0;
}
Is it sufficient to make the destructor protected to be safe against unwanted base class deletions, or am I missing something important here?
Is it a good idea to [...]
From a safety point of view I answer with a clear 'no':
Consider the case where the child class is inherited from again – perhaps by someone other than you. That person might overlook that you violated good practice in Base, assume that the destructor of Child is already virtual, and delete a GrandChild via a pointer to Child...
To avoid that situation, you could
make the destructor virtual in Child again – so in the end, nothing gained anyway.
or declare Child final, which imposes limits that are most likely meaningless from any other point of view.
And you'd have to opt for one of these for any derived class. All for avoiding a single virtual function call on object deletion?
How often would you do that at all? If the frequency of object deletion really is an issue, then consider the overhead of allocating and freeing the memory as well. In such a case, it is most likely more efficient to allocate memory just once (sufficiently large and appropriately aligned to hold any of the objects you plan to create), use placement new and explicit destructor calls instead, and finally free the memory just once when you are completely done with it. That will more than compensate for the virtual function call...
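A minimal sketch of that allocate-once pattern (names are illustrative); placement new and explicit destructor calls reuse one buffer for many object lifetimes:

#include <new>        // placement new
#include <cstdio>

struct Widget {
    int value = 0;
    ~Widget() { /* non-trivial cleanup would go here */ }
};

int main() {
    // Allocate once: storage big enough and suitably aligned for a Widget.
    alignas(Widget) unsigned char buffer[sizeof(Widget)];

    for (int i = 0; i < 3; ++i) {
        Widget* w = new (buffer) Widget{};  // construct in place, no allocation
        w->value = i;
        std::printf("%d\n", w->value);
        w->~Widget();                       // explicit destructor call
    }
    // buffer is released when it goes out of scope; no delete anywhere
}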
We use a framework that relies on memcpy in certain functions. To my understanding, I can pass anything that is trivially copyable into these functions.
Now we want to use a simple class hierarchy, but we are not sure whether a class hierarchy can result in trivially copyable types, because of the need for safe destruction. The example code looks like this:
#include <cstdint>

class Timestamp; //...

class Header
{
public:
    uint8_t Version() const;
    const Timestamp& StartTime();
    // ... more simple setters and getters with error checking
private:
    uint8_t m_Version;
    Timestamp m_StartTime;
};

class CanData : public Header
{
public:
    uint8_t Channel();
    // ... more setters and getters with error checking
private:
    uint8_t m_Channel;
};
The base class is used in several similar subclasses. Here I omitted all constructors and destructors, so the classes are trivially copyable. I suppose, though, that a user could write code that results in a memory leak, like this:
void f()
{
    Header* h = new CanData();
    delete h;
}
Is it right that the class hierarchy without the virtual destructor is a problem even if all classes use the compiler's default destructor? Is it therefore right that I cannot have a safe class hierarchy that is trivially copyable?
This code
Header* h = new CanData();
delete h;
will trigger undefined behavior since §5.3.5/p3 states:
In the first alternative (delete object), if the static type of the object to be deleted is different from its dynamic type, the static type shall be a base class of the dynamic type of the object to be deleted and the static type shall have a virtual destructor or the behavior is undefined
and regardless of whether your derived class contains dynamically allocated objects (really bad if it does), you shouldn't do it. Having a class hierarchy without a virtual base class destructor is not a problem per se; it becomes a problem when you mix static and dynamic types with delete.
Doing memcpy on a derived class object smells of bad design to me, I would rather address the need for a "virtual constructor" (i.e. a virtual clone() function in your base class) to duplicate your derived objects.
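A sketch of that "virtual constructor" idea, reusing the question's class names; note that adding virtual functions gives up trivial copyability, so this is the alternative design rather than a way to keep memcpy:

#include <memory>

class Header {
public:
    virtual ~Header() = default;   // deletion is now polymorphic, so virtual
    virtual std::unique_ptr<Header> clone() const {
        return std::make_unique<Header>(*this);
    }
};

class CanData : public Header {
public:
    std::unique_ptr<Header> clone() const override {
        return std::make_unique<CanData>(*this);   // copies the full object
    }
};

int main() {
    std::unique_ptr<Header> h = std::make_unique<CanData>();
    std::unique_ptr<Header> copy = h->clone();     // no slicing, no memcpy
}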
You can have your class hierarchy that is trivially copyable if you make sure that your object, its subobjects and base classes are trivially copyable. If you want to prevent users referring to your derived objects via base classes you could, as Mark first suggested, render the inheritance protected
class Header
{
public:
};

class CanData : protected Header  // <- protected inheritance
{
public:
};

int main() {
    Header *pt = new CanData(); // <- not allowed
    delete pt;
}
Notice that you won't be able to use base pointers at all to refer to derived objects due to §4.10/p3 - pointer conversions.
If you delete a pointer to a derived type held as its base type and you don't have a virtual destructor, the derived type's destructor won't be called, whether it's implicitly generated or not. And whether it's implicitly generated or not, you want it to be called. If the derived type's destructor wouldn't actually do anything anyway, it might not leak anything or cause a problem. But if the derived type holds something like a std::string, std::vector, or anything else with a dynamic allocation, you want the dtors to be called. As a matter of good practice, you always want a virtual destructor for base classes, whether or not the derived classes' destructors need to be called (since a base class shouldn't know what derives from it, it shouldn't make an assumption like this).
If you copy a type like so:
Base* b1 = new Derived;
Base b2 = *b1;
You will only invoke Base's copy ctor. The parts of the object which actually belong to Derived will not be involved; b2 will not secretly be a Derived, it will just be a Base.
My first instinct is "don't do that – find another way, a different framework, or fix the framework". But just for fun, let's assume your class copy doesn't depend in any way on the copy constructor of the class, or of any of its constituent parts, being called.
Then since you're clearly inheriting to implement rather than to substitute the solution is easy: Use protected inheritance and your problem is solved, because they can no longer polymorphically access or delete your object, preventing the undefined behavior.
It's almost safe. In particular, there is no memory leak in
Header* h = new CanData();
delete h;
delete h calls the destructor of Header and then frees the memory pointed to by h. The amount of memory freed is the same as was initially allocated at that address, not sizeof(Header). Since Header and CanData are trivial, their destructors do nothing.
However, you must give the base class a virtual destructor even if it does nothing (the standard requires it to avoid undefined behaviour). A common guideline is that a destructor for a base class must be either public and virtual, or protected and nonvirtual.
Of course, you must beware slicing as usual.
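For completeness, a tiny sketch of the slicing hazard mentioned above, reusing the question's names in stripped-down form:

#include <cstdint>

struct Header  { std::uint8_t version; };
struct CanData : Header { std::uint8_t channel; };

int main() {
    CanData cd{};
    cd.channel = 42;
    Header h = cd;   // slicing: only the Header subobject is copied,
                     // cd.channel is silently dropped
}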
Thanks, all, for posting various suggestions. I'll try to give a summarizing answer with an additional proposal for the solution.
The prerequisite of my question was a class hierarchy that is trivially copyable. See http://en.cppreference.com/w/cpp/concept/TriviallyCopyable and especially the requirement of a trivial destructor (http://en.cppreference.com/w/cpp/language/destructor#Trivial_destructor). The class must not need a user-provided destructor. This restricts the allowed data members, but that is fine for me; the example shows only C-compatible types without dynamic memory allocation.
Some pointed out that the problem of my code is undefined behaviour, not necessarily a memory leak. Marco quoted the standard regarding this. Thanks, really helpful.
From my understanding of the answers, the possible solutions are the following. Please correct me if I am wrong. The key point is that the base class must prevent its destructor from being called through a base class pointer.
Solution 1: Use protected inheritance, as proposed in the answers.
class CanData : protected Header
{
...
};
It works, but it prevents users from accessing the public interface of Header, which was the original intention of having a base class; CanData would need to forward those functions to Header. As a consequence, I would consider using composition instead of inheritance here. But the solution should work.
Solution 2: Make Header's destructor protected, rather than the inheritance.
class Header
{
public:
    uint8_t Version() const;
    const Timestamp& StartTime();
    // ... more simple setters and getters with error checking
protected:
    ~Header() = default;
private:
    uint8_t m_Version;
    Timestamp m_StartTime;
};
Then no user can delete an object through a Header pointer. This is fine for me, because Header has no purpose on its own, and with public derivation the public interface remains available to the user.
My understanding is that CanData need not implement a destructor to invoke the base class's destructor; all classes can use the default destructor. I am not completely sure about this, though.
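That understanding can be checked at compile time. A minimal sketch (Timestamp omitted, assuming all members are themselves trivially copyable):

#include <cstdint>
#include <type_traits>

class Header {
public:
    std::uint8_t Version() const { return m_Version; }
protected:
    ~Header() = default;           // defaulted, not user-provided => still trivial
private:
    std::uint8_t m_Version = 0;
};

class CanData : public Header {
public:
    std::uint8_t Channel() const { return m_Channel; }
private:
    std::uint8_t m_Channel = 0;
};

// CanData's implicit destructor is trivial, so the hierarchy stays memcpy-safe:
static_assert(std::is_trivially_copyable<CanData>::value,
              "CanData must remain trivially copyable");

int main() {}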
All in all, the answers to the questions at the end of my original posting are:
Is it right that the class hierarchy without the virtual destructor is a problem even if all classes use the compiler's default destructor?
It is only a problem if your destructor is public. You must prevent anyone except derived classes from accessing your destructor, and you must ensure that derived classes (implicitly) call the base class's destructor.
Is it therefore right that I cannot have a safe class hierarchy that is trivially copyable?
You can make your base class safe with protected inheritance or a protected destructor. Then you can have a hierarchy of trivially copyable classes.
I find that almost every code snippet featuring a virtual destructor has it as a public member function, like this:
#include <iostream>

class Base
{
public:
    virtual ~Base()
    {
        std::cout << "~Base()" << std::endl;
    }
};

class Derived : public Base
{
public:
    ~Derived() override
    {
        std::cout << "~Derived()" << std::endl;
    }
};
Do virtual destructors have to be public or are there situations where a non-public virtual destructor makes sense?
Do virtual destructors have to be public or are there situations where a non-public virtual destructor makes sense?
Horses for courses. You use a public virtual destructor if you need polymorphic deletion; if not, then your destructor does not need to be virtual at all.
Follow Herb's advice:
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
In brief, then, you're left with one of two situations. Either:
You want to allow polymorphic deletion through a base pointer, in which case the destructor must be virtual and public; or
You don't, in which case the destructor should be nonvirtual and protected, the latter to prevent the unwanted usage.
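A minimal sketch of the two patterns side by side (names are illustrative):

// Pattern 1: polymorphic deletion is part of the interface.
class PolymorphicBase {
public:
    virtual ~PolymorphicBase() = default;   // public and virtual
};

class A : public PolymorphicBase {};

// Pattern 2: deletion through the base is forbidden.
class NonPolymorphicBase {
protected:
    ~NonPolymorphicBase() = default;        // protected and nonvirtual
};

class B : public NonPolymorphicBase {};

int main() {
    PolymorphicBase* p = new A;
    delete p;            // fine: virtual dispatch finds ~A()

    B* b = new B;
    NonPolymorphicBase* n = b;
    // delete n;         // error: destructor inaccessible, caught at compile time
    delete b;            // fine: deleted through the most-derived type
}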
Just like non-virtual destructors, no, they need not be public, but most of the time they are.
If your class is an exception to the rule and needs to take control of the lifetime of its instances for any reason then the destructor has to be non-public. This will affect how clients can (or cannot) utilize instances of the class, but that's of course the whole point. And since the destructor is virtual, the only other option would be virtual protected.
Related: Is there a use for making a protected destructor virtual?
If you plan to create/destroy objects via special methods (for example, create/destroy), a public destructor is not necessary. But if you create your objects on the stack, or delete them directly on the heap, you have to have a public destructor.
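A quick sketch of that create/destroy arrangement (hypothetical names):

class Resource {
public:
    static Resource* create() { return new Resource; }
    static void destroy(Resource* r) { delete r; }  // class code may call delete
private:
    Resource() = default;
    ~Resource() = default;   // private: neither stack instances nor plain
                             // 'delete' compile outside the class
};

int main() {
    // Resource stack_object;          // error: destructor is private
    Resource* r = Resource::create();
    // delete r;                       // error: destructor is private
    Resource::destroy(r);              // the only sanctioned way to destroy
}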
The question here is about a virtual destructor, so I assume the reasons why such an implementation is needed should include inheritance cases as well. The answer depends on the following:
1) You may use a private constructor/destructor if you don't want the class to be instantiated directly, though instantiation can still be done by another method of the same class. So when you want a specific method like MyDestructor() within the class to invoke the destructor, the destructor can still be private.
For example: the Singleton design pattern. In this case it also prevents the class from being inherited from.
2) If the class is intended to be inherited from, a private base class destructor is not allowed (it causes a compile error in the derived destructor), but a protected base class destructor allows inheritance.
3) With public or protected inheritance, a protected virtual destructor allows safe multi-level inheritance A -> B -> C, so that when C is destroyed, the whole chain of destructors runs and memory is cleaned up properly.
4) A private destructor alone does not allow delete (I'm not sure about auto_ptr, but I think even that must adhere to the same rule) when the memory was dynamically allocated using new.
All around, I see that a private destructor may be error-prone, especially when someone who is unaware of such an implementation is about to use the class.
Protected and public destructors are always welcome, and the usage depends on the needs given above. Hope this clarifies things.
There are two separate rules involved here. First, if your design calls for deleting objects of a derived type through a pointer to a base, the destructor in the base must be virtual. Second, if a member function (and by that I broadly include the destructor) is protected or private, then the contexts in which it can be called are more restricted than when it's public (and of course, if the destructor is private, you can't derive from the class). For example:
class C {
protected:
    virtual ~C();
    friend void destroy_me(C*);
};

void destroy_me(C *cp) {
    delete cp; // OK: destructor is accessible to a friend
}

void destroy_someone_else(C *cp) {
    delete cp; // Error: destructor is not accessible
}
Is there ever a good reason to not declare a virtual destructor for a class? When should you specifically avoid writing one?
There is no need to use a virtual destructor when any of the below is true:
No intention to derive classes from it
No instantiation on the heap
No intention to store with access via a pointer to a superclass
No specific reason to avoid it unless you are really so pressed for memory.
To answer the question explicitly, i.e. when should you not declare a virtual destructor.
C++ '98/'03
Adding a virtual destructor might change your class from being POD (plain old data)* or aggregate to non-POD. This can stop your project from compiling if your class type is aggregate initialized somewhere.
struct A {
    // virtual ~A ();
    int i;
    int j;
};

void foo () {
    A a = { 0, 1 }; // Will fail if virtual dtor declared
}
In an extreme case, such a change can also cause undefined behaviour where the class is being used in a way that requires a POD, e.g. passing it via an ellipsis parameter, or using it with memcpy.
void bar (...);

void foo (A & a) {
    bar (a); // Undefined behavior if virtual dtor declared
}
[* A POD type is a type that has specific guarantees about its memory layout. The standard really only says that if you were to copy from an object with POD type into an array of chars (or unsigned chars) and back again, then the result will be the same as the original object.]
Modern C++
In recent versions of C++, the concept of POD was split between the class layout and its construction, copying and destruction.
For the ellipsis case, it is no longer undefined behavior; it is now conditionally-supported with implementation-defined semantics (N3937 - ~C++ '14 - 5.2.2/7):
...Passing a potentially-evaluated argument of class type (Clause 9) having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics.
Declaring a destructor other than =default will mean it's not trivial (12.4/5)
... A destructor is trivial if it is not user-provided ...
Other changes to Modern C++ reduce the impact of the aggregate initialization problem as a constructor can be added:
struct A {
    A(int i, int j);
    virtual ~A ();
    int i;
    int j;
};

void foo () {
    A a = { 0, 1 }; // OK
}
I declare a virtual destructor if and only if I have virtual methods. Once I have virtual methods, I don't trust myself to avoid instantiating it on the heap or storing a pointer to the base class. Both of these are extremely common operations and will often leak resources silently if the destructor is not declared virtual.
A virtual destructor is needed whenever there is any chance that delete might be called on a pointer to an object of a subclass with the type of your class. This makes sure the correct destructor gets called at run time without the compiler having to know the class of an object on the heap at compile time. For example, assume B is a subclass of A:
A *x = new B;
delete x; // ~B() called, even though x has type A*
If your code is not performance critical, it would be reasonable to add a virtual destructor to every base class you write, just for safety.
However, if you find yourself deleting a lot of objects in a tight loop, the performance overhead of calling a virtual function (even an empty one) might be noticeable. The compiler usually cannot inline such calls, and the processor might have a hard time predicting where to go. It is unlikely this would have a significant impact on performance, but it's worth mentioning.
Virtual functions mean that every allocated object grows by the size of a virtual table pointer. So if your program involves allocating a very large number of some object, it can be worth avoiding all virtual functions to save the additional pointer (typically 4 or 8 bytes) per object.
In all other cases, making the dtor virtual will save you debugging misery.
Not all C++ classes are suitable for use as a base class with dynamic polymorphism.
If you want your class to be suitable for dynamic polymorphism, then its destructor must be virtual. In addition, any methods which a subclass could conceivably want to override (which might mean all public methods, plus potentially some protected ones used internally) must be virtual.
If your class is not suitable for dynamic polymorphism, then the destructor should not be marked virtual, because to do so is misleading. It just encourages people to use your class incorrectly.
Here's an example of a class which would not be suitable for dynamic polymorphism, even if its destructor were virtual:
class MutexLock {
    mutex *mtx_;
public:
    explicit MutexLock(mutex *mtx) : mtx_(mtx) { mtx_->lock(); }
    ~MutexLock() { mtx_->unlock(); }
private:
    MutexLock(const MutexLock &rhs);
    MutexLock &operator=(const MutexLock &rhs);
};
The whole point of this class is to sit on the stack for RAII. If you're passing around pointers to objects of this class, let alone subclasses of it, then you're Doing It Wrong.
A good reason not to declare the destructor virtual is when doing so would add a virtual function table to the class, and you should avoid that whenever possible.
I know that many people prefer to just always declare destructors as virtual, just to be on the safe side. But if your class does not have any other virtual functions then there is really, really no point in having a virtual destructor. Even if you give your class to other people who then derive other classes from it then they would have no reason to ever call delete on a pointer that was upcast to your class - and if they do then I would consider this a bug.
Okay, there is one single exception, namely if your class is (mis-)used to perform polymorphic deletion of derived objects, but then you - or the other guys - hopefully know that this requires a virtual destructor.
Put another way, if your class has a non-virtual destructor then this is a very clear statement: "Don't use me for deleting derived objects!"
If you have a very small class with a huge number of instances, the overhead of a vtable pointer can make a difference in your program's memory usage. As long as your class doesn't have any other virtual methods, making the destructor non-virtual will save that overhead.
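A tiny sketch showing the size difference (exact numbers are implementation-defined; on a typical 64-bit platform this prints 4 and 16, because of the vptr plus padding):

#include <cstdio>

struct Plain    { int x; };                         // no vptr
struct WithVtbl { int x; virtual ~WithVtbl() {} };  // vptr added by the compiler

int main() {
    std::printf("Plain:    %zu bytes\n", sizeof(Plain));
    std::printf("WithVtbl: %zu bytes\n", sizeof(WithVtbl));
}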
I usually declare the destructor virtual, but if you have performance critical code that is used in an inner loop, you might want to avoid the virtual table lookup. That can be important in some cases, like collision checking. But be careful about how you destroy those objects if you use inheritance, or you will destroy only half of the object.
Note that an object carries a vtable pointer if any of its methods is virtual, so there is no point in removing the virtual specifier from the destructor if the class has other virtual methods.
If you absolutely positively must ensure that your class does not have a vtable then you must not have a virtual destructor as well.
This is a rare case, but it does happen.
The most familiar examples of this pattern are the DirectX D3DVECTOR and D3DMATRIX classes. These use member functions instead of free functions purely for the syntactic sugar, but the classes intentionally have no vtable, in order to avoid the overhead, because they are used in the inner loops of many high-performance applications.
Any operation that will be performed through the base class, and that should behave virtually, should be virtual. If deletion can be performed polymorphically through the base class interface, then it must behave virtually and be virtual.
The destructor has no need to be virtual if you don't intend to derive from the class. And even if you do, a protected non-virtual destructor is just as good if deletion of base class pointers isn't required.
The performance answer is the only one I know of which stands a chance of being true. If you've measured and found that de-virtualizing your destructors really speeds things up, then you've probably got other things in that class that need speeding up too, but at this point there are more important considerations. Some day someone is going to discover that your code would provide a nice base class for them and save them a week's work. You'd better make sure they do that week's work, copying and pasting your code, instead of using your code as a base. You'd better make sure you make some of your important methods private so that no one can ever inherit from you.