Why is the default destructor for an abstract class not virtual? - c++

Consider
class A
{
public:
    virtual void foo() = 0;
};
At this point it is absolutely obvious that A is an abstract class and will never be instantiated on its own. So why doesn't the standard demand that the automatically generated destructor be virtual as well?
I ask myself this question every time I need to define a dummy virtual destructor in my interface classes, and I can't see why the committee didn't do this.
So the question: why is the generated destructor in an abstract class not virtual?
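For concreteness, here is the kind of trap that the dummy virtual destructor guards against (a minimal sketch; the names are hypothetical):

#include <memory>

struct A {
    virtual void foo() = 0;   // the implicitly generated ~A() is NOT virtual
};

struct B : A {
    void foo() override {}
};

int main() {
    std::unique_ptr<A> p(new B);  // deletion happens through A*:
                                  // undefined behaviour without virtual ~A()
}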

Because in C++ you don't pay for what you don't need, and a virtual destructor adds overhead (even in already polymorphic classes) that isn't needed in many cases. For example you might not need polymorphic destruction and choose to have a protected destructor instead.
Further, as an alternative scenario, imagine that you have a class with a virtual method and that you do rely on polymorphic destruction; under the proposed rule the destructor is implicitly virtual. Now imagine that the virtual method is no longer needed and is removed, while polymorphic destruction is still needed. Now you have to remember to go back and add a virtual destructor yourself, or suffer undefined behavior.
Finally, I think it would be hard to justify making the default virtualness of the destructor (and it alone) depend on whether a class is polymorphic, rather than always and consistently making the destructor non-virtual unless requested otherwise.

A virtual destructor means an indirect call through the vtable every time an object of the class is destroyed. That is a rather small overhead, but C++ tries to save as much time as possible. In any case, being explicit is always better than trusting implicit compiler magic.
C++'s motto: "Trust the programmer".

When the C++ standard was written, it was written with the understanding that it would be used on a variety of platforms, some of which might have tight memory constraints. Adding virtual dispatch increases overhead. That is why every method (and the destructor) must be explicitly made virtual by the programmer, and only when polymorphism is actually required.
Now the question becomes: why can't the standard make the default destructor of an abstract class virtual? Don't you think it would be strange, and confusing, for abstract classes to follow a different rule? And what about the (however rare) case where you don't need the destructor to be virtual, so as to save memory? Why waste the memory?

Can the 'virtual' keyword be optimized away if no class re-implements it?

When I define a class in C++ I always define the dtor as virtual.
This is my way of protecting myself in case I later write an inheriting class.
I wonder whether I pay the performance overhead even when I never inherit from the class.
For example:
#include <cstdio>

class A final
{
public:
    A();
    virtual ~A() { printf("dtor"); }
};
When I use this class, will the dtor actually get called through the vtable, or will the call be resolved statically (a direct call)?
When I define a class in C++ I always define the dtor as virtual.
This is very bad practice. Classes should either be designed to be polymorphic... or not. It's not just an issue of design either - polymorphism adds overhead.
Now, when good compilers see delete a;, they will remove the virtual call and directly call ~A() if they can prove that a only ever points to an A. This is called devirtualization. But what they won't do is remove the vtable. Adding unnecessary polymorphism means all your types now have vtables, which means they all use extra space. In your simple example, the presence of virtual increases sizeof(A) from 1 to 8 (on a typical 64-bit platform). If you have a lot of As, you're now messing with cache effects. This is bad.
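To see that growth for yourself, here is a minimal sketch (the exact sizes are implementation-defined; 1 and 8 are what you would typically see on a 64-bit platform):

#include <cstdio>

struct Plain {};                      // no virtual members, no vtable
struct Poly { virtual ~Poly() {} };   // vtable pointer embedded in every object

int main() {
    // Typically prints "1 8": the virtual destructor costs a pointer
    // per object, exactly the growth described above.
    std::printf("%zu %zu\n", sizeof(Plain), sizeof(Poly));
}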
In short, design your classes according to their use. Not according to some problems that you may or may not eventually have if they are misused.
This is my way of protecting myself in case I later write an inheriting class.
Note also that not all inheritance must be polymorphic - not even classes that are intended to be inherited from need to have a virtual destructor. That's only necessary if the usage is to hold onto a Base* and then delete it. It's perfectly safe for me to inherit from something like std::vector<> to provide a different interface - as long as I'm not trying to delete my inherited type through std::vector<>.
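For example, a sketch of that kind of non-polymorphic reuse (IntStack is a hypothetical name):

#include <vector>

// Plain, non-polymorphic inheritance: fine as long as the object is
// never deleted through a std::vector<int>*.
class IntStack : public std::vector<int> {
public:
    void push(int v) { push_back(v); }
    int pop() { int v = back(); pop_back(); return v; }
};

int main() {
    IntStack s;       // created and destroyed as an IntStack: safe
    s.push(42);
    return s.pop() == 42 ? 0 : 1;
}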
On the other hand, this
class A final { ... };
is good practice! If A isn't intended to be inherited from, explicitly make it ill-formed to inherit from it. Then, when you do need to inherit from A, you have to make a conscious effort to think about the consequences of doing so.
As soon as you declare the class final, it cannot be used as a base class for any other one, so the virtual does not make sense there.
Because of the as-if rule, the compiler is then free to ignore the virtual keyword, but it is not required to do so. BTW, the mere existence of a vtable is an implementation detail and is not required by the standard.
TL;DR: it depends on the compiler implementation.

Could the implicit destructor of a polymorphic class be made virtual?

As far as I'm aware, it is always a mistake (or at the very least, asking for trouble) to define a class with virtual functions but a non-virtual destructor.
As such (and thinking about the newly-coined "rule of zero"), it seems to me that the implicitly generated destructor should automatically be virtual for any class with at least one other virtual function.
Would it be feasible for some future version of the C++ standard to mandate this? Or to put it another way, are there any good reasons to keep the default destructor non-virtual in a polymorphic class?
EDIT: Just to make it clear, I'm only suggesting what might happen if you don't write a destructor -- if you do write your own, you of course get to choose whether it's virtual or not, as ever. I'd just like to see the default match the common case (without preventing more advanced usage).
If you don't want or need to delete such objects polymorphically, the destructor need not be virtual. Instead it can be protected and non-virtual in the base class, allowing the object to be deleted only non-polymorphically. Requiring it to be automatically virtual would impose an undue cost on applications that don't need polymorphic destruction.
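A minimal sketch of that pattern (Shape and Circle are hypothetical names): the protected non-virtual destructor turns accidental polymorphic deletion into a compile-time error instead of undefined behaviour:

class Shape {
public:
    virtual void draw() const = 0;
protected:
    ~Shape() {}               // non-virtual, inaccessible to client code
};

class Circle : public Shape {
public:
    void draw() const override {}
};

int main() {
    Circle c;
    Shape* s = &c;
    s->draw();                // polymorphic use: fine
    // delete s;              // error: Shape::~Shape() is protected
}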

Can a virtual destructor ever be a bad thing?

I always give my classes a virtual destructor even when it is not needed. Other than a possible small performance hit, could there be a situation where having a virtual destructor when you do not need one causes memory errors or something terrible?
Thanks
It’s a fundamental flaw to make all classes extensible. Most classes are simply not suitable to be inherited from and it makes no sense to facilitate this if you don’t design classes for extension up front.
This is just misleading the users of your API who will take this as a hint that the class is meaningfully inheritable. In reality, this is rarely the case and will bring no benefit, or break code in the worst case.
Once you make a class inheritable, you’re stuck with it for the rest of your life: you cannot change its interface, and you must never break the (implicit!) semantics it has. Essentially, the class is no longer yours.
On the other hand, inheritance is way overrated anyway. Apart from its use for public interfaces (pure virtual classes), you should generally prefer composition over inheritance.
Another, more fundamental case where a virtual destructor is undesirable is when the code you have requires a POD to work – such as when using it in a union, when interfacing with C code or when performing POD-specific optimisations (only PODs are blittable, meaning they can be copied very efficiently).
[Hat tip to Andy]
A word about performance overhead: there are situations in which lots of small objects are created, or objects are created in tight loops. In those cases, the overhead of virtual destructors can amount to a crucial performance hit.
Classes which have virtual function tables are also larger than classes without, which can also lead to unnecessary performance impact.
All in all, there are no compelling reasons to make destructors virtual, and some compelling reasons not to.
There's no point in declaring it virtual if you don't plan on inheriting from the class (or if the class is not meant to be inherited from).
On the other hand, if you want to access this class polymorphically, then yes, virtual destructor is a good thing to have.
But to answer your question precisely: it cannot cause any "terrible memory errors", and marking it virtual all the time can't really hurt you.
But I see no reason to use a virtual destructor all the time. It's up to you.
Also, this post by Herb sheds some light on the matter.
No, AFAIK. The virtual destructor either behaves exactly the same way as the non-virtual one (i.e. the virtual and the direct call invoke the same function), or you already had undefined behavior. So you cannot "do something terrible" by changing a non-virtual destructor to a virtual one.
It can, however, expose errors caused by other parts of the code, ie. when you accidentally overwrite the virtual table pointer of an object.

Are there any specific reasons to use non-virtual destructors?

As far as I know, any class that is designed to have subclasses should be declared with a virtual destructor, so class instances can be destroyed properly when accessed through base class pointers.
But why is it even possible to declare such a class with a non-virtual destructor? I believe the compiler could decide when to use virtual destructors. So, is it a C++ design oversight, or am I missing something?
Are there any specific reasons to use non-virtual destructors?
Yes, there are.
Mainly, it boils down to performance. A virtual function cannot be inlined; instead you must first determine the correct function to invoke (which requires runtime information) and then invoke that function.
In performance sensitive code, the difference between no code and a "simple" function call can make a difference. Unlike many languages C++ does not assume that this difference is trivial.
But why it's even possible to declare such class with non-virtual destructor?
Because it is hard to know (for the compiler) if the class requires a virtual destructor or not.
A virtual destructor is required when you invoke delete on a base class pointer that points to a derived object.
When the compiler sees the class definition:
it cannot know that you intend to derive from this class -- you can after all derive from classes without virtual methods
but even more daunting: it cannot know that you intend to invoke delete on this class
Many people assume that polymorphism requires newing the instance, which is just sheer lack of imagination:
#include <iostream>

class Base {
public:
    virtual void foo() const = 0;
protected:
    ~Base() {} // non-virtual on purpose: no polymorphic deletion supported
};

class Derived : public Base {
public:
    virtual void foo() const { std::cout << "Hello, World!\n"; }
};

void print(Base const& b) { b.foo(); }

int main() {
    Derived d;
    print(d);
}
In this case, there is no need to pay for a virtual destructor because there is no polymorphism involved at the destruction time.
In the end, it is a matter of philosophy. Where practical, C++ opts for performance and minimal service by default (the main exception being RTTI).
With regards to warning. There are two warnings that can be leveraged to spot the issue:
-Wnon-virtual-dtor (gcc, Clang): warns whenever a class with virtual functions does not declare a virtual destructor, unless the destructor in the base class is made protected. It is a pessimistic warning, but at least you do not miss anything.
-Wdelete-non-virtual-dtor (Clang, ported to gcc too): warns whenever delete is invoked on a pointer to a class that has virtual functions but no virtual destructor, unless the class is marked final. It has a 0% false positive rate, but warns "late" (and possibly several times).

Why are destructors not virtual by default?

http://www2.research.att.com/~bs/bs_faq2.html#virtual-dtor
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
http://www.gotw.ca/publications/mill18.htm
See also: http://www.erata.net/programming/virtual-destructors/
EDIT: possible duplicate? When should you not use virtual destructors?
Your question is basically this, "Why doesn't the C++ compiler force your destructor to be virtual if the class has any virtual members?" The logic behind this question is that one should use virtual destructors with classes that they intend to derive from.
There are many reasons why the C++ compiler doesn't try to out-think the programmer.
C++ is designed on the principle of getting what you pay for. If you want something to be virtual, you must ask for it. Explicitly. Every function in a class that is virtual must be explicitly declared so (unless it's overriding a base class version).
If the destructor for a class with virtual members were automatically made virtual, how would you choose to make it non-virtual if that's what you desired? C++ doesn't have the ability to explicitly declare a method non-virtual. So how would you override this compiler-driven behavior?
Is there a particular valid use case for a virtual class with a non-virtual destructor? I don't know. Maybe there's a degenerate case somewhere. But if you needed it for some reason, you wouldn't be able to say it under your suggestion.
The question you should really ask yourself is why more compilers don't issue warnings when a class with virtual members doesn't have a virtual destructor. That's what warnings are for, after all.
A non-virtual destructor seems to make sense when a class is simply not polymorphic at all (Note 1).
However, I do not see any other good use for non-virtual destructors.
And I appreciate that question. Very interesting question!
EDIT:
Note 1:
In performance-critical cases, it may be favourable to use classes without any virtual function table and thus without any virtual destructors at all.
For example: think about a class Vector3 that contains just three floating-point values. If the application stores an array of them, then that array can be stored in a compact fashion.
If we required a virtual function table, and if we even required storage on the heap (as in Java & co.), then the array would just contain pointers to the actual elements "somewhere" in memory.
EDIT 2:
We may even have an inheritance tree of classes without any virtual methods at all.
Why?
Because, even if having "virtual" methods may seem to be the common and preferable case, it IS NOT the only case we can imagine.
As in many details of that language, C++ offers you a choice. You can choose one of the provided options, usually you will choose the one that anyone else chooses. But sometimes you do not want that option!
In our example, a class Vector3 could inherit from class Vector2 and still would not have the overhead of virtual function calls. Though, that example is not very good ;)
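Still, a minimal sketch of the idea (sizes are implementation-defined; 8 and 12 are typical):

#include <cstdio>

// With no virtual functions anywhere in the hierarchy, the objects stay
// as compact as plain float storage: inheritance adds no vtable pointer.
struct Vector2 { float x, y; };
struct Vector3 : Vector2 { float z; };

int main() {
    // Typically prints "8 12" on common implementations.
    std::printf("%zu %zu\n", sizeof(Vector2), sizeof(Vector3));
}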
Another reason I haven't seen mentioned here are DLL boundaries: You want to use the same allocator to free the object that you used to allocate it.
If the methods live in a DLL, but the client code instantiates the object with a direct new, then the client's allocator is used to obtain the memory for the object, but the object is filled in with the vtable from the DLL, which points to a destructor that uses the allocator the DLL is linked against to free the object.
When subclassing classes from the DLL in the client, the problem goes away as the virtual destructor from the DLL is not used.

Should I use virtual 'Initialize()' functions to initialize an object of my class?

I'm currently having a discussion with my teacher about class design and we came to the point of Initialize() functions, which he heavily promotes. Example:
class Foo {
public:
    Foo()
    { // acquire light-weight resources only / default initialize
    }

    virtual void Initialize()
    { // do allocation, acquire heavy-weight resources, load data from disk
    }

    // optionally provide a Destroy() function
    // virtual void Destroy() { /*...*/ }
};
Everything with optional parameters of course.
Now, he also puts emphasis on extendability and usage in class hierarchies (he's a game developer and his company sells a game engine), with the following arguments (taken verbatim, only translated):
Arguments against constructors:
can't be overridden by derived classes
can't call virtual functions
Arguments for Initialize() functions:
derived class can completely replace initialization code
derived class can do the base class initialization at any time during its own initialization
I have always been taught to do the real initialization directly in the constructor and to not provide such Initialize() functions. That said, I for sure don't have as much experience as he does when it comes to deploying a library / engine, so I thought I'd ask at good ol' SO.
So, what exactly are the arguments for and against such Initialize() functions? Does it depend on the environment where it is used? If yes, please provide reasoning for library / engine developers or, if you can, even game developers in general.
Edit: I should have mentioned, that such classes will be used as member variables in other classes only, as anything else wouldn't make sense for them. Sorry.
For Initialize: exactly what your teacher says, but in well-designed code you'll probably never need it.
Against: non-standard, and it may defeat the purpose of a constructor if used spuriously. More importantly: the client needs to remember to call Initialize. So, either instances will be in an inconsistent state upon construction, or they need lots of extra bookkeeping to prevent client code from calling anything else:
void Foo::im_a_method()
{
    if (!fully_initialized)
        throw Uninitialized("Foo::im_a_method called before Initialize");
    // do actual work
}
The only way to prevent this kind of code is to start using factory functions. So, if you use Initialize in every class, you'll need a factory for every hierarchy.
In other words: don't do this if it's not necessary; always check if the code can be redesigned in terms of standard constructs. And certainly don't add a public Destroy member, that's the destructor's task. Destructors can (and in inheritance situations, must) be virtual anyway.
I"m against 'double initialization' in C++ whatsoever.
Arguments against constructors:
can't be overridden by derived classes
can't call virtual functions
If you have to write such code, it means your design is wrong (e.g. MFC). Design your base class so all the necessary information that can be overridden is passed through the parameters of its constructor, so the derived class can override it like this:
Derived::Derived() : Base(GetSomeParameter())
{
}
This is a terrible, terrible idea. Ask yourself- what's the point of the constructor if you just have to call Initialize() later? If the derived class wants to override the base class, then don't derive.
When the constructor finishes, it should make sense to use the object. If it doesn't, you've done it wrong.
One argument for preferring initialization in the constructor: it makes it easier to ensure that every object has a valid state. Using two-phase initialization, there's a window where the object is ill-formed.
One argument against using the constructor is that the only way of signalling a problem is through throwing an exception; there's no ability to return anything from a constructor.
Another plus for a separate initialization function is that it makes it easier to support multiple constructors with different parameter lists.
As with everything this is really a design decision that should be made with the specific requirements of the problem at hand, rather than making a blanket generalization.
A voice of dissension is in order here.
You might be working in an environment where you have no choice but to separate construction and initialization. Welcome to my world. Don't tell me to find a different environment; I have no choice. The preferred embodiment of the products I create is not in my hands.
Tell me how to initialize some aspects of object B with respect to object C, other aspects with respect to object A; some aspects of object C with respect to object B, other aspects with respect to object A. The next time around the situation may well be reversed. I won't even get into how to initialize object A. The apparently circular initialization dependencies can be resolved, but not by the constructors.
Similar concerns go for destruction versus shutdown. The object may need to live past shutdown, it may need to be reused for Monte Carlo purposes, and it might need to be restarted from a checkpoint dumped three months ago. Putting all of the deallocation code directly in the destructor is a very bad idea because it leaks.
Forget about the Initialize() function - that is the job of the constructor.
When an object is created, if the construction passed successfully (no exception thrown), the object should be fully initialized.
While I acknowledge the claimed downsides of doing initialization exclusively in the constructor, I think those are actually signs of bad design.
A deriving class should not need to override base class initialization behaviour entirely. This is a design flaw which should be cured, rather than introducing Initialize()-functions as a workaround.
Not calling Initialize may be easy to do accidentally and won't give you a properly constructed object. It also doesn't follow the RAII principle since there are separate steps in constructing/destructing the object: What happens if Initialize fails (how do you deal with the invalid object)?
By forcing default initialization you may end up doing more work than doing initialization in the constructor proper.
Ignoring the RAII implications, which others have adequately covered, a virtual initialization method greatly complicates your design. You can't have any private data, because for the ability to override the initialization routine to be at all useful, the derived object needs access to it. So now the class's invariants are required to be maintained not only by the class, but by every class that inherits from it. Avoiding that sort of burden is part of the point behind inheritance in the first place, and the reason constructors work the way they do with regard to subobject creation.
Others have argued at length against the use of Initialize; I myself see one use: laziness.
For example:
File file("/tmp/xxx");
foo(file);
Now, if foo never uses file (after all), then it's completely unnecessary to try and read it (and would indeed be a waste of resources).
In this situation, I support Lazy Initialization, however it should not rely on the client calling the function, but rather each member function should check if it is necessary to initialize or not. In this example name() does not require it, but encoding() does.
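A sketch of what that internal laziness can look like (File, name(), and encoding() are stand-ins from the example above; the actual file reading is elided):

#include <string>
#include <utility>

class File {
public:
    explicit File(std::string path) : path_(std::move(path)) {}

    const std::string& name() const { return path_; }   // cheap: no read needed

    const std::string& encoding() const {
        if (!loaded_) load();                            // initialise lazily, on demand
        return encoding_;
    }

private:
    void load() const {
        // ... open path_ and read the header here (elided) ...
        encoding_ = "utf-8";                             // placeholder value
        loaded_ = true;
    }

    std::string path_;
    mutable std::string encoding_;
    mutable bool loaded_ = false;
};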
Only use an initialize function if you don't have the data available at the point of creation.
For example, you're dynamically building a model of data, and the data that determines the object hierarchy must be consumed before the data that describes object parameters.
If you use it, then you should make the constructor private and use factory methods instead that call the initialize() method for you. For example:
#include <memory>

class MyClass
{
public:
    static std::unique_ptr<MyClass> Create()
    {
        // The factory is the only way to obtain a MyClass, so initialize()
        // is guaranteed to have run before any client touches the object.
        std::unique_ptr<MyClass> result(new MyClass);
        result->initialize();
        return result;
    }
private:
    MyClass();
    void initialize();
};
That said, initializer methods are not very elegant, but they can be useful for the exact reasons your teacher said. I would not consider them 'wrong' per se. If your design is good then you probably will never need them. However, real-life code sometimes forces you to make compromises.
Some members simply must have values at construction (e.g. references, const values, objects designed for RAII without default constructors)... they can't be constructed in the initialise() function, and some can't be reassigned then.
So, in general it's not a choice of constructor vs. initialise(), it's a question of whether you'll end up having code split between the two.
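A minimal sketch of such members (the names are hypothetical):

#include <string>

struct Widget {
    const int id;        // const member: can never be assigned after construction
    std::string& log;    // reference member: must bind at construction

    // Members like these can only ever be set in the constructor's
    // initialiser list, never in a later initialise() call.
    Widget(int id_, std::string& log_) : id(id_), log(log_) {}
};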
For the derived class to initialise bases and members later implies they're not private; if you go so far as to make bases/members non-private for the sake of delaying initialisation, you break encapsulation - one of the core principles of OOP. Breaking encapsulation prevents the base class developer(s) from reasoning about the invariants the class should protect; they can't develop their code without risking breaking derived classes - which they might not have visibility into.
Other times it's possible but inefficient: you must default-construct a base or member with a value you'll never use, then assign it a different value soon after. The optimiser may help - particularly if both functions are inlined and called in quick succession - but it may not.
[constructors] can't be overridden by derived classes
...so you can actually rely on them doing what the base class needs...
[constructors] can't call virtual functions
The CRTP allows derived classes to inject functionality - that's typically a better option than a separate initialise() routine, being faster.
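A minimal CRTP sketch (Widget is a hypothetical name): the base class reaches derived-class behaviour during construction with no virtual dispatch:

#include <iostream>

template <typename Derived>
class Base {
protected:
    // Using a static member of Derived avoids touching the
    // not-yet-constructed derived object during Base construction.
    Base() { std::cout << "constructing " << Derived::name() << '\n'; }
};

class Widget : public Base<Widget> {
public:
    static const char* name() { return "Widget"; }
};

int main() { Widget w; }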
Arguments for Initialize() functions:
derived class can completely replace initialization code
I'd say that's an argument against, as above.
derived class can do the base class initialization at any time during its own initialization
That's flexible but risky - if the base class isn't initialised, the derived class could easily end up (due to an oversight during the evolution of the code) calling something that relies on that base being initialised, and consequently fail at run time.
More generally, there's the question of reliable invocation, usage and error handling. With initialise, client code has to remember to call it with failures evident at runtime not compile time. Issues may be reported using return types instead of exceptions or state, which can sometimes be better.
If initialise() needs to be called to set, say, a pointer to nullptr or a value that is safe for the destructor to delete, but some other data member or code throws first, all hell breaks loose.
initialise() also forces the object to be non-const in the client code, even if the client just wants to create an initial state and ensure it won't be further modified - basically you've thrown const-correctness out the window.
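A minimal sketch of that (Config is a hypothetical name):

struct Config {
    void initialise();   // must mutate *this, so it cannot be a const member
};

int main() {
    const Config c{};    // the client wants an immutable, fully set-up object...
    // c.initialise();   // ...but this would not compile: c is const
    (void)c;
}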
Code doing things like p_x = new X(values, for, initialisation);, f(X(values, for, initialisation));, or v.push_back(X(values, for, initialisation)); won't be possible - forcing verbose and clumsy alternatives.
If a destroy() function is also used, many of the above problems are exacerbated.