Can an empty destructor cause harm? [duplicate] - c++

This question already has answers here:
Will an 'empty' constructor or destructor do the same thing as the generated one?
I'm new to C++, and one of the concepts I'm working on understanding is destructors. Out of curiosity, can an unnecessary (e.g., when a class has no dynamically allocated memory, resources, or anything requiring a user-defined destructor) and empty destructor cause any unforeseen problems?
Edit: I know that part of this has been answered in Will an 'empty' constructor or destructor do the same thing as the generated one? but I wanted to broaden it to ask more about generalized negative consequences such as crashes or making an application slower. There is some overlap, but it is a slightly different question.

The question depends on several parameters, and emptiness isn't the only one that affects the result. For example, if you don't define a virtual destructor (empty or not), you'll get problematic behavior when deleting a derived object through a pointer to the base class. On the other hand, if you declare an empty destructor in a private or protected section, it will prevent creating instances of the class on the stack.
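For illustration, a minimal sketch of both effects (all class names are made up):

struct Base {
    ~Base() {}                       // not virtual
};

struct Derived : Base {
    ~Derived() {}                    // never runs in the delete below
};

class HeapOnly {
public:
    static HeapOnly* create() { return new HeapOnly; }
    void destroy() { delete this; }  // members may access the private destructor
private:
    ~HeapOnly() {}                   // private: "HeapOnly h;" will not compile
};

int main() {
    Base* b = new Derived;
    delete b;                        // undefined behavior: ~Base() is not virtual

    // HeapOnly h;                   // error: ~HeapOnly() is private
    HeapOnly* p = HeapOnly::create();
    p->destroy();
}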

There is also an interesting aspect (which does not seem to be covered in the linked duplicate): the triviality of the destructor. Compiler-generated (or defaulted) destructors are considered trivial, and having a trivial destructor is a prerequisite for your class being a POD type. A user-defined destructor, even if empty, prevents your class from being a POD type.
And being a POD type is sometimes very important. For example, POD types can be copied with memcpy or serialized as raw bytes.
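This is easy to check with a type trait; a minimal sketch (C++11, struct names made up):

#include <type_traits>

struct Trivial {
    int x;                           // compiler-generated destructor: trivial
};

struct NotTrivial {
    int x;
    ~NotTrivial() {}                 // user-provided, even though empty
};

static_assert( std::is_trivially_destructible<Trivial>::value,    "trivial");
static_assert(!std::is_trivially_destructible<NotTrivial>::value, "not trivial");

Note that writing ~NotTrivial() = default; inside the class instead would keep the destructor trivial.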

No, all members of the instance are still destroyed after your destructor has run. The only thing a destructor must not do is let an exception escape; otherwise it may do anything a "proper" method can do, and doing nothing at all is fine. Not closing handles when you should have is another question.

No. In fact, if you don't declare and write a destructor for a class or struct, the compiler will do it for you - and it will be empty.

"Empty destructor" is a bit of a misnomer. Whether or not your destructor has a body, the compiler will still generate code to call the destructor of every non-static member variable and base class in reverse order of declaration. You only need a body if you wish to do something before those other destructors get called.

Related

Why does the deleting destructor occupy a second vtable slot besides the ordinary destructor?

In C++ ABI implementations modeled after the Itanium C++ ABI, which is followed by many ABIs for other processors, virtual destructors actually occupy two vtable slots. Besides the "complete object destructor", which does what you would expect, there is a second entry for the "deleting destructor", which calls the first, and then deletes the memory of the object.
There is a problem with this approach, which can be a nuisance in small memory systems: The dynamic memory manager is linked in, even when no other code uses it. This is dead code when there is no call to delete anywhere in the application. This is because the C++ compiler/linker usually isn't able to detect that a slot in the vtable isn't called from anywhere, and hence to remove the associated code. Clearly, it would be better if the deleting destructor could be implemented in a different way that doesn't involve a vtable entry, and allows the compiler/linker to omit this dead code.
One can of course implement a custom void operator delete(void *) {} to prevent the linker from bringing in the dynamic memory code, but this still doesn't prevent the deleting destructor code from being emitted.
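For reference, a sketch of such a replacement, assuming a build that never actually frees memory:

#include <cstddef>

// Do-nothing replacements keep the heap manager's deallocation path
// out of the link, but the deleting-destructor thunks are still emitted.
void operator delete(void*) noexcept {}
void operator delete(void*, std::size_t) noexcept {}  // sized form, C++14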
Hence my question: Is there no better way to implement deleting destructors? My idea would have been to return the pointer to the start of the memory block to delete from the complete object destructor. If the memory block is to be deleted after destruction, this returned address can be used by a nonvirtual function that calls operator delete. Essentially, having the memory address returned by the complete object destructor would allow the deleting destructor to be nonvirtual, and therefore eligible for dead code elimination.
But I guess I must have overlooked something, which makes that rather simple solution impossible. But what would that be? Can someone expound the design decision in the Itanium ABI for me?
Edit: I have found information that provides a partial answer here:
The top answer contains this explanation:
When some class defines its own operator delete, the selection of a specific operator delete to call is done as if it was looked up from inside the class destructor. The end result of that is that for classes with virtual destructor operator delete behaves as if it were a virtual function (despite formally being a static member of the class).
Apparently, the way chosen by the Itanium ABI to make it behave like a virtual function is to make the destructor that calls it an actual virtual function.
However, that is not the only way to implement it. The linked article centers around an implementation that uses a single virtual function with a hidden parameter, but that solution produces the same undesirable behaviour I was describing above. Another implementation might be to have the complete object destructor return the address of the operator delete() if there is a custom implementation for the class, and nullptr otherwise. This would avoid the problem I described above.
So, in a somewhat modified form, my question still stands.
Unlike a normal virtual function call, destroying a class requires calling its destructor and the destructors of all its parents in the correct order.
It is technically possible to unite all the destructor calls into a single function call, but that would require knowledge of the implementation of each destructor or the full structure of the object; essentially de-virtualizing the call. Compilers aren't good at that. I'm not too sure of all the details, as it is quite a complex question once you consider all the possible heavily-virtual objects.

Throwing exception in destructor of derived class

I am compiling a C++ library, and I have an issue with destructors and noexcept.
It is my understanding that, since C++11, the default exception specification of destructors has changed. In particular, destructors now default to noexcept (that is, noexcept(true)).
Now, I have a library for which I get many compiler warnings because the destructor of some classes can throw but is not marked noexcept(false). So I decided to change that and add the specification. However, these classes inherit from a common parent Base, so this forced me to add it also to the parent class destructor, otherwise I get the compiler error overriding 'virtual Base::~Base() noexcept'.
Of course, I can add noexcept(false) also to the base class (actually, it is enough to add it there, as far as I understand), but there are lots of classes that inherit from Base, and they would all inherit the noexcept(false) attribute (as far as I understand). However, the base class destructor itself will never throw, and only few of the derived classes can actually throw. Therefore, it appears a waste to mark the base class destructor noexcept(false) only for a few classes.
So, I have two questions:
1) Is there any way around this? I cannot remove the throws in the few derived classes that throw, since they are important.
2) I am not sure how much the compiler optimization could be degraded due to the addition of noexcept(false) to the base class of (virtually) all the classes in the library. When (if ever) should this concern me?
Edit: on a side note, is there a way to change the default value for noexcept inside a project? Perhaps a compiler option?
Edit: since there have been many feedback, I owe an edit, with a few remarks:
1) I did not write the library, and I certainly do not plan to rewrite the destructors of many classes. Having to add noexcept(false) to some base classes is already enough of a pain. And resorting to a different library is not an option (for reasons that go beyond programming).
2) When I said "I cannot remove the throws in the few derived classes that throw, since they are important" I meant that I believe the code should terminate in those cases, since something really bad happened and undefined behavior may follow anyway. The throw at least comes with a little explanation of what happened.
3) I understand that throwing in a destructor of a derived class is bad, since it may leak due to the fact that the parent class' destructor is not called (is this correct?). What would be a clean way around if one ought to terminate the program (to avoid obscure bugs later) and still let the user know what happened?
If the destructor is virtual, the derived destructors can't be noexcept(false) unless the base destructor is also noexcept(false).
Think about it: the point of virtual functions is that the derived class can be called even if the caller only knows about the base class. If something calls delete on a Base*, the call may be compiled without any exception-handling code if Base promises that its destructor won't throw exceptions. If the pointer points to a Derived instance whose destructor actually does throw an exception, that would be bad.
Try to change the derived destructors so they don't throw exceptions. If that's not possible, add noexcept(false) to the base destructor.
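A minimal sketch of that second option (the names and the throwing condition are made up):

#include <stdexcept>

struct Base {
    virtual ~Base() noexcept(false) {}  // potentially-throwing, so that...
};

struct Derived : Base {
    bool bad_state = false;             // stand-in for "something really bad happened"
    ~Derived() noexcept(false) {        // ...this override is allowed to throw
        if (bad_state)
            throw std::runtime_error("invariant violated in ~Derived");
    }
};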
I am not sure how much the compiler optimization could be degraded due to the addition of noexcept(false) to the base class of (virtually) all the classes in the library. When (if ever) should this concern me?
You should not be concerned about compiler optimizations on the basis of noexcept declarations on destructors. Most of the optimizations around noexcept declarations come from code that detects that an operation is noexcept and does something more efficient based on that. That is, explicit metaprogramming.
Why doesn't that matter for destructors? Well, consider the C++ standard library. If you hand vector a type, and one of the destructors of the contained object throws, you get undefined behavior. This is true for everything in the standard library (unless explicitly noted).
Which means the standard library will not bother to check if your destructor is noexcept or not. It can, and almost certainly will, assume that no destructor emits an exception. And if one does... well, that's your fault.
So your code will likely not run slower just because you use noexcept(false) on a destructor. Your decision of whether to use noexcept(false) on the base class's destructor should not be affected by performance concerns.
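The kind of explicit detection described above looks like this; a minimal sketch (struct names made up):

#include <type_traits>

struct Quiet { ~Quiet() {} };                  // implicitly noexcept since C++11
struct Noisy { ~Noisy() noexcept(false) {} };

static_assert( std::is_nothrow_destructible<Quiet>::value, "");
static_assert(!std::is_nothrow_destructible<Noisy>::value, "");

Library code that never queries such a trait is unaffected by the declaration either way.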

Does a C++ destructor always or only sometimes call data member destructors?

I'm trying to validate my understanding of C++ destructors.
I've read many times that C++ supplies a default destructor if I don't write one myself. But does this mean that if I DO write a destructor, the compiler WON'T still provide the default cleanup of class fields?
My hunch is that the only sane behavior would be that all class fields are destroyed no matter what, whether I provide my own destructor or not. In which case the statement I've read so many times is actually a little misleading and could be better stated as:
"Whether or not you write your own destructor, the C++ compiler always
writes a default destructor-like sequence to deallocate the member
variables of your class. You may then specify additional
deallocations or other tasks as needed by defining your own destructor"
Is this correct?
When an object is cleaned up in C++, the language will
1. first call the destructor for the class, then
2. call the destructors for all the fields of the class.
(This assumes no inheritance; if there's inheritance, the base class is then destroyed by recursively following this same procedure). Consequently, the destructor code that you write is just custom cleanup code that you'd like to do in addition to the normal cleanup code for individual data members. You won't somehow "lose" the destructors for those objects being called as normal.
Hope this helps!
Yes -- any object contained within your object will be destroyed as part of destroying your object, even if/though your destructor does nothing to destroy them.
In fact, your destructor won't normally do anything to destroy objects contained within the object; what it typically does is destroy objects that are remotely owned via something in the object (e.g., a pointer to an object, a handle to a network or database connection, etc.)
The only exception to this that's common at all is if your object contains a buffer of some sort, and you've used placement new to construct something into that buffer. If you use placement new, you normally plan on directly invoking the dtor as well. [Note that "common" is probably overstating how often you see/use this--it's really quite uncommon, but the other possibilities are much rarer still.]
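A minimal sketch of that placement-new case (Widget is a made-up name):

#include <new>
#include <cstdio>

struct Widget {
    ~Widget() { std::puts("~Widget"); }
};

int main() {
    alignas(Widget) unsigned char buf[sizeof(Widget)];
    Widget* w = new (buf) Widget;  // construct into the raw buffer
    w->~Widget();                  // destroy explicitly; nothing else will
}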
Yes. Even if you DO write a destructor, the C++ compiler still generates the member-destruction sequence. Consider the following code:
class foo {
    int a;
};
Try to write a destructor that deallocates a, a member stored directly inside the object... it's impossible. Therefore, even if you write your own destructor, the C++ compiler must still generate the code that destroys such members.

C++: Can I use this safely in a destructor?

Today's question is: Can I use this in a destructor, and if yes, what are the restrictions I must obey? For example, I know I'm not supposed to do anything with base classes, since they are gone. But what other restrictions apply? And can I safely assume that this (as a pointer... i.e. a memory address... a number) is the same as in the constructor?
Can I use this in a destructor
Yes.
For example, I know I'm not supposed to do anything with base classes, since they are gone.
No, the base classes are still intact at this point. Members (and perhaps other base classes) of derived classes have already been destroyed, but members and base classes of this class remain until after the destructor has finished.
But what other restrictions apply?
Virtual functions are dispatched according to the class currently being destroyed, not the former most-derived class. So be careful calling them, and in particular don't call any functions that are pure virtual in this class.
Don't cast this to a derived type, since it is no longer a valid object of that type.
You can't delete this from the destructor, for obvious reasons.
And can I safely assume that the this (as a pointer... ie. memory address... a number) is the same as in the constructor?
Yes, an object's address remains the same from before its constructor runs until after its destructor runs.
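A small sketch showing the dispatch rule (names made up); during ~Base(), the Derived part no longer exists, so the Base override is called:

#include <cstdio>

struct Base {
    Base()          { whoami(); }  // prints "Base": Derived doesn't exist yet
    virtual ~Base() { whoami(); }  // prints "Base": Derived is already gone
    virtual void whoami() { std::puts("Base"); }
};

struct Derived : Base {
    void whoami() override { std::puts("Derived"); }
};

int main() {
    Derived d;
    d.whoami();                    // prints "Derived"
}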
As answered here, it's perfectly valid.
You should avoid calling virtual functions, though.
Base classes are not gone in the destructor, you can use them normally.
Derived class are gone, so in particular virtual calls will not reach derived classes.
this has the same value as in the constructor and everywhere else in the class.
The main restriction is that you must not allow any exception to leave the destructor, which means you have no way of indicating failure[*]. Generally you should only perform operations that are certain to succeed (such as freeing resources owned by the object): for anything that can fail, it must be OK to ignore the failure, and anything that can throw, you should wrap in a try/catch. Hopefully you have fully documented the possible exceptions thrown by all the functions of this, so you know whether or not the things you want to do with this can throw.
[*] well, you could build a mechanism for the destructor to record somewhere
what happened, but users of the class would have to actively check it. This is unlikely to result in a pleasant user experience.
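To illustrate the catch-everything advice, a minimal sketch (Connection and flush are made-up names):

#include <stdexcept>
#include <cstdio>

struct Connection {
    void flush() { throw std::runtime_error("write failed"); }  // may throw

    ~Connection() {
        try {
            flush();                   // anything that can throw...
        } catch (const std::exception& e) {
            std::fprintf(stderr, "ignored in ~Connection: %s\n", e.what());
        }                              // ...must not escape the destructor
    }
};

int main() {
    Connection c;
}   // prints the message instead of calling std::terminate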
Since objects are destroyed from the most derived class down to the base, the derived classes' destructors will already have executed. So you must make sure not to call methods overridden in derived classes on this. Apart from that, it's fine.
A destructor is a method which is called before your object is physically destroyed, so that you can deinitialize it properly. In order to do so, you must have access to its fields, and you can safely access them through the this keyword.
The order of destructors is the reverse of the constructors, so when your destructor runs, the destructors of base classes haven't run yet and you still have access to all their fields. On the other hand, destructors of derived classes have already run, so, for example, virtual calls won't reach the derived overrides, and calling a pure virtual method results in undefined behavior.
Additionally, keep in mind, that it's very dangerous to throw exceptions in destructors. If you do so, you risk terminating your application.
Yes you can use it normally. You have the object there.
I don't see any problem using this in the destructor. At this time, the object is still there. The destructor is for you to free existing resources before the object is destroyed.
But avoid calling virtual functions.

Why exactly is calling the destructor for the second time undefined behavior in C++?

As mentioned in this answer, simply calling the destructor a second time is already undefined behavior, per 12.4/14 (3.8).
For example:
class Class {
public:
    ~Class() {}
};

// somewhere in code:
{
    Class* object = new Class();
    object->~Class();
    delete object; // UB because at this point the destructor call is attempted again
}
In this example the class is designed in such a way that the destructor could be called multiple times: nothing like a double deletion can happen. The memory is still allocated at the point where delete is called, since the first destructor call doesn't call ::operator delete() to release it.
For example, in Visual C++ 9 the above code appears to work. Even the C++ definition of UB doesn't directly prohibit things qualified as UB from working. So for the code above to break, some implementation and/or platform specifics are required.
Why exactly would the above code break and under what conditions?
I think your question aims at the rationale behind the standard. Think about it the other way around:
Defining the behavior of calling a destructor twice creates work, possibly a lot of work.
Your example only shows that in some trivial cases it wouldn't be a problem to call the destructor twice. That's true but not very interesting.
You did not give a convincing use-case (and I doubt you can) when calling the destructor twice is in any way a good idea / makes code easier / makes the language more powerful / cleans up semantics / or anything else.
So why again should this not cause undefined behavior?
The reason for the formulation in the standard is most probably that everything else would be vastly more complicated: it’d have to define when exactly double-deleting is possible (or the other way round) – i.e. either with a trivial destructor or with a destructor whose side-effect can be discarded.
On the other hand, there’s no benefit for this behaviour. In practice, you cannot profit from it because you can’t know in general whether a class destructor fits the above criteria or not. No general-purpose code could rely on this. It would be very easy to introduce bugs that way. And finally, how does it help? It just makes it possible to write sloppy code that doesn’t track life-time of its objects – under-specified code, in other words. Why should the standard support this?
Will existing compilers/runtimes break your particular code? Probably not – unless they have special run-time checks to prevent illegal access (to prevent what looks like malicious code, or simply leak protection).
The object no longer exists after you call the destructor.
So if you call it again, you're calling a method on an object that doesn't exist.
Why would this ever be defined behavior? The compiler may choose to zero out the memory of an object which has been destructed, for debugging/security/some reason, or recycle its memory with another object as an optimisation, or whatever. The implementation can do as it pleases. Calling the destructor again is essentially calling a method on arbitrary raw memory - a Bad Idea (tm).
When you use the facilities of C++ to create and destroy your objects, you agree to use its object model, however it's implemented.
Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger might try harder to be introspective. That might even include specifically alerting you to double destruction.
Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I recall correctly. That could easily lead to invalid state at the end.
It's easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some job… in any case, the destructor isn't the right construct for every destruction-related task.
Destructors are not regular functions. Calling one doesn't call one function, it calls many functions. It's the magic of destructors. While you have provided a trivial destructor with the sole intent of making it hard to show how it might break, you have failed to demonstrate what the other functions that get called do. And neither does the standard. It's in those functions that things can potentially fall apart.
As a trivial example, lets say the compiler inserts code to track object lifetimes for debugging purposes. The constructor [which is also a magic function that does all sorts of things you didn't ask it to] stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go". After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you end up with an access violation.
You could probably also come up with examples that involve virtual tables, but your sample code didn't include any virtual functions so that would be cheating.
The following class will crash on Windows on my machine if you call the destructor twice:
class Class {
public:
    Class()
    {
        x = new int;
    }
    ~Class()
    {
        delete x;
        x = (int*)0xbaadf00d; // poison the pointer: the second delete crashes
    }
    int* x;
};
I can imagine an implementation when it will crash with trivial destructor. For instance, such implementation could remove destructed objects from physical memory and any access to them will lead to some hardware fault. Looks like Visual C++ is not one of such sort of implementations, but who knows.
Standard 12.4/14:
"Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8)."
I think this section refers to invoking the destructor via delete. In other words: The gist of this paragraph is that "deleting an object twice is undefined behavior". So that's why your code example works fine.
Nevertheless, this question is rather academic. Destructors are meant to be invoked via delete (apart from the exception of objects allocated via placement-new as sharptooth correctly observed). If you want to share code between a destructor and second function, simply extract the code to a separate function and call that from your destructor.
Since what you're really asking for is a plausible implementation in which your code would fail, suppose that your implementation provides a helpful debugging mode, in which it tracks all memory allocations and all calls to constructors and destructors. So after the explicit destructor call, it sets a flag to say that the object has been destructed. delete checks this flag and halts the program when it detects the evidence of a bug in your code.
To make your code "work" as you intended, this debugging implementation would have to special-case your do-nothing destructor, and skip setting that flag. That is, it would have to assume that you're deliberately destroying twice because (you think) the destructor does nothing, as opposed to assuming that you're accidentally destroying twice, but failed to spot the bug because the destructor happens to do nothing. Either you're careless or you're a rebel, and there's more mileage in debug implementations helping out people who are careless than there is in pandering to rebels ;-)
One important example of an implementation which could break:
A conforming C++ implementation can support Garbage Collection. This has been a longstanding design goal. A GC may assume that an object can be GC'ed immediately when its dtor is run. Thus each dtor call will update its internal GC bookkeeping. The second time the dtor is called for the same pointer, the GC data structures might very well become corrupted.
By definition, the destructor 'destroys' the object, and destroying an object twice makes no sense.
Your example works, but it's unlikely to work in general.
I guess it's been classified as undefined because most double deletes are dangerous and the standards committee didn't want to add an exception to the standard for the relatively few cases where they don't have to be.
As for where your code could break; you might find your code breaks in debug builds on some compilers; many compilers treat UB as 'do the thing that wouldn't impact on performance for well defined behaviour' in release mode and 'insert checks to detect bad behaviour' in debug builds.
Basically, as already pointed out, calling the destructor a second time will fail for any class destructor that performs work.
It's undefined behavior because the standard made it clear what a destructor is used for, and didn't decide what should happen if you use it incorrectly. Undefined behavior doesn't necessarily mean "crashy smashy," it just means the standard didn't define it so it's left up to the implementation.
While I'm not too fluent in C++, my gut tells me that the implementation is welcome to either treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it might break in some implementations but maybe it won't in others. Who knows, it's undefined (look out for demons flying out your nose if you try).
It is undefined because if it weren't, every implementation would have to track via some metadata whether an object is still alive or not. You would have to pay that cost for every single object, which goes against basic C++ design rules.
The reason is that your class might be, for example, a reference-counted smart pointer. So the destructor decrements the reference counter. Once that counter hits 0 the actual object should be cleaned up.
But if you call the destructor twice then the count will be messed up.
Same idea for other situations too. Maybe the destructor writes 0s to a piece of memory and then deallocates it (so you don't accidentally leave a user's password in memory). If you try to write to that memory again - after it has been deallocated - you will get an access violation.
It just makes sense for objects to be constructed once and destructed once.
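A minimal sketch of the reference-counting point (names made up; the count is a global just for illustration):

struct RefCounted {
    static int refs;
    RefCounted()  { ++refs; }
    ~RefCounted() { --refs; }
};
int RefCounted::refs = 0;

int main() {
    RefCounted* p = new RefCounted;  // refs == 1
    p->~RefCounted();                // refs == 0
    delete p;                        // destructor runs again: refs == -1 (and UB)
}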
The reason is that, in the absence of that rule, your programs would become less strict. Being more strict, even when it's not enforced at compile time, is good, because in return you gain more predictability of how a program will behave. This is especially important when the source code of the classes is not under your control.
A lot of concepts: RAII, smart pointers, and just generic allocation/freeing of memory rely on this rule. The number of times the destructor will be called (one) is essential for them. So the documentation for such things usually promises: "Use our classes according to C++ language rules, and they will work correctly!"
If there weren't such a rule, it would read: "Use our classes according to C++ language rules, and also don't call the destructor twice; then they will work correctly." A lot of specifications would sound that way.
The concept is just too important for the language in order to skip it in the standard document.
This is the reason. Not anything related to binary internals (which are described in Potatoswatter's answer).