The std::enable_shared_from_this class is a (template) mixin, used to enable creating shared pointers from a given object (or its address) which all share ownership of that object.
The thing is, if you have a class T which:
Has virtual methods
inherits from std::enable_shared_from_this<T> (and the inheritance must be public as detailed at the link above; otherwise the mixin is useless)
Gets compiled with GCC with -Wnon-virtual-dtor (perhaps also with clang, I'm not sure)
you get warnings about the non-virtual destructor of std::enable_shared_from_this.
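For concreteness, a minimal sketch of the situation (T and frob are placeholder names; older GCC versions emit the warning even though the base destructor is protected):

#include <memory>

// compile with e.g.: g++ -std=c++0x -Weffc++ -Wnon-virtual-dtor -c t.cpp
class T : public std::enable_shared_from_this<T> {
public:
    virtual void frob() {}  // any virtual member makes T polymorphic
    virtual ~T() {}
};
// warning: base class ‘class std::enable_shared_from_this<T>’
// has a non-virtual destructor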
My question is - where is the fault here? That is...
Should std::enable_shared_from_this have a virtual destructor? (I don't think so)
Should the non-virtual-destructor warning employ some criterion for when it is emitted (if at all enabled, that is)?
Should the destructor of std::enable_shared_from_this be made protected? (And will this even work?)
Should classes with this mixin not have virtual methods at all?
I'm confused.
There is no fault; your code is fine. It's merely a false-positive. The point of a warning is to detect pieces of code which, although valid C++, usually indicate problems. But "usually" does not mean "always", so most warnings have cases of false-positives, where it thinks there is misuse when there actually is not.
Should std::enable_shared_from_this have a virtual destructor?
No code is expected to delete a pointer to enable_shared_from_this. So no.
Should the non-virtual-destructor warning employ some criterion for when it is emitted (if at all enabled, that is)?
It's not reasonable for a compiler to know everything about what you're intending to do. It's just seeing something that's usually a problem, which you've decided to have it flag. In this case, it's not a problem.
Should the destructor of std::enable_shared_from_this be made protected?
No.
Should classes with this mixin not have virtual methods at all?
No.
Related
I am compiling a C++ library, and I have an issue with destructors and noexcept.
It is my understanding that, since C++11, the default value of noexcept(...) inside destructors has changed. In particular, now destructors default to noexcept (that is noexcept(true)).
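A quick way to observe the new default (a small sketch; S and T here are throwaway names):

#include <type_traits>

struct S {
    ~S() {}                  // no exception specification written
};

struct T {
    ~T() noexcept(false) {}  // explicitly opted back into throwing
};

// since C++11 an unannotated destructor is implicitly noexcept(true)
static_assert(std::is_nothrow_destructible<S>::value, "S::~S is noexcept");
static_assert(!std::is_nothrow_destructible<T>::value, "T::~T may throw");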
Now, I have a library for which I get many compiler warnings because the destructors of some classes can throw but are not marked noexcept(false). So I decided to change that and add the specification. However, these classes inherit from a common parent Base, so this forced me to add it also to the parent class destructor; otherwise I get the compiler error overriding ‘virtual Base::~Base() noexcept’.
Of course, I can add noexcept(false) also to the base class (actually, it is enough to add it there, as far as I understand), but there are lots of classes that inherit from Base, and they would all inherit the noexcept(false) attribute (as far as I understand). However, the base class destructor itself will never throw, and only few of the derived classes can actually throw. Therefore, it appears a waste to mark the base class destructor noexcept(false) only for a few classes.
So, I have two questions:
1) Is there any way around this? I cannot remove the throws in the few derived classes that throw, since they are important.
2) I am not sure how much the compiler optimization could be degraded due to the addition of noexcept(false) to the base class of (virtually) all the classes in the library. When (if ever) should this concern me?
Edit: on a side note, is there a way to change the default value for noexcept inside a project? Perhaps a compiler option?
Edit: since there has been a lot of feedback, I owe an edit, with a few remarks:
1) I did not write the library, and I certainly do not plan to rewrite the destructors of many classes. Having to add noexcept(false) to some base classes is already enough of a pain. And resorting to a different library is not an option (for reasons that go beyond programming).
2) When I said "I cannot remove the throws in the few derived classes that throw, since they are important", I meant that I believe the code should terminate in those cases, since something really bad happened and undefined behavior may follow anyway. The throw at least comes with a little explanation of what happened.
3) I understand that throwing in a destructor of a derived class is bad, since it may leak because the parent class's destructor is not called (is this correct?). What would be a clean way around this if one ought to terminate the program (to avoid obscure bugs later) and still let the user know what happened?
If the destructor is virtual, the derived destructors can't be noexcept(false) unless the base destructor is also noexcept(false).
Think about it: the point of virtual functions is that the derived class can be called even if the caller only knows about the base class. If something calls delete on a Base*, the call may be compiled without any exception-handling code if Base promises that its destructor won't throw exceptions. If the pointer actually points to a Derived instance whose destructor does throw, the exception would escape through code that has no provisions for handling it.
Try to change the derived destructors so they don't throw exceptions. If that's not possible, add noexcept(false) to the base destructor.
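A minimal sketch of that second option (the names here are placeholders, not the library's own):

#include <stdexcept>

struct Base {
    virtual ~Base() noexcept(false) {}  // relaxed so that overriders may throw
};

struct Derived : Base {
    bool in_bad_state = true;
    ~Derived() noexcept(false) {        // legal now: no stricter than Base
        if (in_bad_state)
            throw std::runtime_error("something really bad happened");
    }
};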
I am not sure how much the compiler optimization could be degraded due to the addition of noexcept(false) to the base class of (virtually) all the classes in the library. When (if ever) should this concern me?
You should not be concerned about compiler optimizations on the basis of noexcept declarations on destructors. Most of the optimizations around noexcept declarations come from code that detects that an operation is noexcept and does something more efficient based on that. That is, explicit metaprogramming.
Why doesn't that matter for destructors? Well, consider the C++ standard library. If you hand vector a type, and one of the destructors of the contained object throws, you get undefined behavior. This is true for everything in the standard library (unless explicitly noted).
Which means the standard library will not bother to check if your destructor is noexcept or not. It can, and almost certainly will, assume that no destructor emits an exception. And if one does... well, that's your fault.
So your code will likely not run slower just because you use noexcept(false) on a destructor. Your decision of whether to use noexcept(false) on the base class's destructor should not be affected by performance concerns.
As far as I know, any class that is designed to have subclasses should be declared with a virtual destructor, so class instances can be destroyed properly when accessed through pointers.
But why is it even possible to declare such a class with a non-virtual destructor? I believe the compiler could decide when to use virtual destructors. So, is it a C++ design oversight, or am I missing something?
Are there any specific reasons to use non-virtual destructors?
Yes, there are.
Mainly, it boils down to performance. A virtual function cannot be inlined; instead, you must first determine the correct function to invoke (which requires runtime information) and then invoke that function.
In performance sensitive code, the difference between no code and a "simple" function call can make a difference. Unlike many languages C++ does not assume that this difference is trivial.
But why it's even possible to declare such class with non-virtual destructor?
Because it is hard to know (for the compiler) if the class requires a virtual destructor or not.
A virtual destructor is required when:
you invoke delete on a pointer to a derived object via a base class
When the compiler sees the class definition:
it cannot know that you intend to derive from this class -- you can after all derive from classes without virtual methods
but even more daunting: it cannot know that you intend to invoke delete on this class
Many people assume that polymorphism requires newing the instance, which is just sheer lack of imagination:
#include <iostream>

class Base {
public:
    virtual void foo() const = 0;
protected:
    ~Base() {}
};

class Derived: public Base {
public:
    virtual void foo() const { std::cout << "Hello, World!\n"; }
};

void print(Base const& b) { b.foo(); }

int main() {
    Derived d;
    print(d);
}
In this case, there is no need to pay for a virtual destructor because there is no polymorphism involved at the destruction time.
In the end, it is a matter of philosophy. Where practical, C++ opts for performance and minimal service by default (the main exception being RTTI).
With regard to warnings: there are two warnings that can be leveraged to spot the issue:
-Wnon-virtual-dtor (gcc, Clang): warns whenever a class with virtual function does not declare a virtual destructor, unless the destructor in the base class is made protected. It is a pessimistic warning, but at least you do not miss anything.
-Wdelete-non-virtual-dtor (Clang, ported to gcc too): warns whenever delete is invoked on a pointer to a class that has virtual functions but no virtual destructor, unless the class is marked final. It has a 0% false positive rate, but warns "late" (and possibly several times).
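For illustration, a minimal case that triggers the second warning (throwaway names):

// compile with e.g.: g++ -Wdelete-non-virtual-dtor -c w.cpp
struct B {
    virtual void f();  // polymorphic...
    ~B() {}            // ...but no virtual destructor
};

void g(B* p) {
    delete p;  // warning fires here, at the delete site
}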
Why are destructors not virtual by default?
http://www2.research.att.com/~bs/bs_faq2.html#virtual-dtor
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
http://www.gotw.ca/publications/mill18.htm
See also: http://www.erata.net/programming/virtual-destructors/
EDIT: possible duplicate? When should you not use virtual destructors?
Your question is basically this, "Why doesn't the C++ compiler force your destructor to be virtual if the class has any virtual members?" The logic behind this question is that one should use virtual destructors with classes that they intend to derive from.
There are many reasons why the C++ compiler doesn't try to out-think the programmer.
C++ is designed on the principle of getting what you pay for. If you want something to be virtual, you must ask for it. Explicitly. Every function in a class that is virtual must be explicitly declared so (unless it's overriding a base class version).
If the destructor for a class with virtual members were automatically made virtual, how would you choose to make it non-virtual if that's what you desired? C++ doesn't have the ability to explicitly declare a method non-virtual. So how would you override this compiler-driven behavior?
Is there a particular valid use case for a virtual class with a non-virtual destructor? I don't know. Maybe there's a degenerate case somewhere. But if you needed it for some reason, you wouldn't be able to say it under your suggestion.
The question you should really ask yourself is why more compilers don't issue warnings when a class with virtual members doesn't have a virtual destructor. That's what warnings are for, after all.
A non-virtual destructor seems to make sense when a class is just non-virtual after all (Note 1).
However, I do not see any other good use for non-virtual destructors.
And I appreciate that question. Very interesting question!
EDIT:
Note 1:
In performance-critical cases, it may be favourable to use classes without any virtual function table and thus without any virtual destructors at all.
For example: think about a class Vector3 that contains just three floating-point values. If the application stores an array of them, then that array could be stored in a compact fashion.
If we require a virtual function table, AND if we'd even require storage on heap (as in Java & co.), then the array would just contain pointers to actual elements "SOMEWHERE" in memory.
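A sketch of the difference, assuming a typical implementation where polymorphism costs one vtable pointer per object:

struct Vector3 {                     // plain struct: no vtable pointer
    float x, y, z;
};

struct PolymorphicVector3 {          // hypothetical variant for comparison
    virtual ~PolymorphicVector3() {}
    float x, y, z;
};

// an array of the plain version is one compact block of floats
static_assert(sizeof(Vector3) == 3 * sizeof(float), "compact layout");
static_assert(sizeof(PolymorphicVector3) > sizeof(Vector3), "per-object overhead");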
EDIT 2:
We may even have an inheritance tree of classes without any virtual methods at all.
Why?
Because, even if having "virtual" methods may seem to be the common and preferable case, it IS NOT the only case that we humans can imagine.
As in many details of that language, C++ offers you a choice. You can choose one of the provided options, usually you will choose the one that anyone else chooses. But sometimes you do not want that option!
In our example, a class Vector3 could inherit from class Vector2, and still would not have the overhead of virtual function calls. Though, that example is not very good ;)
Another reason I haven't seen mentioned here are DLL boundaries: You want to use the same allocator to free the object that you used to allocate it.
If the methods live in a DLL but the client code instantiates the object with a direct new, then the client's allocator is used to obtain the memory for the object, yet the object is filled in with the vtable from the DLL, which points to a destructor that uses the allocator the DLL is linked against to free the object.
When subclassing classes from the DLL in the client, the problem goes away as the virtual destructor from the DLL is not used.
C++03 5.3.5.3:
In the first alternative (delete object), if the static type of the operand is different from its dynamic type, the static type shall be a base class of the operand's dynamic type and the static type shall have a virtual destructor or the behavior is undefined.
This is the theory. The question, however, is a practical one. What if the derived class adds no data members?
struct Base {
    // some members
    // no virtual functions, no virtual destructor
};

struct Derived : Base {
    // no more data members
    // possibly some more nonvirtual member functions
};

int main() {
    Base* p = new Derived;
    delete p;  // UB according to the quote above
}
The question: is there any existing implementation on which this would really be dangerous?
If so, could you please describe how the internals are implemented in that implementation which makes this code crash/leak or whatever? I beg you to believe, I swear that I have no intentions to rely on this behavior :)
One example is if you provide a custom operator new in struct Derived. Obviously, calling the wrong operator delete will likely produce devastating results.
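A minimal sketch of that failure mode (the allocation functions here are illustrative):

#include <cstdlib>
#include <new>

struct Base {
    // no virtual functions, no virtual destructor
};

struct Derived : Base {
    static void* operator new(std::size_t n) {
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    static void operator delete(void* p) { std::free(p); }
};

int main() {
    Base* p = new Derived;  // memory comes from Derived::operator new
    delete p;               // UB: lookup uses the static type Base, so the
                            // global operator delete runs (malloc/free mismatch)
}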
I know of no implementation on which the above would be dangerous, and I think it unlikely that there ever will be such an implementation.
Here's why:
"undefined behaviour" is a catch-all phrase meaning (as everyone knows), anything could happen. The code could eat your lunch, or do nothing at all.
However, compiler writers are sane people, and there's a difference between undefined behaviour at compile time and undefined behaviour at run time. If I were writing a compiler for an implementation where the code snippet above was dangerous, it would be easy to catch and prevent at compile time. I could say it's a compilation error (or warning, maybe): Error 666: Cannot derive from class with non-virtual destructor.
I think I'm allowed to do that, because the compiler's behaviour in this case is not defined by the standard.
I can't answer for specific compilers, you'd have to ask the compiler writers. Even if a compiler works now, it might not do so in the next version so I would not rely on it.
Do you need this behaviour?
Let me guess that
You want to be able to have a base class pointer without seeing the derived class and
Not have a v-table in Base and
Be able to clean up in the base class pointer.
If those are your requirements it is possible to do, with boost::shared_ptr or your own adaptation.
At the point you pass the pointer you pass in a boost::shared_ptr with an actual "Derived" underneath. When it is deleted it will use the destructor that was created when the pointer was created which uses the correct delete. You should probably give Base a protected destructor though to be safe.
Note that there still is a v-table but it is in the shared pointer deleter base not in the class itself.
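For illustration, the same arrangement with std::shared_ptr, the standard counterpart of boost::shared_ptr:

#include <memory>

class Base {
protected:
    ~Base() {}  // protected: blocks direct delete via Base*
};

class Derived : public Base {
public:
    ~Derived() { /* real cleanup happens here */ }
};

int main() {
    // the deleter is recorded when the pointer is created, while the
    // dynamic type (Derived) is still statically known, so ~Derived()
    // runs despite the non-virtual base destructor
    std::shared_ptr<Base> p = std::make_shared<Derived>();
}  // ~Derived() is correctly invoked here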
To create your own adaptation, if you use boost::function and boost::bind you don't need a v-table at all. You just get your boost::bind to wrap the underlying Derived* and the function calls delete on it.
In your particular case, where you do not have any data members declared in the derived class, and if you do not have any custom new/delete operators (as mentioned by sharptooth), you may not have any problems. But can you guarantee that no user will ever derive from your class? If you do not make Base's destructor virtual, there is no way for any of the classes derived from Derived to have their destructors called when objects of those classes are used via a Base pointer.
Also, there is a general notion that if you have virtual functions in your base class, the destructor should be made virtual. So better not surprise anybody :)
I totally agree with 'Roddy'.
Unless you're writing code for a perverted compiler designed for a non-existent virtual machine just to prove that so-called undefined behavior can bite, there's no problem.
The point 'sharptooth' makes about custom new/delete operators is inapplicable here, because a virtual d'tor won't solve the problem he/she describes in any way.
However, it's a good point nonetheless. It means that the model where you provide a virtual d'tor and thereby enable polymorphic object creation/deletion is defective by design.
A more correct design is to equip such objects with a virtual function that does two things at once: call the (correct) destructor, and also free the memory the way it should be freed. In simple words: destroy the object by the appropriate means, which are known to the object itself.
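A minimal sketch of that design (Disposable and Widget are illustrative names):

class Disposable {
public:
    virtual void destroy() = 0;  // runs the right destructor and frees the
                                 // memory by the object's own means
protected:
    ~Disposable() {}             // plain delete via Disposable* won't compile
};

class Widget : public Disposable {
public:
    void destroy() override {
        delete this;             // static type Widget: the correct destructor
                                 // and the matching operator delete are used
    }
private:
    ~Widget() {}                 // reachable only through destroy()
};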
I have a pet project with which I experiment with new features of C++11. While I have experience with C, I'm fairly new to C++. To train myself into best practices, (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1):
-std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors
This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (error, in my case) when compiling my code (probably triggered by -Weffc++):
base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor
So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because:
A destructor of a class that is intended for subclassing should always be virtual, IMHO.
The destructor is empty, why have it at all?
I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this.
But I'm looking for ways to deal with this, so my question is really, is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?
A destructor of a class that is intended for subclassing should always be virtual, IMHO.
A virtual destructor in a base class is only needed if an instance of the derived class is going to be deleted via a pointer to the base class.
Having any virtual function in a class, including a destructor, requires overhead. Boost (and the TR1 and C++11 Standard Library) doesn't want to force you to have that overhead just because you need to be able to obtain a shared_ptr from the this pointer.
The destructor is empty, why have it at all?
If you don't have a user-defined destructor, the compiler provides one for you, so it doesn't really matter.
I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this.
Exactly.
As for the compiler warning, I would ignore the warning or suppress it (with a comment in the code explaining why you are doing so). Occasionally, especially at "pedantic" warning levels, compiler warnings are unhelpful, and I'd say this is one of those cases.
I agree with James's description, but would add:
A virtual destructor is only required if you want to destroy an instance of that class virtually. This is not always the case; however, if a base class is not intended to be destroyed virtually, it should protect against it.
So I would change this:
A destructor of a class that is intended for subclassing should always be virtual, IMHO.
to this:
A destructor of a class that is intended for subclassing should always be virtual or protected.
is there a proper way to deal with this?
Don't use -Weffc++ all the time. It's useful to turn it on sometimes to check your code, but not really to use it permanently. It gives false positives and is not really maintained these days. Use it now and then, but be aware you might have to ignore some warnings. Ideally, just memorise all the advice in the Meyers books and then you don't need it anyway ;-)
am I correct in thinking that this destructor is bogus, or is there a real purpose to it?
No, it's not bogus and it has a real purpose. If it weren't defined, it would be implicitly declared as public; to prevent that, it is explicitly declared as protected.
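A small sketch showing the effect of that protected destructor (Package as in the warning above):

#include <memory>

struct Package : std::enable_shared_from_this<Package> {};

int main() {
    auto p = std::make_shared<Package>();  // fine: Package's own destructor is public
    std::enable_shared_from_this<Package>* base = p.get();
    // delete base;  // error: ~enable_shared_from_this is protected
}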
I have defined an interface in C++, i.e. a class containing only pure virtual functions.
I want to explicitly forbid users of the interface to delete the object through a pointer to the interface, so I declared a protected and non-virtual destructor for the interface, something like:
class ITest {
public:
    virtual void doSomething() = 0;
protected:
    ~ITest() {}
};

void someFunction(ITest* test) {
    test->doSomething();  // ok
    // deleting the object is not allowed
    // delete test;
}
The GNU compiler gives me a warning saying:
class 'ITest' has virtual functions but non-virtual destructor
Once the destructor is protected, what is the difference in having it virtual or non-virtual?
Do you think this warning can be safely ignored or silenced?
It's more or less a bug in the compiler. Note that in more recent versions of the compiler this warning does not get thrown (at least in 4.3 it doesn't). Having the destructor be protected and non-virtual is completely legitimate in your case.
See here for an excellent article by Herb Sutter on the subject. From the article:
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
Some of the comments on this answer relate to an earlier answer I gave, which was wrong.
A protected destructor means that it can only be called from a derived class, not through delete. That means that an ITest* cannot be directly deleted; only a derived class can be. The derived class may well want a virtual destructor. There is nothing wrong with your code at all.
However, since you cannot locally disable a warning in GCC, and you already have a vtable, you could consider just making the destructor virtual anyway. It will cost you 4 bytes for the program (not per class instance), maximum. Since you might have given your derived class a virtual dtor, you may find that it costs you nothing.
If you insist on doing this, go ahead and pass -Wno-non-virtual-dtor to GCC. This warning doesn't seem to be turned on by default, so you must have enabled it with -Wall or -Weffc++. However, I think it's a useful warning, because in most situations this would be a bug.
It's an interface class, so it's reasonable you should not delete objects implementing that interface via that interface. A common case of that is an interface for objects created by a factory which should be returned to the factory. (Having objects contain a pointer to their factory might be quite expensive).
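For illustration, a minimal sketch of that factory arrangement (TestFactory and Impl are made-up names):

#include <iostream>

class ITest {
public:
    virtual void doSomething() = 0;
protected:
    ~ITest() {}  // deletion is reserved for whoever knows the concrete type
};

class TestFactory {
    class Impl : public ITest {  // concrete type known only to the factory
    public:
        void doSomething() override { std::cout << "working\n"; }
    };
public:
    ITest* create()        { return new Impl; }
    void destroy(ITest* t) { delete static_cast<Impl*>(t); }  // exact type
};

int main() {
    TestFactory f;
    ITest* t = f.create();
    t->doSomething();
    f.destroy(t);  // returned to the factory, as the interface intends
}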
I'd agree with the observation that GCC is whining. Instead, it should simply warn when you delete an ITest*. That's where the real danger lies.
My personal view is that you're doing the correct thing and the compiler is broken. I'd disable the warning (locally in the file which defines the interface) if possible.
I find that I use this pattern (small 'p') quite a lot. In fact I find that it's more common for my interfaces to have protected dtors than it is for them to have public ones. However, I don't think it's actually that common an idiom (it doesn't get spoken about much), and I guess back when the warning was added to GCC it was appropriate to try to enforce the older 'dtor must be virtual if you have virtual functions' rule. Personally, I updated that rule ages ago to 'dtor must be virtual if you have virtual functions and wish users to be able to delete instances of the interface through the interface; otherwise the dtor should be protected and non-virtual' ;)
If the destructor is virtual, deleting through a base-class pointer makes sure that the derived class's destructor is also called during cleanup; otherwise leaks can result from that code. So you should make sure that the program has no such warnings (preferably no warnings at all).
If you had code in one of ITest's methods that tried to delete itself (a bad idea, but legal), the derived class's destructor wouldn't be called. You should still make your destructor virtual, even if you never intend to delete a derived instance via a base-class pointer.