The examples I've found of enable_shared_from_this show it used via inheritance. For example:
#include <iostream>
#include <memory>
using namespace std;

struct Good : enable_shared_from_this<Good> {
    shared_ptr<Good> getptr() {
        return shared_from_this();
    }
};

int main() {
    // Good: the two shared_ptr's share the same object
    shared_ptr<Good> gp1(new Good);
    shared_ptr<Good> gp2 = gp1->getptr();
    cout << "gp2.use_count() = " << gp2.use_count() << endl;
}
I've been warned a lot in my day about the dangers of inheriting from the standard library. This code certainly seems to share those dangers, for example:
struct A : enable_shared_from_this<A> {};
struct B : enable_shared_from_this<B> {};
If I want to create struct C : A, B {}; the sticking point would obviously be C::shared_from_this(). Obviously we can work around this, but there is some inherent complexity.
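For example, here is a minimal sketch of that sticking point (using std::enable_shared_from_this; the comment about the run-time behaviour assumes C++17 semantics):

#include <memory>

struct A : std::enable_shared_from_this<A> {};
struct B : std::enable_shared_from_this<B> {};
struct C : A, B {};

int main() {
    auto c = std::make_shared<C>();
    // auto p = c->shared_from_this();      // error: ambiguous -- inherited from both A and B
    try {
        auto pa = c->A::shared_from_this(); // compiles, but yields shared_ptr<A>, not shared_ptr<C>
    } catch (const std::bad_weak_ptr&) {
        // with two enable_shared_from_this bases there is no single unambiguous base,
        // so the internal weak reference is never set up and (since C++17) this throws
    }
}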
So my question is, is there a way to use enable_shared_from_this as a has-a relationship instead of an is-a relationship?
is there a way to use enable_shared_from_this as a has-a relationship instead of an is-a relationship?
No.
enable_shared_from_this is supposed to be used as a base class, so mindlessly applying a guideline meant for other situations doesn't work in this case.
Even if there were a good reason to want to do this (and there isn't), it wouldn't work. The magic that causes the enable_shared_from_this base to share ownership with the shared_ptr that owns the derived object works by checking for inheritance.
enable_shared_from_this doesn't model an IS-A relationship anyway, because it has no interface defined in terms of virtual functions. IS-A means a derived type that extends a base interface, but that isn't the case here. A Good IS-NOT-A enable_shared_from_this<Good>.
i.e. using inheritance does not always imply an IS-A relationship.
enable_shared_from_this doesn't have a virtual destructor
A virtual destructor is irrelevant unless you plan to delete the object through a pointer to the enable_shared_from_this base class, which would be insane. There is no reason to ever pass around a Good as a pointer to the enable_shared_from_this<Good> base class, and still less reason to ever use delete on that base pointer (usually the type would be stored in a shared_ptr<Good> as soon as it's created, so you would never use delete at all).
enable_shared_from_this is a mixin type, not an abstract base. It provides the shared_from_this member (and, since C++17, weak_from_this), and that's all. You are not supposed to use it as an abstract base or an interface type, nor use the base type to access polymorphic behaviour of the derived type. The fact it has no virtual functions at all, not just no virtual destructor, should tell you that.
Furthermore, as n.m. commented above, the destructor is protected, so you can't delete it via the base class even if you tried (a protected destructor is the idiomatic way of preventing that type of misuse of mixin classes intended to be non-polymorphic base classes).
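A small sketch of that point (my own example, assuming C++11 and <memory>): you can form a pointer to the enable_shared_from_this base, but the protected destructor makes deleting through it ill-formed.

#include <memory>

struct Good : std::enable_shared_from_this<Good> {};

int main() {
    auto gp = std::make_shared<Good>();
    std::enable_shared_from_this<Good>* base = gp.get(); // forming the base pointer is fine...
    // delete base;  // ...but this would not compile: the destructor is protected
    (void)base;
}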
The enable_shared_from_this destructor destroys *this, meaning it must always be the last destructor called
Huh? Not sure what you mean, but it isn't responsible for destroying anything except itself and its own data member.
Inheritance from two classes that both inherit from enable_shared_from_this can become a bit of a sticking point
It should work OK (although you might not get the magic ownership sharing if there isn't a single unambiguous base class that is a specialization of enable_shared_from_this). The GCC standard library had a bug (now fixed) where such code failed to compile, rather than just failing to share ownership, but that's not a problem with enable_shared_from_this itself.
Related
So the basic rule that I find everywhere is that to inherit from a base class, the base class must have a virtual destructor so that the following works:
Base *base = new Inherited();
delete base;
However I am certain I have seen at least one other possibility that allows safe inheritance. However I can't find it anywhere and I feel like I am going mad trying to find it. I thought the other option might have been that the base class had a trivial destructor, but according to Non-virtual trivial destructor + Inheritance, this isn't the case. Even though there wouldn't be a memory leak for this case, it appears this is still undefined behaviour.
Does anyone else know what the other case is or can you definitively tell me that I dreamt it?
I guess a good example is one involving shared_ptrs, since it shows both sides of the issue.
Suppose you have a class B with a trivial non-virtual destructor and a derived class D with its own complex one.
Let's define the following function somewhere:
shared_ptr<B> factory () {
// some complex rules at the very end of which you decide to instantiate class D
return make_shared<D>();
}
In that case you still get all the interesting polymorphic behaviour, but the pointer you are handed has inherited its deleter from the shared_ptr<D> it was constructed from.
Thanks to type erasure, that deleter is buried inside the shared_ptr, so the destructor that is actually invoked is the one of D. Everything works fine from that point of view, even though the destructor of B is not virtual.
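Here is a short sketch of that type erasure in action (class names follow the answer; the printed output is what you'd typically see): the deleter remembered by the shared_ptr is chosen when the pointer is created, so D's destructor runs even though B's is not virtual.

#include <cstdio>
#include <memory>

struct B { ~B() { std::puts("~B"); } };      // non-virtual destructor
struct D : B { ~D() { std::puts("~D"); } };

std::shared_ptr<B> factory() {
    return std::make_shared<D>();            // the deleter records the type D here
}

int main() {
    std::shared_ptr<B> p = factory();
}   // prints "~D" then "~B": the stored deleter destroys a D, despite the B-typed pointer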
Instead, if you define the above factory as:
B* factory () {
return new D{};
}
The destructor that gets called (well, supposing that someone will delete it) will be the one of B, which is not what you want.
That said, declaring the destructor virtual in a class that is meant to be inherited from is good practice; otherwise mark the class final and stop the hierarchy there.
There are also a lot of other examples; this is not the only case where it works, but it can help to explain why it works.
Perhaps when the inheritance is private. In such a case, the user can't convert a Derived* to a Base*, so there is no chance of deleting the derived object through a base class pointer. Of course, you still have to make sure that you don't do this anywhere within your implementation of Derived.
My take on this is pragmatic rather than anything to do with what is or isn't allowed by the standards.
So, pragmatically, if a class doesn't have a virtual destructor - even an empty one - then my default assumption is that it hasn't been designed to be used as a base class. This may have more implications than just destruction, and more often than not it just opens a can of worms for you to fall into later.
If you want or need to use functionality from a class without a virtual destructor, it would be safer to use composition rather than inheritance. In fact, that's the preferred route anyway.
The other case I've seen mentioned is making the base-class destructor protected. That way, you prevent deletion through a base class.
This is actually item 50 in the book C++ Coding Standards by Herb Sutter et al: "Make base class destructors public and virtual or protected and non-virtual", so it is quite likely that you have heard of it before.
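A small sketch of that idiom (the type names are mine): the protected, non-virtual destructor still lets Derived destroy its Base subobject, but stops outside code from deleting through a Base pointer.

struct Base {
protected:
    ~Base() {}              // non-virtual, but unreachable through a Base*
};

struct Derived : Base {
public:
    ~Derived() {}           // fine: a derived class may invoke the protected base destructor
};

int main() {
    Base* p = new Derived;
    // delete p;            // error: ~Base is protected within this context
    delete static_cast<Derived*>(p);  // ok: deletion through the most-derived type
}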
You can always inherit from a class. There are rules to obey though, e.g. without a virtual destructor you can't invoke the destructor polymorphically. In order to avoid this, you could use private derivation for base classes that were not intended as base classes, e.g. the containers from the STL.
As others have mentioned, as long as you delete the object through its own type - in other words you do
Inherited *ip = new Inherited();
Base *p = ip;
...
delete ip;
you'll be fine. There are several different ways to do that, but you have to be quite careful to ensure that is the case.
However, having an empty destructor in the base class [with your inherited type deriving from it directly] only works as long as it is TRULY empty, and not just that you have an { } for the body of the destructor. [See Edit2 below!]
For example, if you have a vector or std::string, or whatever other class that needs destruction, in your base class, then you will leak the content of that class. In other words, you need to make 100% sure that the destructor of the base class is empty. I don't know of a programmatic way to determine that (beyond analysing the generated code, that is).
Edit:
Also beware of "changes in the future" - for example, adding a string or vector inside Base or changing the base class from Base to SomethingInheritedFromBase that has a destructor "with content" will ruin the "empty destructor" concept.
Edit2:
It should be noted that for the "destructor is empty" case, you have to have truly empty destructors in all derived classes too. There are classes that have no members that need destruction (interface classes typically have no data members, for example, so would not need destruction in themselves), so you could construct such a case, but again, we have to be VERY careful to avoid a derived class adding members that require real destruction.
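As a sketch of the Edit2 scenario (formally this is already undefined behaviour; the comments describe what typically happens on common implementations):

#include <string>

struct Base {
    // empty, non-virtual destructor
};

struct Derived : Base {
    std::string name = std::string(1000, 'x');  // a member that needs real destruction
};

int main() {
    Base* p = new Derived;
    delete p;   // UB; in practice only ~Base runs, ~Derived never does, so the string's buffer leaks
}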
Classes with non-virtual destructors are a source for bugs if they are used as a base class (if a pointer or reference to the base class is used to refer to an instance of a child class).
With the C++11 addition of a final class, I am wondering if it makes sense to set down the following rule:
Every class must fulfil one of these two properties:
be marked final (if it is not (yet) intended to be inherited from)
have a virtual destructor (if it is (or is intended to) be inherited from)
Probably there are cases were neither of these two options makes sense, but I guess they could be treated as exceptions that should be carefully documented.
Probably the most common actual issue attributed to the lack of a virtual destructor is deletion of an object through a pointer to a base class:
struct Base { ~Base(); };
struct Derived : Base { ~Derived(); };
Base* b = new Derived();
delete b; // Undefined Behaviour
A virtual destructor also affects the selection of a deallocation function. The existence of a vtable also influences typeid and dynamic_cast.
If your class isn't used in those ways, there's no need for a virtual destructor. Note that this usage is not a property of a type, neither of type Base nor of type Derived. Inheritance makes such an error possible while only using an implicit conversion. (With explicit conversions such as reinterpret_cast, similar problems are possible without inheritance.)
By using smart pointers, you can prevent this particular problem in many cases: unique_ptr-like types can restrict conversions to a base class for base classes with a virtual destructor (*). shared_ptr-like types can store a deleter suitable for deleting a shared_ptr<A> that points to a B even without virtual destructors.
(*) Although the current specification of std::unique_ptr doesn't contain such a check for the converting constructor template, it was constrained in an earlier draft, see LWG 854. Proposal N3974 introduces the checked_delete deleter, which also requires a virtual dtor for derived-to-base conversions. Basically, the idea is that you prevent conversions such as:
unique_checked_ptr<Base> p(new Derived); // error
unique_checked_ptr<Derived> d(new Derived); // fine
unique_checked_ptr<Base> b( std::move(d) ); // error
As N3974 suggests, this is a simple library extension; you can write your own version of checked_delete and combine it with std::unique_ptr.
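For illustration, here is a minimal sketch of such a home-grown checked deleter for std::unique_ptr (my own names and simplifications, not the N3974 wording; note it only catches the unique_ptr-to-unique_ptr conversion from the snippet above, not the raw-pointer construction on its first line):

#include <memory>
#include <type_traits>

template <class T>
struct checked_delete {
    checked_delete() = default;

    // Allow checked_delete<Derived> -> checked_delete<Base> only if deleting a
    // Derived through a Base* is well-defined, i.e. Base has a virtual destructor.
    template <class U,
              class = typename std::enable_if<
                  std::is_convertible<U*, T*>::value &&
                  std::has_virtual_destructor<T>::value>::type>
    checked_delete(const checked_delete<U>&) noexcept {}

    void operator()(T* p) const { delete p; }
};

template <class T>
using unique_checked_ptr = std::unique_ptr<T, checked_delete<T>>;

struct Base {};               // no virtual destructor
struct Derived : Base {};

int main() {
    unique_checked_ptr<Derived> d(new Derived);    // fine
    // unique_checked_ptr<Base> b(std::move(d));   // error: the deleter conversion is rejected,
                                                   // because Base has no virtual destructor
}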
Both suggestions in the OP can have performance drawbacks:
Mark a class as final
This prevents the Empty Base Optimization. If you have an empty class, its size must still be >= 1 byte. As a data member, it therefore occupies space. However, as a base class, it is allowed not to occupy a distinct region of memory of objects of the derived type. This is used e.g. to store allocators in StdLib containers.
C++20 has mitigated this with the introduction of [[no_unique_address]].
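To illustrate the EBO point with a sketch (type names made up; exact sizes are implementation-defined, the numbers in the comment are just typical):

#include <cstdio>

struct Empty {};                       // e.g. a stateless allocator

struct HasMember { Empty e; int i; };  // Empty as a data member: must occupy space
struct HasBase : Empty { int i; };     // Empty as a base class: may occupy no extra space

int main() {
    std::printf("%zu %zu %zu\n", sizeof(Empty), sizeof(HasMember), sizeof(HasBase));
    // typically prints "1 8 4": the member costs space (plus padding), the base does not
}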
Have a virtual destructor
If the class doesn't already have a vtable, this introduces a vtable per class plus a vptr per object (if the compiler cannot eliminate it entirely). Destruction of objects can become more expensive, which can have an impact e.g. because it's no longer trivially destructible. Additionally, this prevents certain operations and restricts what can be done with that type: The lifetime of an object and its properties are linked to certain properties of the type such as trivially destructible.
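A small sketch of those costs (my own example; sizes are implementation-defined, but the trait results are guaranteed, since a virtual destructor is never trivial):

#include <type_traits>

struct Plain       { int i; ~Plain() = default; };
struct WithVirtual { int i; virtual ~WithVirtual() = default; };

static_assert( std::is_trivially_destructible<Plain>::value,       "no vptr, trivial destruction");
static_assert(!std::is_trivially_destructible<WithVirtual>::value, "a virtual dtor is never trivial");
// sizeof(WithVirtual) > sizeof(Plain) on typical implementations, because of the added vptr.

int main() {}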
final prevents extension of a class via inheritance. While inheritance is typically one of the worst ways to extend an existing type (compared to free functions and aggregation), there are cases where inheritance is the most adequate solution. final restricts what can be done with the type; there should be a very compelling and fundamental reason why I should do that. One typically cannot imagine all the ways others will want to use your type.
T.C. points out an example from the StdLib: deriving from std::true_type and, similarly, deriving from std::integral_constant (e.g. the placeholders). In metaprogramming, we're typically not concerned with polymorphism and dynamic storage duration. Public inheritance is often just the simplest way to implement metafunctions. I do not know of any case where objects of metafunction type are allocated dynamically. If such objects are created at all, it's typically for tag dispatching, where you'd use temporaries.
As an alternative, I'd suggest using a static analyser tool. Whenever you derive publicly from a class without a virtual destructor, you could raise a warning of some sort. Note that there are various cases where you'd still want to derive publicly from some base class without a virtual destructor; e.g. DRY or simply separation of concerns. In those cases, the static analyser can typically be adjusted via comments or pragmas to ignore this occurrence of deriving from a class w/o virtual dtor. Of course, there need to be exceptions for external libraries such as the C++ Standard Library.
Even better, but more complicated is analysing when an object of class A w/o virtual dtor is deleted, where class B inherits from class A (the actual source of UB). This check is probably not reliable, though: The deletion can happen in a Translation Unit different to the TU where B is defined (to derive from A). They can even be in separate libraries.
The question that I usually ask myself, is whether an instance of the class may be deleted via its interface. If this is the case, I make it public and virtual. If this is not the case, I make it protected. A class only needs a virtual destructor if the destructor will be invoked through its interface polymorphically.
Well, to be strictly clear, it's only if the pointer is deleted or the object is destructed (through the base class pointer only) that the UB is invoked.
There could be some exceptions for cases where the API user cannot delete the object, but other than that, it's generally a wise rule to follow.
Why does the C++ standard allow object slicing?
Please don't explain the C++ object slicing concept to me, as I already know it.
I am just wondering what the intention behind this C++ feature (object slicing) design was.
To give novices more bugs?
Wouldn't it be more type safe for C++ to prevent object slicing?
Below is just a standard and basic slice example:
class Base{
public:
    virtual void message()
    {
        MSG("Base ");
    }
private:
    int m_base;
};

class Derived : public Base{
public:
    void message()
    {
        MSG("Derived ");
    }
private:
    int m_derive;
};

int main (void)
{
    Derived dObj;
    //dObj gets the WELL KNOWN c++ slicing below
    //evilDerivedObj is just a Base object that cannot access m_derive
    Base evilDerivedObj = dObj; //evilDerivedObj is of type Base
    evilDerivedObj.message();   //prints "Base" here of course, just as the c++ standard says
}
Thanks in advance.
=================================================================================
After reading all the answers and comments, I think I should have expressed my question better in the first place, but here it comes:
When there is an is-a relationship (public inheritance), as opposed to private/protected inheritance, you can do the following:
class Base{
public:
    virtual void foo(){MSG("Base::foo");}
};

class Derived : public Base{
public:
    virtual void foo(){MSG("Derived::foo");}
};

int main (void)
{
    Base b;
    Derived d;
    b = d;                     //1
    Base * pB = new Derived(); //2
    Base& rB = d;              //3
    b.foo();   //Base::foo
    pB->foo(); //Derived::foo
    rB.foo();  //Derived::foo
}
It's well known that only 2 and 3 work polymorphically, while 1 is the infamous object slicing, which produces nothing but a bug!
Note that 1, 2 and 3 all NEED an is-a relationship to work.
If you use private/protected inheritance, you will get a compile error for all of them:
'type cast' : conversion from 'Derived *' to 'const Base &' exists, but is inaccessible
'type cast' : conversion from 'Derived *' to 'Base *' exists, but is inaccessible
'type cast' : conversion from 'Derived *' to 'Base &' exists, but is inaccessible
So my question (my original intention) was to ask: would it be better if the C++ standard made 1 a compile error while still allowing 2 and 3?
I hope I have expressed my question better this time.
Thanks
I think you're looking at it backwards.
Nobody sat down and said "OK, we need slicing in this language." Slicing in itself isn't a language feature; it's the name of what happens when you meant to use objects polymorphically but instead went wrong and copied them. You might say that it's the name of a programmer bug.
That objects can be copied "statically" is a fundamental feature of C++ and C, and you wouldn't be able to do much otherwise.
Edit: [by Jerry Coffin (hopefully Tomalak will forgive my hijacking his answer a bit)]. Most of what I'm adding is along the same lines, but a bit more directly from the source. The one exception (as you'll see) is that, strangely enough, somebody did actually say "we need slicing in this language." Bjarne talks a bit about slicing in The Design and Evolution of C++ (§11.4.4). Among other things he says:
I'm leery of slicing from a practical point of view, but I don't see any way of preventing it except by adding a very special rule. Also, at the time, I had an independent request for exactly these "slicing semantics" from Ravi Sethi who wanted it from a theoretical and pedagogical point of view: Unless you can assign an object of a derived class to an object of its public base class, then that would be the only point in C++ where a derived object can't be used in place of a base object.
I'd note that Ravi Sethi is one of the authors of the dragon book (among many other things), so regardless of whether you agree with him, I think it's easy to understand where his opinion about language design would carry a fair amount of weight.
It's allowed because of is-a relationship.
When you publicly1 derive Derived from Base, you're announcing to the compiler that Derived is a Base. Hence it should be allowed to do this:
Base base = derived;
and then use base as it is. That is:
base.message(); //calls Base::message();
Read this:
Is-A Relationship
1. If you privately derive Derived from Base, then it is a has-a relationship; that is a sort of composition. Read this and this.
However, in your case, if you don't want slicing, then you can do this:
Base & base = derived;
base.message(); //calls Derived::message();
From your comment :
Wouldn't it better for C++ to prevent object slicing while only allow the pointer/reference to work for is-a relationshp ???
No. Pointers and references don't maintain an is-a relationship if the base has virtual function(s).
Base *pBase = &derived;
pBase->message(); //doesn't call Base::message(),
                  //which indicates that pBase doesn't behave like a pointer to an object of Base type.
When you want an object of one type to behave like an object of its base type, that is called an is-a relationship. If you use a pointer or reference of base type, then it will not call Base::message(), which indicates that the pointer or reference doesn't behave like a pointer or reference to an object of the base type.
How would you prevent object slicing within the language? If a function is expecting 16 bytes on the stack (as a parameter for example) and you pass a bigger object that's say 24 bytes how on Earth would the callee know what to do? C++ isn't like Java where everything is a reference under the hood. The short answer is that there's just no way to avoid object slicing assuming that C++, like C, allows value and reference semantics for objects.
EDIT: Sometimes you don't care if the object slices and prohibiting it outright would possibly prevent a useful feature.
Object slicing is a natural consequence of inheritance and substitutability, it is not limited to C++, and it was not introduced deliberately. Methods accepting a Base only see the variables present in Base. So do copy constructors and assignment operators. However they propagate the problem by making copies of this sliced object that may or may not be valid.
This most often arises when you treat polymorphic objects as value types, involving the copy constructor or the assignment operator in the process, which are often compiler generated. Always use references or pointers (or pointer wrappers) when you work with polymorphic objects, never mix value semantics in the game. If you want copies of polymorphic objects, use a dynamic clone method instead.
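As a sketch of that clone approach (my own minimal example, assuming C++14 for make_unique):

#include <memory>

struct Shape {
    virtual ~Shape() = default;
    virtual std::unique_ptr<Shape> clone() const = 0;  // deep copy without slicing
};

struct Circle : Shape {
    double radius = 1.0;
    std::unique_ptr<Shape> clone() const override {
        return std::make_unique<Circle>(*this);        // copies the whole Circle
    }
};

int main() {
    std::unique_ptr<Shape> s = std::make_unique<Circle>();
    std::unique_ptr<Shape> copy = s->clone();          // no slicing: copy really is a Circle
}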
One half-solution is to check the typeid of both the source and the destination objects when assigning, and throw an exception if they do not match. Unfortunately this is not applicable to copy constructors: you cannot tell the type of the object being constructed; it will report Base even for a Derived.
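A sketch of that half-solution (my own names; note it only works because the base is polymorphic, so typeid sees the dynamic type):

#include <stdexcept>
#include <typeinfo>

struct Base {
    virtual ~Base() = default;
    Base& operator=(const Base& other) {
        if (typeid(*this) != typeid(other))
            throw std::runtime_error("slicing assignment detected");
        // ... copy Base's members ...
        return *this;
    }
};

struct Derived : Base { int extra = 0; };

int main() {
    Base b;
    Derived d;
    try { b = d; }                                   // typeid mismatch: Base vs Derived
    catch (const std::runtime_error&) { /* slicing caught at run time */ }
}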
Another solution is to disallow copying and assigning, by inheriting privately from boost::noncopyable or making the copy constructor and assignment operator private. The former disallows the compiler generated copy constructor and assignment operator from working in all subclasses as well, but you can define custom ones in subclasses.
Yet another solution is to make the copy constructor and assignment operator protected. This way you can still use them to ease the copying of subclasses, but an outsider can not accidentally slice an object this way.
Personally I derive privately from my own NonCopyable, which is almost the same as the Boost one. Also, when declaring value types, I publicly and virtually derive them from Final<T> (and ValueType) to prevent any kind of polymorphism. Only in DEBUG mode though, since they increase the size of objects, and the static structure of the program doesn't change anyway in release mode.
And I must repeat: object slicing can occur anywhere where you read the variables of Base and do something with them, be sure your code does not propagate it or behave incorrectly when this occurs.
Exactly what access does the base object have to m_base?
You can't do baseObj.m_base = x; It is a private member. You can only use public methods from the base class, so it is not much different to just creating a base object.
C++03 5.3.5.3
In the first alternative (delete object), if the static type of the operand is different from its dynamic type, the static type shall be a base class of the operand’s dynamic type and the static type shall have a virtual destructor or the behavior is undefined.
This is the theory. The question, however, is a practical one. What if the derived class adds no data members?
struct Base{
    //some members
    //no virtual functions, no virtual destructor
};

struct Derived:Base{
    //no more data members
    //possibly some more nonvirtual member functions
};

int main(){
    Base* p = new Derived;
    delete p; //UB according to the quote above
}
The question: is there any existing implementation on which this would really be dangerous?
If so, could you please describe how the internals are implemented in that implementation which makes this code crash/leak or whatever? I beg you to believe, I swear that I have no intentions to rely on this behavior :)
One example is if you provide a custom operator new in struct Derived. Obviously calling the wrong operator delete will likely produce devastating results.
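A sketch of that scenario (names are mine; the program is of course undefined behaviour, which is the point — the comments describe the practical mismatch):

#include <cstdlib>
#include <new>

struct Base {
    // non-virtual destructor
};

struct Derived : Base {
    // Imagine these managing a custom pool; here they just wrap malloc/free.
    static void* operator new(std::size_t n) { return std::malloc(n); }
    static void  operator delete(void* p)    { std::free(p); }
};

int main() {
    Base* p = new Derived;  // allocated by Derived::operator new
    delete p;               // UB: with no virtual destructor the deallocation function is chosen
                            // from the static type, so ::operator delete is called on memory
                            // that came from Derived's operator new
}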
I know of no implementation on which the above would be dangerous, and I think it unlikely that there ever will be such an implementation.
Here's why:
"undefined behaviour" is a catch-all phrase meaning (as everyone knows), anything could happen. The code could eat your lunch, or do nothing at all.
However, compiler writers are sane people, and there's a difference between undefined behaviour at compile time and undefined behaviour at run time. If I were writing a compiler for an implementation where the code snippet above was dangerous, it would be easy to catch and prevent at compile time. I could just say it's a compilation error (or warning, maybe): Error 666: Cannot derive from class with non-virtual destructor.
I think I'm allowed to do that, because the compiler's behaviour in this case is not defined by the standard.
I can't answer for specific compilers, you'd have to ask the compiler writers. Even if a compiler works now, it might not do so in the next version so I would not rely on it.
Do you need this behaviour?
Let me guess that
You want to be able to have a base class pointer without seeing the derived class and
Not have a v-table in Base and
Be able to clean up via the base class pointer.
If those are your requirements it is possible to do, with boost::shared_ptr or your own adaptation.
At the point you pass the pointer, you pass in a boost::shared_ptr with an actual "Derived" underneath. When it is deleted, it will use the deleter that was captured when the pointer was created, and that deleter performs the correct delete. You should probably give Base a protected destructor though, to be safe.
Note that there still is a v-table but it is in the shared pointer deleter base not in the class itself.
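A minimal sketch of that approach using std::shared_ptr (the standard equivalent of boost::shared_ptr here; the class names are mine): Base has no v-table and a protected destructor, yet cleanup through the shared_ptr<Base> is correct because the deleter was created for Derived.

#include <cstdio>
#include <memory>

struct Base {
protected:
    ~Base() {}                          // non-virtual and protected: no delete via Base*
};

struct Derived : Base {
public:
    ~Derived() { std::puts("~Derived"); }
};

std::shared_ptr<Base> make() {
    return std::make_shared<Derived>(); // the deleter captured here knows the object is a Derived
}

int main() {
    std::shared_ptr<Base> p = make();
}   // prints "~Derived": the stored deleter destroys a Derived, no virtual destructor needed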
To create your own adaptation, if you use boost::function and boost::bind you don't need a v-table at all. You just get your boost::bind to wrap the underlying Derived* and the function calls delete on it.
In your particular case, where you do not have any data members declared in the derived class, and if you do not have any custom new/delete operators (as mentioned by Sharptooth), you may not have any problems. But can you guarantee that no user will ever derive from your class? If you do not make Base's destructor virtual, there is no way for any of the classes derived from Derived to have their destructors called when objects of those classes are used via a Base pointer.
Also, there is a general notion that if you have virtual functions in your base class, the destructor should be made virtual. So better not surprise anybody :)
I totally agree with 'Roddy'.
Unless you're writing code for a perverted compiler designed for a non-existent virtual machine just to prove that so-called undefined behavior can bite - there's no problem.
The point of 'sharptooth' about custom new/delete operators is inapplicable here, because a virtual d'tor won't solve in any way the problem he/she describes.
However, it's a good point. It means that the model where you provide a virtual d'tor and thereby enable polymorphic object creation/deletion is defective by design.
A more correct design is to equip such objects with a virtual function that does two things at once: calls the (correct) destructor, and also frees the memory the way it should be freed. In simple words - destroy the object by the appropriate means, which are known to the object itself.
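A sketch of that design (my own names): no virtual destructor, but a virtual destroy() through which each object disposes of itself by the appropriate means.

#include <cstdio>

struct Base {
    virtual void destroy() = 0;   // "destroy this object the way it should be destroyed"
protected:
    ~Base() {}                    // non-virtual and protected: plain delete via a Base* won't compile
};

struct Derived : Base {
public:
    ~Derived() { std::puts("~Derived"); }
    void destroy() override { delete this; }  // runs ~Derived and Derived's own deallocation function
};

int main() {
    Base* p = new Derived;
    p->destroy();                 // correct cleanup without a virtual destructor
}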
Guideline #4 states:
A base class destructor should be either public and virtual, or protected and nonvirtual.
Probably I'm missing something, but what if I just create a concrete class that is not designed to be used as a base class? Should I declare its destructor public and virtual? By doing that I'm implicitly declaring that my class is "ready to be used as a base class", while this is not necessarily true.
The guideline specifically says "A base class destructor should be"...
The guidelines are only meant for a class which is designed to be used as a base class. If you are making a single, concrete class that will not be used as a base class, you should leave the destructor public and non-virtual.
If nothing else in your class is virtual, I don't think the destructor should be virtual either.
Consider it the other way around: do you know that absolutely no one will ever try to derive from your class, and when somebody does, do you think they will remember to take a closer look at your dtor? Sometimes people use inheritance over composition for a good reason (providing the full interface of your class without ugly getter syntax).
Another point for the virtual dtor is the Open/Closed Principle.
I'd go with the virtual dtor if you are not concerned with hard real-time performance or something alike.
The destructor SHALL BE virtual in any of the following cases:
Your class contains ANY virtual method.
Even if nothing else is virtual, you plan to use the class as a base.
Rare exception:
You are trying to save 4 bytes and a virtual table pointer is NOT an ACCEPTABLE cost (example - your class HAS to fit in 32 bits for some reason). But be prepared for hell.
Regarding public or protected - in general it is more a question of how you intend to control access to the destructor.
Your destructor only needs to be virtual if your class will be extended later. I'm not aware of a case where you'd want a protected/private destructor.
It's worth noting that if you have even one virtual method, you lose nothing (with most compilers) by making the destructor virtual as well (and it will protect you in case somebody extends the class later).
The advice refers to classes with virtual functions, intended to be polymorphic base classes. You have to make sure that if someone calls delete on a base class pointer, then the destructor of the actual class is called; otherwise, resources allocated by the derived classes won't be freed.
There are two ways to achieve this:
a public virtual destructor, so the correct destructor is found at runtime; or
a protected non-virtual destructor, which prevents calling delete on a base class pointer.
For a concrete class that won't be used as a base class, you will only ever call delete on a pointer to the actual type, so the advice doesn't apply. It should have a public non-virtual destructor if it needs one.