How often is non-public C++ inheritance used in practice? [duplicate] - c++

Possible Duplicate:
When should I use C++ private inheritance?
I can't think of any case I've derived from a class in a non-public way, and I can't recall off-hand seeing code which does this.
I'd like to hear real-world examples and patterns where it is useful.

Your mileage may vary...
The hard-core answer would be that non-public inheritance is useless.
Personally, I use it in either of two cases:
I would like to trigger the Empty Base Optimization if possible (usually, in template code with predicates passed as parameters)
I would like to override a virtual function in the class
In either case, I use private inheritance because the inheritance itself is an implementation detail.
I have seen people use private inheritance more liberally, and nearly systematically, instead of composition when writing wrappers or extending behaviors. C++ does not provide an "easy" delegation syntax, so inheriting privately lets you write using Base::method; to expose the method immediately instead of writing a proper forwarding call (and all its overloads). I would argue it is bad form, although it does save time.
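For the Empty Base Optimization case, here is a minimal sketch of the kind of code I mean; the policy class and names are made up for illustration (the policy plays the role of the predicate/functor parameter):
struct NoOpDeleter {                       // an empty policy class
    template <typename T>
    void operator()(T*) const {}
};

// Inheriting privately from the (possibly empty) policy lets the compiler
// apply the Empty Base Optimization: an empty Deleter adds no storage,
// whereas a Deleter data member would occupy at least one byte plus padding.
template <typename T, typename Deleter = NoOpDeleter>
class Holder : private Deleter {
public:
    explicit Holder(T* p) : ptr_(p) {}
    ~Holder() { static_cast<Deleter&>(*this)(ptr_); }
    T* get() const { return ptr_; }
private:
    T* ptr_;
};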

If you choose inheritance for developing a wrapper, private inheritance is the way to go. You no longer need or want access to your base class's methods and members from outside your wrapper class.
class B;

class A
{
public:
    A();
    void foo(B b);
};

class BWrap;

class AWrap : private A
{
public:
    AWrap();
    void foo(BWrap b);
};
//no longer want A::foo to be accessible by mistake here, so make it private

Since the only known use of private inheritance is implementation inheritance, and since that could always be done using containment instead (which is less convenient, but encapsulates the relationship better), I'd say it's used too often.
(Since nobody ever told me what protected inheritance means, let's assume nobody knows what it is and pretend it doesn't exist.)

Sometimes, when inheriting from classes that have neither virtual functions nor a virtual destructor (e.g. STL containers), you may have to go for non-public inheritance, e.g.
template<typename T>
struct MyVector : private std::vector<T>
{ ... };
This disallows handles (pointers or references) to the base (vector<>) from getting hold of the derived class (MyVector<>):
vector<int> *p = new MyVector<int>; // compiler error
...
delete p; // undefined behavior: ~vector() is not 'virtual'!
Since we get a compiler error at the first line itself, we are saved from the undefined behavior in the subsequent line.
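Building on that, a hedged sketch of what such a wrapper often looks like in practice, re-exporting only selected members of the private base with using-declarations (the selection here is arbitrary):
#include <vector>

template <typename T>
class MyVector : private std::vector<T> {
    using Base = std::vector<T>;
public:
    MyVector() = default;
    // Re-export only the parts of the interface we want:
    using Base::push_back;
    using Base::size;
    using Base::operator[];
    using Base::begin;
    using Base::end;
};

// MyVector<int> v;
// v.push_back(42);                          // fine: re-exported
// std::vector<int>* p = new MyVector<int>;  // still a compile error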

If you are deriving from a class without a virtual destructor, then public inheritance leads to a chance that users of the class can call delete on a pointer-to-base, which leads to undefined behaviour.
In such a scenario it makes sense to use private inheritance.
The most common example of this is to derive privately from STL containers, which do not have virtual destructors.
The C++ FAQ has an excellent example of private inheritance which extends to many real-life scenarios.
A legitimate, long-term use for private inheritance is when you want to build a class Fred that uses code in a class Wilma, and the code from class Wilma needs to invoke member functions from your new class, Fred. In this case, Fred calls non-virtuals in Wilma, and Wilma calls (usually pure virtuals) in itself, which are overridden by Fred. This would be much harder to do with composition.
Code Example:
#include <iostream>

class Wilma {
protected:
    void fredCallsWilma()
    {
        std::cout << "Wilma::fredCallsWilma()\n";
        wilmaCallsFred();
    }
    virtual void wilmaCallsFred() = 0; // A pure virtual function
};

class Fred : private Wilma {
public:
    void barney()
    {
        std::cout << "Fred::barney()\n";
        Wilma::fredCallsWilma();
    }
protected:
    virtual void wilmaCallsFred()
    {
        std::cout << "Fred::wilmaCallsFred()\n";
    }
};

Non-public (almost always private) inheritance is used when inheriting
(only) behavior, and not interface. I've used it mostly, but not
exclusively, in mixins.
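To give the flavor, here is a hedged sketch of a behavior-only mixin used through private inheritance; all names are invented for illustration:
#include <cstddef>

// A mixin that contributes behavior (instance counting), not interface.
template <typename Derived>
class Counted {
protected:
    Counted() { ++count_; }
    Counted(const Counted&) { ++count_; }
    ~Counted() { --count_; }
public:
    static std::size_t live() { return count_; }
private:
    static std::size_t count_;
};

template <typename Derived>
std::size_t Counted<Derived>::count_ = 0;

// The inheritance is an implementation detail, so it is private; the one
// piece of the mixin we do want to expose is pulled in with a using-declaration.
class Widget : private Counted<Widget> {
public:
    using Counted<Widget>::live;
};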
For a good discussion on the topic, you might want to read Barton and
Nackman (Scientific and Engineering C++: An Introduction with Advanced
Techniques and Examples, ISBN 0-201-53393-6. Despite the name, large parts of the book are
applicable to all C++, not just scientific and engineering
applications. And despite its date, it's still worth reading.)

Related

multiple inheritance is convenient?

I want to implement an Algorithm class which uses some utility classes, but one class may need a member variable or function of another utility class. So instead of composition, is it better to use inheritance as below?
class A {
public:
    void setA(int var) { a = var; }
    int a;
};

class B {
public:
    void foo(int var) {
        if (var == 1) {
            //bla bla...
        } else {
            //bik bik...
        }
    }
};

class Algo : public A, public B {
public:
    void run() {
        setA(1);
        foo(a);
    }
};
Your class Algo should only inherit from A and B if it is a true IS-A relationship to A and B. If you are just wanting to use functionality from A or B, consider composition instead (or at least private inheritance).
For example, if I want to create a class, and that class needs to do some logging, then my class HAS-A logger, but it's not the case that it IS-A logger. Thus I wouldn't want to inherit from logger, but use composition instead.
In your case it doesn't make sense to use inheritance because Algo isn't an A or a B; it merely uses them.
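For the code in the question, a hedged sketch of the composition-based alternative could look like this:
class Algo {
public:
    void run() {
        a_.setA(1);
        b_.foo(a_.a);
    }
private:
    A a_;   // Algo HAS-A A
    B b_;   // Algo HAS-A B
};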
Inheritance is meant to express an "is a" relationship and should adhere to the Liskov substitution principle. Can you say Algo is an A and Algo is a B? In general I feel it's a bad idea for a derived class to muck with the base class's private variables (you may set them as protected, but they should probably be private). You can always write getters and setters. You can get into trouble with multiple inheritance in so many ways, and I think your approach, while convenient now, will lead you to program in a less maintainable way. I try to reserve inheritance for when I need to treat classes polymorphically; in most other cases I prefer composition.
I like this concept of inheritance:
Commonly thought of as a way to "reuse existing code" by creating a new class that inherits from another existing class. This way you can extend the functionality of an existing class w/o touching the existing class's code. But Herb Sutter has a bit of a different take on the use of inheritance--"Inherit, not to reuse, but to be reused. Don't inherit publicly to reuse code (that exists in the base class); inherit publicly in order to be reused (by existing code that already uses base objects polymorphically)." [C++ Coding Standards, p. 64]. He also says "In correct inheritance, a derived class models a special case of a more general base concept." [ibid, p. 66]
http://cpp.strombergers.com/
So if you don't need to reuse code polymorphically, or to model a special case of the base, it is better to use composition.
Don't pick inheritance over composition as long as you do not need polymorphism. In your case, as you don't have any virtual functions in classes A and B that need to be changed in class Algo, composition may be a better design choice.

Effective C++: discouraging protected inheritance?

I was reading Scott Meyers' Effective C++ (third edition), and in a paragraph in Item 32: Make sure public inheritance is "is-a" on page 151 he makes the comment (which I've put in bold):
This is true only for public inheritance. C++ will behave as I've described only if Student is publicly derived from Person. Private inheritance means something entirely different (see Item 39), and protected inheritance is something whose meaning eludes me to this day.
The question: how should I interpret this comment? Is Meyers trying to convey that protected inheritance is seldom considered useful and should be avoided?
(I've read the question
Difference between private, public, and protected inheritance as well as C++ FAQ Lite's private and protected inheritance section, both of which explain what protected inheritance means, but hasn't given me much insight into when or why it would be useful.)
Some scenarios where you'd want protected:
You have a base class with methods whose functionality you know you never want to expose outside, but which you know will be useful for any derived class.
You have a base class with members that should be logically used by any class that extends that class, but should never be exposed outside.
Thanks to multiple inheritance you can play around with the base classes' inheritance types and construct a more diverse class with existing logic and implementation.
A more concrete example:
You could create a few abstract classes that follow Design Pattern logic, lets say you have:
Observer
Subject
Factory
Now you want these all to be public, because in general, you could use the pattern on anything.
But, with protected inheritance, you can create a class that is an Observer and Subject, but only protected factory, so the factory part is only used in inherited classes. (Just chose random patterns for the example)
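A hedged sketch of that idea, with the pattern classes reduced to bare stubs for illustration:
class Observer { public: virtual void notify() {} virtual ~Observer() {} };
class Subject  { public: virtual void attach(Observer*) {} virtual ~Subject() {} };
class Factory  { public: virtual void* create() { return nullptr; } virtual ~Factory() {} };

// Publicly an Observer and a Subject, but the Factory part is visible
// only to classes deriving from Widget, not to outside users.
class Widget : public Observer, public Subject, protected Factory {
};

class SpecialWidget : public Widget {
    void build() { create(); }   // fine: Factory is still usable in derived classes
};

// void client(Widget& w) { w.create(); }  // error: Factory::create is inaccessible here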
Another example:
Let's say, for example, you want to inherit from a library class (not that I encourage it): you want to make your own cool extension of std::list<> or a "better" shared_ptr.
You could derive protectedly from the base class (which is designed to have public methods).
This will give you the option to use your own custom methods, use the class's logic, and pass the logic on to any derived class.
You could probably use encapsulation instead, but inheritance follows the proper logic of IS A
(or in this case IS sort of A)
He isn't exactly discouraging protected inheritance; he just says that he hasn't found any good use for it. I haven't seen anyone else find one elsewhere either.
If you happen to find a couple of really useful use cases, you might have material to write a book too. :-)
Yup, there aren't many uses for protected or private inheritance. If you ever think about private inheritance, chances are composition is better suited for you. (Inheritance means 'is-a' and composition means 'has-a'.)
My guess is that the C++ committee simply added this in because it was very easy to do and they figured, "heck, maybe someone will find a good use for this". It's not a bad feature, it doesn't do any harm, just that no one has found any real use for it yet. :P
Yes and no. I myself think that protected inheritance is a bad feature too. It basically imports all the base class's public and protected members as protected members.
I usually avoid protected members, but on the lowest levels requiring extreme efficiency with a compiler with bad link-time optimization they are useful. But everything built on that shouldn't be messing with the original base class's (data) members and use the interface instead.
What I think Scott Meyer is trying to say, is that you can still use protected inheritance if it solves a problem, but make sure to use comments to describe the inheritance because it's not semantically clear.
I don't know if Meyers is advising us to refrain from using protected inheritance, but it seems you should avoid it if Meyers is on your team, because at least he won't be able to understand your code.
Unless your co-workers know C++ better than he does, you should probably also stay away from protected inheritance. Everybody will understand the alternative, i.e. using composition.
I can imagine however a case where it could make sense: You need access to a protected member in a class whose code you can't change but you don't want to expose an IS-A relationship.
class A {
protected:
    void f(); // <- I want to use this from B!
};

class B : protected A {
private:
    void g() { f(); }
};
Even in that case, I would still consider making a wrapper with public inheritance that exposes the protected base members with public methods, then compose with these wrappers.
/*! Careful: exposes A::f, which wasn't meant to be public by its authors */
class AWithF : public A {
public:
    void f() { A::f(); }
};

class B {
private:
    AWithF m_a;
    void g() { m_a.f(); }
};

Use-cases of pure virtual functions with body?

I recently came to know that in C++ pure virtual functions can optionally have a body.
What are the real-world use cases for such functions?
The classic is a pure virtual destructor:
class abstract {
public:
    virtual ~abstract() = 0;
};

abstract::~abstract() {}
You make it pure because there's nothing else to make so, and you want the class to be abstract, but you have to provide an implementation nevertheless, because the derived classes' destructors call yours explicitly. Yeah, I know, a pretty silly textbook example, but as such it's a classic. It must have been in the first edition of The C++ Programming Language.
Anyway, I can't remember ever really needing the ability to implement a pure virtual function. To me it seems the only reason this feature is there is because it would have had to be explicitly disallowed and Stroustrup didn't see a reason for that.
If you ever feel you need this feature, you're probably on the wrong track with your design.
Pure virtual functions with or without a body simply mean that the derived types must provide their own implementation.
Pure virtual function bodies in the base class are useful if your derived classes want to call your base class implementation.
One reason that an abstract base class (with a pure virtual function) might provide an implementation for a pure virtual function it declares is to let derived classes have an easy 'default' they can choose to use. There isn't a whole lot of advantage to this over a normal virtual function that can be optionally overridden - in fact, the only real difference is that you're forcing the derived class to be explicit about using the 'default' base class implementation:
#include <cstdio>

class foo {
public:
    virtual int interface();
};

int foo::interface()
{
    printf("default foo::interface() called\n");
    return 0;
}

class pure_foo {
public:
    virtual int interface() = 0;
};

int pure_foo::interface()
{
    printf("default pure_foo::interface() called\n");
    return 42;
}

//------------------------------------
class foobar : public foo {
    // no need to override to get default behavior
};

class foobar2 : public pure_foo {
public:
    // need to be explicit about the override, even to get default behavior
    virtual int interface();
};

int foobar2::interface()
{
    // foobar2 is lazy; it'll just use pure_foo's default
    return pure_foo::interface();
}
I'm not sure there's a whole lot of benefit - maybe in cases where a design started out with an abstract class, then over time found that a lot of the derived concrete classes were implementing the same behavior, so they decided to move that behavior into a base class implementation for the pure virtual function.
I suppose it might also be reasonable to put common behavior into the pure virtual function's base class implementation that the derived classes might be expected to modify/enhance/augment.
One use case is calling the pure virtual function from the constructor or the destructor of the class.
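To illustrate (with invented names): from within a constructor or destructor the call has to be explicitly qualified, since virtual dispatch to the derived override is not available there; the qualified call then uses the body that the pure virtual function provides.
#include <iostream>

class Connection {
public:
    virtual ~Connection() {
        // Virtual dispatch to the derived override is no longer possible
        // here, so the call is qualified; it uses the pure virtual's body.
        Connection::log("closing");
    }
    virtual void log(const char* msg) = 0;   // pure, but with a body below
};

void Connection::log(const char* msg) {
    std::cout << "Connection: " << msg << '\n';
}

class TcpConnection : public Connection {
public:
    void log(const char* msg) override {
        std::cout << "TcpConnection: " << msg << '\n';
    }
};

int main() {
    TcpConnection c;    // prints "Connection: closing" when c is destroyed
}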
The almighty Herb Sutter, former chair of the C++ standard committee, did give 3 scenarios where you might consider providing implementations for pure virtual methods.
Gotta say that personally – I find none of them convincing, and generally consider this to be one of C++'s semantic warts. It seems C++ goes out of its way to build and tear apart abstract-parent vtables, then briefly exposes them only during child construction/destruction, and then the community experts unanimously recommend never to use them.
The only difference between a virtual function with a body and a pure virtual function with a body is that the existence of the second prevents instantiation. You can't otherwise mark a class abstract in C++.
This question can really be confusing when learning OOD and C++. Personally, one thing that kept coming to my mind was something like:
If I needed a pure virtual function to also have an implementation, why make it "pure" in the first place? Why not just leave it "virtual" and have derivatives both benefit from and override the base implementation?
The confusion comes from the fact that many developers consider the absence of a body/implementation to be the primary goal/benefit of defining a pure virtual function. This is not true!
The absence of a body is in most cases a logical consequence of having a pure virtual function. The main benefit of having a pure virtual function is defining a contract: by defining a pure virtual function, you want to force every derivative to always provide its own implementation of the function. This "contract aspect" is very important, especially if you are developing something like a public API. Making the function only virtual is not sufficient, because derivatives are no longer forced to provide their own implementation, and therefore you may lose the contract aspect (which can be limiting in the case of a public API).
As commonly said :
"Virtual functions can be overrided, Pure Virtual functions must be overrided."
And in most cases, contracts are abstract concepts so it doesn't make sense for the corresponding pure virtual functions to have any implementation.
But sometimes, and because life is weird, you may want to establish a strong contract among derivatives and also want them to somehow benefit from some default implementation while specifying their own behavior for the contract. Even though most book authors recommend avoiding getting yourself into these situations, the language needed to provide a safety net to prevent the worst! A simple virtual function wouldn't be enough, since there would be a risk of escaping the contract. So the solution C++ provided was to allow pure virtual functions to also provide a default implementation.
The Sutter article cited above gives interesting use cases of having Pure Virtual functions with body.

Private virtual method in C++

What is the advantage of making a private method virtual in C++?
I have noticed this in an open source C++ project:
class HTMLDocument : public Document, public CachedResourceClient {
private:
    virtual bool childAllowed(Node*);
    virtual PassRefPtr<Element> createElement(const AtomicString& tagName, ExceptionCode&);
};
Herb Sutter has very nicely explained it here.
Guideline #2: Prefer to make virtual functions private.
This lets the derived classes override the function to customize the
behavior as needed, without further exposing the virtual functions
directly by making them callable by derived classes (as would be
possible if the functions were just protected). The point is that
virtual functions exist to allow customization; unless they also need
to be invoked directly from within derived classes' code, there's no
need to ever make them anything but private
If the method is virtual it can be overridden by derived classes, even if it's private. When the virtual method is called, the overridden version will be invoked.
(Opposed to Herb Sutter quoted by Prasoon Saurav in his answer, the C++ FAQ Lite recommends against private virtuals, mostly because it often confuses people.)
Despite all of the calls to declare a virtual member private, the argument simply doesn't hold water. Frequently, a derived class's override of a virtual function will have to call the base class version. It can't if it's declared private:
class Base
{
private:
    int m_data;

    virtual void cleanup() { /* do something */ }
protected:
    Base(int idata) : m_data(idata) {}
public:
    int data() const { return m_data; }
    void set_data(int ndata) { m_data = ndata; cleanup(); }
};

class Derived : public Base
{
private:
    void cleanup() override
    {
        // do other stuff
        Base::cleanup(); // nope, can't do it
    }
public:
    Derived(int idata) : Base(idata) {}
};
You have to declare the base class method protected.
Then, you have to take the ugly expedient of indicating via a comment that the method should be overridden but not called.
class Base
{
    ...
protected:
    // chained virtual function!
    // call in your derived version but nowhere else.
    // Use set_data instead
    virtual void cleanup() { /* do something */ }
    ...
Thus Herb Sutter's guideline #3...But the horse is out of the barn anyway.
When you declare something protected you're implicitly trusting the writer of any derived class to understand and properly use the protected internals, just the way a friend declaration implies a deeper trust for private members.
Users who get bad behavior from violating that trust (e.g. labeled 'clueless' by not bothering to read your documentation) have only themselves to blame.
Update: I've had some feedback that claims you can "chain" virtual function implementations this way using private virtual functions. If so, I'd sure like to see it.
The C++ compilers I use definitely won't let a derived class implementation call a private base class implementation.
If the C++ committee relaxed "private" to allow this specific access, I'd be all for private virtual functions. As it stands, we're still being advised to lock the barn door after the horse is stolen.
I first came across this concept while reading Scott Meyers' 'Effective C++', Item 35: Consider alternatives to virtual functions. I wanted to reference Scott Meyers for others that may be interested.
It's part of the Template Method Pattern via the Non-Virtual Interface idiom: the public facing methods aren't virtual; rather, they wrap the virtual method calls which are private. The base class can then run logic before and after the private virtual function call:
public:
    void NonVirtualCalc(...)
    {
        // Setup
        PrivateVirtualCalcCall(...);
        // Clean up
    }
I think that this is a very interesting design pattern and I'm sure you can see how the added control is useful.
Why make the virtual function private? The best reason is that we've already provided a public facing method.
Why not simply make it protected so that I can use the method for other interesting things? I suppose it will always depend on your design and how you believe the base class fits in. I would argue that the derived class maker should focus on implementing the required logic; everything else is already taken care of. Also, there's the matter of encapsulation.
From a C++ perspective, it's completely legitimate to override a private virtual method even though you won't be able to call it from your class. This supports the design described above.
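A slightly fuller, hedged sketch of the idiom; the class names are invented:
#include <iostream>

class Task {
public:
    // Public, non-virtual interface: the surrounding logic is fixed here once.
    void run() {
        std::cout << "setup\n";
        doRun();                       // customization point
        std::cout << "cleanup\n";
    }
    virtual ~Task() = default;
private:
    // Private virtual: derived classes override it but cannot call it.
    virtual void doRun() = 0;
};

class PrintTask : public Task {
private:
    void doRun() override { std::cout << "PrintTask::doRun\n"; }
};

int main() {
    PrintTask t;
    t.run();    // prints: setup / PrintTask::doRun / cleanup
}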
I use them to allow derived classes to "fill in the blanks" for a base class without exposing such a hole to end users. For example, I have highly stateful objects deriving from a common base, which can only implement 2/3 of the overall state machine (the derived classes provide the remaining 1/3 depending on a template argument, and the base cannot be a template for other reasons).
I NEED to have the common base class in order to make many of the public APIs work correctly (I'm using variadic templates), but I cannot let that object out into the wild. Worse, if I leave the craters in the state machine- in the form of pure virtual functions- anywhere but in "Private", I allow a clever or clueless user deriving from one of its child classes to override methods that users should never touch. So, I put the state machine 'brains' in PRIVATE virtual functions. Then the immediate children of the base class fill in the blanks on their NON-virtual overrides, and users can safely use the resulting objects or create their own further derived classes without worrying about messing up the state machine.
As for the argument that you shouldn't HAVE public virtual methods, I say BS. Users can improperly override private virtuals just as easily as public ones- they're defining new classes after all. If the public shouldn't modify a given API, don't make it virtual AT ALL in publicly accessible objects.

Is it possible to forbid deriving from a class at compile time?

I have a value class according to the description in "C++ Coding Standards", Item 32. In short, that means it provides value semantics and does not have any virtual methods.
I don't want a class to derive from this class. Among other reasons, one is that it has a public nonvirtual destructor. But a base class should have a destructor that is public and virtual or protected and nonvirtual.
I don't know a possibility to write the value class, such that it is not possible to derive from it. I want to forbid it at compile time. Is there perhaps any known idiom to do that? If not, perhaps there are some new possibilities in the upcoming C++0x? Or are there good reasons that there is no such possibility?
Bjarne Stroustrup has written about this here.
The relevant bit from the link:
Can I stop people deriving from my class?
Yes, but why do you want to? There are two common answers:
for efficiency: to avoid my function
calls being virtual.
for safety: to ensure that my class is not used as a
base class (for example, to be sure
that I can copy objects without fear
of slicing)
In my experience, the efficiency reason is usually misplaced fear. In C++, virtual function calls are so fast that their real-world use for a class designed with virtual functions does not produce measurable run-time overheads compared to alternative solutions using ordinary function calls. Note that the virtual function call mechanism is typically used only when calling through a pointer or a reference. When calling a function directly for a named object, the virtual function call overhead is easily optimized away.
If there is a genuine need for "capping" a class hierarchy to avoid virtual function calls, one might ask why those functions are virtual in the first place. I have seen examples where performance-critical functions had been made virtual for no good reason, just because "that's the way we usually do it".
The other variant of this problem, how to prevent derivation for logical reasons, has a solution. Unfortunately, that solution is not pretty. It relies on the fact that the most derived class in a hierarchy must construct a virtual base. For example:
class Usable;

class Usable_lock {
    friend class Usable;
private:
    Usable_lock() {}
    Usable_lock(const Usable_lock&) {}
};

class Usable : public virtual Usable_lock {
    // ...
public:
    Usable();
    Usable(char*);
    // ...
};

Usable a;

class DD : public Usable { };

DD dd; // error: DD::DD() cannot access
       // Usable_lock::Usable_lock(): private member
(from D&E sec 11.4.3).
If you are willing to only allow the class to be created by a factory method you can have a private constructor.
class underivable {
    underivable() { }
    underivable(const underivable&);            // not implemented
    underivable& operator=(const underivable&); // not implemented
public:
    static underivable create() { return underivable(); }
};
Even though the question is not tagged C++11, for people who get here it should be mentioned that C++11 supports the new contextual identifier final. See the wiki page.
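For reference, a minimal sketch of what that looks like:
class Value final {    // C++11: no class may derive from Value
public:
    int x = 0;
};

// class Derived : public Value {};   // error: cannot derive from 'final' class Value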
Take a good look here.
It's really cool but it's a hack.
Wonder for yourself why the stdlib doesn't do this with its own containers.
Well, I had a similar problem; it is posted here on SO. The problem was the other way around, i.e. only allow derivation by the classes that you permit. Check whether it solves your problem.
This is done at compile time.
I would generally achieve this as follows:
// This class is *not* suitable for use as a base class
The comment goes in the header and/or in the documentation. If clients of your class don't follow the instructions on the packet, then in C++ they can expect undefined behavior. Deriving without permission is just a special case of this. They should use composition instead.
Btw, this is slightly misleading: "a base class should have a destructor that is public and virtual or protected and nonvirtual".
That's true for classes which are to be used as bases for runtime polymorphism. But it's not necessary if derived classes are never going to be referenced via pointers to the base class type. It might be reasonable to have a value type which is used only for static polymorphism, for instance with simulated dynamic binding. The confusion is that inheritance can be used for different purposes in C++, requiring different support from the base class. It means that although you don't want dynamic polymorphism with your class, it might nevertheless be fine to create derived classes provided they're used correctly.
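As a hedged illustration of that kind of static polymorphism (simulated dynamic binding via the curiously recurring template pattern; the names are invented):
#include <iostream>

// The base is only ever used through the concrete derived type itself,
// never deleted through a Shape<T>*, so its public non-virtual destructor
// is not a problem here.
template <typename Derived>
class Shape {
public:
    void draw() const {
        static_cast<const Derived&>(*this).drawImpl();
    }
};

class Circle : public Shape<Circle> {
public:
    void drawImpl() const { std::cout << "Circle\n"; }
};

int main() {
    Circle c;
    c.draw();   // statically dispatches to Circle::drawImpl
}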
This solution doesn't work, but I leave it as an example of what not to do.
I haven't used C++ for a while now, but as far as I remember, you get what you want by making the destructor private.
UPDATE:
On Visual Studio 2005 you'll get either a warning or an error. Check the following code:
class A
{
public:
    A() {}
private:
    ~A() {}
};

class B : A
{
};
Now,
B b;
will produce an error "error C2248: 'A::~A' : cannot access private member declared in class 'A'"
while
B *b = new B();
will produce warning "warning C4624: 'B' : destructor could not be generated because a base class destructor is inaccessible".
It looks like a half-solution, BUT as orsogufo pointed out, doing so makes class A unusable. Leaving the answer here anyway.