Example for non-virtual multiple inheritance - C++

Is there a real-world example where non-virtual multiple inheritance is being used? I'd like to have one mostly for didactic reasons. Slapping around classes named A, B, C, and D, where B and C inherit from A and D inherits from B and C is perfectly fine for explaining the question "Does/Should a D object have one or two A sub-objects?", but bears no weight about why we even have both options. Many examples care about why we do want virtual inheritance, but why would we not want virtual inheritance?
I know what virtual base classes are and how to express that stuff in code. I know about diamond inheritance and examples of multiple inheritance with a virtual base class are abundant.
The best I could find is vehicles. The base class is Vehicle, which is inherited by Car and Boat. Among other things, a Vehicle has occupants() and a max_speed(). So an Amphibian that inherits from both Car and Boat inherits a different max_speed() on land and on water, which makes sense, but also different occupants(), which does not make sense. So the Vehicle sub-objects aren't really independent; that is another problem which might be interesting to solve, but it is not the question.
Is there an example, that makes sense as a real-world model, where the two sub-objects are really independent?

You're thinking like an OOP programmer, trying to design abstract models of things. C++ multiple inheritance, like many things in C++, is a tool that has a particular effect. Whether it maps onto some OOP model is irrelevant next to the utility of the tool itself. To put it another way, you don't need a "real-world model" to justify non-virtual inheritance; you just need a real-world use case.
Because a derived class inherits the members of a base class, inheritance often is used in C++ as a means of collecting a set of common functionality together, sometimes with minimal interaction from the derived class, and injecting this functionality directly into the derived class.
The Curiously Recurring Template Pattern and other mixin-like constructs are mechanisms for doing this. The idea is that you have a base class that is a template, and its template parameter is the derived class that uses it. This allows the base class to have some access to the derived class itself without virtual functions.
The simplest example I can think of in C++ is enable_shared_from_this, which allows an object whose lifetime is currently managed by a shared_ptr to retrieve a shared_ptr to that object just from a pointer/reference to that object. It uses CRTP to add to the derived class the various members and interfaces needed to make shared_from_this possible. And since the inheritance is public, it also allows shared_ptr's various functions that "enable shared_from_this" to detect that a particular type has the shared_from_this machinery in it and to properly initialize it.
enable_shared_from_this doesn't need virtual inheritance, and indeed would probably not work very well with it.
Now imagine that I have some other CRTP class that injects some other functionality into an object. This functionality has nothing to do with shared_ptr, but it uses CRTP and inheritance.
If I now write some type that wants to inherit from both enable_shared_from_this and this other functionality, that works just fine. There is no need for virtual inheritance, and in fact using it would only make composition that much harder.
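As a rough sketch of that kind of composition (the instance_counted mixin is invented for illustration and assumes C++17 for the inline static member), two completely independent bases compose cleanly with plain, non-virtual multiple inheritance:
#include <cstddef>
#include <memory>

// Hypothetical CRTP mixin that counts live instances of Derived.
template <typename Derived>
class instance_counted {
public:
    static std::size_t live_count() { return count_; }
protected:
    instance_counted() { ++count_; }
    instance_counted(const instance_counted&) { ++count_; }
    ~instance_counted() { --count_; }   // non-virtual: never deleted through this base
private:
    static inline std::size_t count_ = 0;
};

// The two base sub-objects have nothing in common, so there is
// nothing to share and no reason for virtual inheritance.
class session
    : public std::enable_shared_from_this<session>
    , public instance_counted<session> {
public:
    std::shared_ptr<session> self() { return shared_from_this(); }
};

int main() {
    auto s = std::make_shared<session>();
    std::shared_ptr<session> again = s->self();   // fine: s is owned by a shared_ptr
    return instance_counted<session>::live_count() == 1 ? 0 : 1;
}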
Virtual inheritance is not free. It fundamentally changes a bunch of things about how a type relates to its base classes. If you inherit from such a type, your constructors have to initialize any virtual base classes directly. The layout of such a type is very odd and is highly unlikely to be standardized. And various other things. C++ tries not to make programmers pay for functionality they don't use, so if you don't need the special properties of virtual inheritance, you shouldn't be using it.

It's the same reason C++ has non-virtual methods -- the implementation is simpler and more efficient if you use non-virtual inheritance, so you need to explicitly ask for virtual inheritance if you want it. Since you don't need it if your classes never use multiple inheritance, that is the default.

Related

Should classes be final by default?

Quoting Effective C++ (Scott Meyers), third edition, item 7:
Declare destructors virtual in polymorphic base classes.
This implies that classes intended to be inherited from should, at a minimum, have the destructor virtual.
When writing applications/libraries, some classes (I would say quite a few) are not designed with the intention of being inherited from. We usually rely on conventions: one is not supposed to inherit from classes that are not one's own, or at least not without checking whether it is safe to do so.
Now, a coding standard may require writing and designing classes such that inheritance is always safe. I feel like this might be too much. C++11 added the final keyword, which makes sure classes are not inherited from. Would you recommend marking all classes that are not designed for inheritance as final by default?
This would make the compiler enforce what we have been doing by convention for ages. But that convention might be considered well enough understood and easy to follow (especially since inheritance is usually avoided, with composition being preferred) that it might be just another thing to take care of while writing the code.
Just because a class does not have any virtual functions does not mean that inheriting from it does not make sense. It just means it was probably not intended and that you have to be careful not to delete any instances of the derived objects through pointers to the base class. private inheritance would often be used in such a case, as an alternative to composition, to signal that inheriting from the base is purely an implementation detail and the derived class is not to be considered as having a "is-a" relationship with the base.
Marking a class final is a great way to express that a class is not intended to be derived from, at all, by making it impossible. So if that's what you want, slap a final on the class.
In some cases, marking a (derived) class final to prevent any further derivatives can also have the added benefit of making it easier for the compiler to devirtualize function calls, thus leading to better performance in some cases.
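A minimal sketch of both uses (the class names are made up):
struct Value final {              // not meant to be a base class: deriving is now a compile error
    int get() const { return 42; }
};
// struct MyValue : Value {};     // error: cannot derive from 'final' base 'Value'

struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

struct Circle final : Shape {     // no further derivation possible...
    double radius = 1.0;
    double area() const override { return 3.14159 * radius * radius; }
};

double area_of(const Circle& c) {
    return c.area();              // ...so the compiler may call Circle::area directly (devirtualization)
}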
In my opinion, final should (usually) be the default and you can then remove it later if you find that you need/want to derive from the class after all.
But that's just opinion.

If a class might be inherited, should every function be virtual?

In C++, a coder doesn't know whether other coders will inherit his class. Should he make every function in that class virtual? Are there any drawbacks? Or is it just not acceptable at all?
In C++, you should only make a class inheritable if you intend for it to be used polymorphically. The way that you treat polymorphic objects in C++ is very different from how you treat other objects. You don't tend to put polymorphic classes on the stack, or pass them to or return them from functions by value, since this can lead to slicing. Polymorphic objects tend to be heap-allocated, passed around and returned by pointer or by reference, etc.
If you design a class to not be inherited from and then inherit from it, you cause all sorts of problems. If the destructor isn't marked virtual, you can't delete the object through a base class pointer without causing undefined behavior. Without the member functions marked virtual, they can't be overridden in a derived class.
As a general rule in C++, when you design a class, determine whether you want it to be inherited from. If you do, mark the appropriate functions virtual and give it a virtual destructor. You might also disable the copy assignment operator to avoid slicing. Similarly, if you want the class not to be inheritable, don't give it any of these functions. In most cases it's a logic error to inherit from a class that wasn't designed to be inherited from, and most of the time you can use composition instead of inheritance to achieve the same effect.
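For instance, a class deliberately designed for inheritance might look roughly like this (the names are invented; this is a sketch, not a prescription):
#include <memory>

class Codec {                                // designed to be inherited from
public:
    virtual ~Codec() = default;              // safe to delete through Codec*
    virtual void encode() = 0;               // extension point
    Codec(const Codec&) = delete;            // copying disabled to prevent slicing
    Codec& operator=(const Codec&) = delete;
protected:
    Codec() = default;
};

class JsonCodec : public Codec {
public:
    void encode() override { /* ... */ }
};

int main() {
    std::unique_ptr<Codec> c = std::make_unique<JsonCodec>();  // used polymorphically, via pointer
    c->encode();
}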
No, not usually.
A non-virtual function enforces class-invariant behavior. A virtual function doesn't. As such, the person writing the base class should think about whether the behavior of a particular function is/should be class invariant or not.
While it's possible for a design to allow all behaviors to vary in derived classes, it's fairly unusual. It's usually a pretty good clue that the person who wrote the class either didn't think much about its design or lacked the resolve to make a decision.
In C++ you design your class to be used either as a value type or a polymorphic type. See, for example, C++ FAQ.
If you are making a class to be used by other people, you should put a lot of thought into your interface and try to work out how your class will be used. Then make decisions like which functions should be virtual.
Or better yet, write a test case for your class, using it the way you expect it to be used, and then make the interface work for that. You might be surprised by what you find out. Things you thought were absolutely necessary might turn out to be rarely needed, and things you thought were not going to be used might turn out to be the most useful methods. Doing it this way around will save you time in the long run by avoiding unnecessary work, and you will end up with a more solid design.
Jerry Coffin and Dominic McDonnell have already covered the most important points.
I'll just add an observation: in the time of MFC (mid 1990s) I was very annoyed with the lack of ways to hook into things. For example, the documentation suggested copying MFC's source code for printing and modifying it, instead of overriding behavior, because nothing was virtual there.
There are of course a zillion+1 ways to provide "hooks", but virtual methods are one easy way. They're needed in badly designed classes, so that the client code can fix things, but in those badly designed classes the methods are not virtual. For classes with better design there is not so much need to override behavior, so for those classes making methods virtual by default (and non-virtual only as an active choice) can be counter-productive; as Jerry remarked, virtuals provide opportunities for derived classes to screw up.
There are design patterns that can be employed to minimize the possibilities of screw-ups.
For example, wrapping internal virtuals in exposed non-virtual methods with sanity checks, and, for example, using decoupled event handling (where appropriate) instead of virtuals.
Cheers & hth.,
When you create a class, and you want that class to be used polymorphically you have to consider that the class has two different interfaces. The user interface is defined by the set of public functions that are available in your base class, and that should pretty much cover all operations that users want to perform on objects of your class. This interface is defined by the access qualifiers, and in particular the public qualifier.
There is a second interface, that defines how your class is to be extended. At that level you have to think on what behavior you want to be overridden by extending classes, and what elements of your object you want to provide to extending classes. You offer access to derived classes by means of the protected qualifier, and you offer extension points by means of virtual functions.
You should try to follow the Non-Virtual Interface idiom whenever possible. That idiom (google for it) basically tries to fully separate the two interfaces by not having public virtual functions. Users call non-virtual functions, and those in turn call on configurable functionalities by means of protected/private virtual functions. This clearly separates extension points from the class interface.
There is a single case where virtual has to be part of the user interface: the destructor. If you want to offer your users the ability to destroy derived objects through pointers to the base, then you have to provide a virtual destructor. Otherwise you just provide a protected non-virtual one.
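A minimal sketch of the Non-Virtual Interface idiom (the class names are invented):
class Report {
public:
    // User interface: non-virtual, so it can enforce invariants around the customizable step.
    void print() {
        prepare();       // common, invariant work
        do_print();      // the configurable part
    }
    virtual ~Report() = default;   // the one virtual users need: destroying through Report*
private:
    void prepare() { /* validate state, open the output, ... */ }
    // Extension interface: derived classes override this; users never call it directly.
    virtual void do_print() = 0;
};

class PdfReport : public Report {
private:
    void do_print() override { /* render a PDF */ }
};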
He should code the functions as they are; he shouldn't make them virtual at all, in the circumstances you specify.
The reasons being:
1. The class coder obviously has a particular use in mind for the functions he is writing.
2. The derived class may or may not make use of these functions, as required.
3. Any function may be redefined in the derived class without any errors.

Proper inheritance

What is meant by proper inheritance?
This thread gives a nice summary:
Proper inheritance occurs when the derived class "IS A" specialized type of the base class. Example: a Cat IS AN Animal.
Improper inheritance occurs when a class is inherited from merely for code reuse, without having any other relationship. Example: Cat inherits from Engine. A Cat is not an Engine; however, both an engine and a cat purr.
I would like to add to what Justin and Baxter said.
The term proper inheritance is not really well-defined. Properly using inheritance is quite a subjective issue...
Consider the following example:
An interface: Bird
A concrete class: Ostrich
Should Ostrich inherit from Bird? From a zoological point of view it makes sense, but from a Computer Science point of view... not so much. If Bird has a fly method, then how am I supposed to implement Ostrich::fly?
There is somewhat of a war in the CS community. Indeed, you'll regularly see books where Circle inherits from Ellipse (or the other way around) when it doesn't really make sense from a CS point of view.
So my own little definition:
Considering that the interface defines precise semantics for each of its methods, a concrete class should only inherit from the interface if the implementation of each of the methods matches the semantics specified.
Perhaps first we should recall what the value of inheritance itself is. Inheritance is a mechanism for both code reuse and interface reuse (polymorphism). But you can have code reuse without inheritance by using composition and delegation. And in many languages you can have polymorphism without inheritance, but not all languages. In a strongly typed language such as C++, polymorphism, which is itself another important code reuse mechanism, can only be achieved using inheritance. In fact, C++ distinguishes between public and private inheritance for interface and code reuse respectively. More on this later.
Generally one thinks of a class's methods as creating a contract with clients of the class: it guarantees that as long as you call a method having met certain preconditions, the method will deliver certain results. "Proper" inheritance is such that a subclass instance can be substituted for its parent class without violating the parent's contract. That is, none of the subclass's methods should require stricter preconditions than the base class's methods or deliver "less" in their results. Returning to C++, since there is a clear distinction between public and private inheritance, where the former is required so that you may substitute a subclass instance for a base class instance, it is generally considered "proper" that the substitution be semantically correct. Otherwise why not use private inheritance?
Is Ostrich a valid subclass of Bird? It depends on what your abstraction of Bird is. If the Bird class has a method Bird::fly that does something tangible but Ostrich::fly overrides this to throw an exception, and thus delivers "less" than its base class, I would say no. But if class Bird did not have a fly method (there is another subclass Flying_Bird that does), no problem.
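In code, that second design might look roughly like this (the eat method is just a placeholder for whatever every bird can actually deliver):
class Bird {                         // promises only what every bird can deliver
public:
    virtual ~Bird() = default;
    virtual void eat() = 0;
};

class Flying_Bird : public Bird {    // flying is a separate, narrower contract
public:
    virtual void fly() = 0;
};

class Ostrich : public Bird {        // substitutable for Bird: it never promised to fly
public:
    void eat() override { /* ... */ }
};

class Sparrow : public Flying_Bird {
public:
    void eat() override { /* ... */ }
    void fly() override { /* ... */ }
};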
My answer is that what holds for C++ is not a bad guideline for any object-oriented language. In a language like Python, where an inheritance hierarchy is not required for polymorphism, people might have a greater tendency to use inheritance just for code reuse. If this reuse is well-documented, it might not be an issue. But when I see a class hierarchy, my expectation is that a subclass is potentially going to be used polymorphically, and thus an instance of the subclass can be substituted for an instance of the base class, thereby obeying the Liskov substitution principle, where a subclass is also a subtype.
When inheritance complies with an IS-A relationship, as opposed to inheriting purely for code reuse without there being a logical subsumption of the child by the superclass.

Private inheritance and composition, which one is best and why? [closed]

Suppose I have a class engin and I inherit a class car from the engin class:
class engin
{
public:
    engin(int nobofcylinders);
    void start();
};

class car : private engin
{
public:
    car() : engin(8) {}
    void start()
    {
        engin::start();
    }
};
Now the same can be done with composition. The question is: which approach would be best and is mostly used in programming, and why?
Composition is to be preferred for two main reasons:
the thing(s) being composed can have names
you can compose more than one thing of the same type
I prefer to think of inheritance as derived is a kind of base, which basically means public inheritance. In the case of private inheritance it's more like derived has a base, which IMHO doesn't sound right, because that's the job of composition, not inheritance of any kind. So, since private inheritance and composition essentially mean the same thing logically, which to choose? With the example you posted, I'd most certainly go for composition. Why? I tend to think of all kinds of inheritance as a kind-of relationship, and with the example you posted, I can't think of a situation where I could say a car is a kind of engine; it simply isn't. It's indeed the case that a car has an engine, so why would a car inherit from an engine? I see no reason.
Now, indeed there are cases where it's good to have private inheritance, namely boost::noncopyable: with its ctor/dtor being protected, you'd have a hard time instantiating it, and since we want our class to have a noncopyable part, that's the only way to go.
Some style guides (e.g. the Google C++ style guide) even recommend never using private inheritance, for reasons similar to what I have already written - private inheritance is just a bit confusing.
If you want to compare private inheritance with composition, read http://www.parashift.com/c++-faq-lite/private-inheritance.html#faq-24.3. I don't think private inheritance is good.
A Car has-an Engine, but a Car is-not-an Engine, so this is better done with composition.
Inheritance is useful for "is-a" relationships, e.g. a Bus is-a Vehicle, a Car is-a Vehicle, etc.
Composition is useful for "has-a" relationships, e.g. a Car has Wheel-s, a Car has-an Engine, etc.
So a logical code should be like
class Car : public Vehicle {
Engine engine;
Wheel wheels[4];
...
};
Private inheritance, despite the name, isn’t really inheritance – at least not from the outside (of the class), where it matters.
For that reason, different rules apply. In C++, private inheritance is said to model an “is implemented in terms of” relationship. Thus, a priority queue which is implemented in terms of a heap, could look like this:
template <typename T, typename Comp = std::less<T> >
class priority_queue : private heap<T, Comp> {
// …
};
Personally, I don’t see the advantage of this pattern, and Neil has already stated that in most cases, composition actually has the advantage over private inheritance.
One advantage exists, though: since it’s such an established pattern, the meaning of a private inheritance is immediately clear to a seasoned C++ programmer; the above code would tell them that the priority queue is implemented in terms of a heap – which wouldn’t be obvious if the class just happened to use a heap as one of its members.
Private inheritance tends to get used in C++ primarily for policy classes. The classical example is allocators, which determine how a container class manages storage internally:
template <typename T, typename A = std::allocator<T> >
class vector : private A {
// …
};
No harm done. But once again, this could also have been done using composition.
Usually, composition is to be preferred (others gave the major reasons), but private inheritance allows things which can't be done by composition:
zero-size base class optimization (an empty base class will not increase the size of a class, whereas an empty member will); that's the reason behind its use for policy classes, which often have no data members (see the sketch after this list)
controlling initialization order so that what is composed is initialized before a public base
overriding a virtual member in what is composed
with private virtual inheritance, ensuring that there is only one composed thing even if several bases do it
Note that for the latter two uses, the fact that the base class exists can be observed in a descendant.
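A sketch of the first point, the empty base optimization (the exact sizes printed are implementation-dependent):
#include <cstdio>

struct Policy {};                              // empty policy class, no data members

struct Composed { Policy p; int x; };          // an "empty" member still occupies at least one byte
struct Derived : private Policy { int x; };    // an empty private base typically adds no size

int main() {
    std::printf("%zu %zu\n", sizeof(Composed), sizeof(Derived));
    // commonly prints "8 4": the member costs a byte plus padding, the base costs nothing
}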
Composition is used more than private inheritance. The general rule I follow and would recommend is that unless you have a specific reason to use private inheritance you should use composition.
Composition has many benefits over private inheritance:
You can have more than one instance of a particular class.
You don't pollute your class' namespace with a bunch of private functions that don't make sense for your class.
You can give names to the parts of your object
Your class is less coupled to the classes it's composed of than it is to a class it inherits from
If you discover you need to swap out an object that you've included by composition during the lifetime of your object, you can; with private inheritance, you're stuck.
There are a lot of other benefits to composition. Basically, it's more flexible and cleaner.
There are reasons to use private inheritance though. They are very specific reasons, and if you think they apply, you should think carefully about your design to make sure you have to do it that way.
You can override virtual functions.
You can get access to protected members of the base class.
You need to pass yourself to something that wants an object of the class you're inheriting from (this usually goes hand-in-hand with overriding virtual functions; see the sketch after these lists).
And there are a few rather tricky ones as well:
If you use composition for a class that has 0 size, it still takes up space, but with private inheritance it doesn't.
You want to call a particular constructor for a virtual base class of the class you're going to privately inherit from.
If you want to initialize the private base before other base classes are initialized (with composition, all the variables in your class will be initialized after all your base classes are).
Using private virtual inheritance to make sure there's only one copy of a thing even when you have multiple base classes that have it. (In my opinion, this is better solved using pointers and normal composition.)
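A sketch of the first and third points, overriding a virtual and handing yourself to code that expects the base (Timer and Widget are invented names):
#include <cstdio>

class Timer {                      // some interface another component expects
public:
    virtual ~Timer() = default;
    void tick() { on_tick(); }
private:
    virtual void on_tick() = 0;
};

class Widget : private Timer {     // is-implemented-in-terms-of Timer; clients of Widget never see it
public:
    Timer& as_timer() { return *this; }   // pass ourselves to code that wants a Timer
private:
    void on_tick() override { std::printf("widget refresh\n"); }
};

int main() {
    Widget w;
    w.as_timer().tick();           // dispatches to Widget::on_tick
}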
Private inheritance means is-implemented-in-terms-of. It's usually inferior to composition, but it makes sense when a derived class needs access to protected base class members or needs to redefine inherited virtual functions.
Unlike composition, private inheritance can enable the empty base optimization. This can be important for library developers who strive to minimize object sizes.
Scott Meyers, "Effective C++", Third Edition.
Base classes are evil.
In my mind, good OO design is about encapsulation and interfaces. The convenience of base classes is not worth the price you pay.
Here's a really good article about this:
http://www.javaworld.com/javaworld/jw-08-2003/jw-0801-toolbox.html

Using non-abstract class as base

I need to finish another developer's work, but the problem is that he started in a different way...
So now I find myself in the situation of either using the existing code, where he chose to inherit from a non-abstract class (a very big class, without any virtual functions) that already implements a bunch of interfaces, or dismissing that code (which shouldn't be too much work) and writing another class that implements the interfaces I need.
What are the pros and cons that would help me to choose the better approach.
p.s. please note that I don't have too much experience
Many Thanks
Although it is very tempting to say write it from scratch again, don't do it! The existing code may be ugly, but it looks like it does work. Since the class is big, I assume there is a fair bit of history behind it as well. It might have solutions for some very obscure cases which you might not have imagined till now. What I suggest is, if possible, first talk to the person who developed that class, understand how it works, then derive from it (after making its destructor virtual, of course) and complete your work. Then, as and when time permits, slowly refactor parts of the class into smaller, more manageable classes. Also, don't forget to write a good set of unit tests before you start, so that you can validate the new behavior against the existing class's behavior. One more thing: there is nothing wrong in inheriting from a non-abstract base class as long as it makes sense and the base class destructor is virtual.
If the other developer has written a base-class with no virtual functions, then those functions do not need to be overridden, and it is correct to define them in a non-abstract base class.
If those functions define functionality that all the child-classes require then it would be a mistake to get rid of the base class, as you would then need to implement those functions individually in each of the child classes.
I've seen a lot of developers go 'interface-mad' in the last couple of years, but base classes still serve a function over interfaces - to provide a concrete implementation that is common to all child classes. It would be a mistake to get rid of the base class and have separate implementations of these functions in each of the child classes.
HOWEVER, if the child classes are inheriting functionality that they do not require, or require a separate implementation of, then the Base class is a mistake and interfaces would seem like the better option to divide the functionality between the child classes.
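As an illustration of that kind of base (all names invented), a concrete, non-virtual base that only exists to share an implementation can still be reasonably safe to inherit from if it protects its destructor:
#include <cstddef>
#include <string>
#include <vector>

class CsvRecord {                                   // common concrete behaviour, no virtual functions
public:
    void set_field(std::size_t i, const std::string& v) { fields_.at(i) = v; }
    std::string to_line() const { /* join fields_ with commas */ return {}; }
protected:
    explicit CsvRecord(std::size_t n) : fields_(n) {}
    ~CsvRecord() = default;                         // protected: forbids delete through CsvRecord*
private:
    std::vector<std::string> fields_;
};

class InvoiceRow : public CsvRecord {               // only adds what differs
public:
    InvoiceRow() : CsvRecord(4) {}
};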
Despite this, I would agree with Naveen that it's probably not worth the extra work this will give you. It may seem simple, but if this is a big class with a lot of inheritors then it could turn out to be a nightmare. Quite often in software engineering you have to deal with another developer's code that you might have implemented differently. If you re-implemented it every time, you would be a very unproductive developer. I say work with what you've got and get the project finished on time.
Is there anything at all you want to use from the base class or would you end up overriding everything?
Does it define some sort of type that you want to use for an "is-a" relationship?
(for example, base class is "animal" and you want to make "cat", but if it doesn't add any behavior to its interface, that doesn't seem likely)
Is the base class used in other interfaces you need to use? (like if someone is passing objects through a reference/pointer to the base class)
If not, I'd say there's no advantage in inheriting from that class over implementing the interface(s) yourself.
What are the pros and cons that would help me to choose the better approach.
It's legal to derive from a class with no virtual functions, but that doesn't make it a good idea. When you derive from a class with virtual functions, you often use that class through pointers (e.g., a class Derived that inherits from Base is often manipulated through Base*s). That doesn't work when you don't use virtual functions. Also, deleting the object through a pointer to the base class is undefined behavior when the destructor isn't virtual (in practice it often leaks the derived part).
However, it sounds more like these classes aren't being used through pointers-to-the-base. Instead the base class is simply used to get a lot of built in functionality, although the classes aren't related in the normal sense. Inversion of control (and has-a relationships) is a more common way to do that nowadays (split the functionality of the base class into a number of interfaces -- pure virtual base classes -- and then have the objects that currently derive from the base class instead have member variables of those interfaces).
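A rough sketch of that refactoring direction (the interface and class names are made up):
#include <memory>
#include <utility>

// Split the blob base class's responsibilities into small pure-virtual interfaces...
class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const char* msg) = 0;
};

class Serializer {
public:
    virtual ~Serializer() = default;
    virtual void save() = 0;
};

// ...and let the class that used to derive from the blob hold implementations instead.
class Order {
public:
    Order(std::shared_ptr<Logger> log, std::shared_ptr<Serializer> ser)
        : log_(std::move(log)), ser_(std::move(ser)) {}
    void commit() {
        log_->log("committing order");
        ser_->save();
    }
private:
    std::shared_ptr<Logger> log_;
    std::shared_ptr<Serializer> ser_;
};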
At the very least, you'll want to split the big base class into well-defined smaller classes and use those (like mixins), which sounds like your second option.
However, that doesn't mean rewrite all the other code that uses the blob base class all in one go. That's a big undertaking and you're likely to make small typos and similar mistakes. Instead, buy yourself copies of Working Effectively With Legacy Code and Large-Scale C++ Software Design, and do the work piecemeal.
From your question it is not too clear what the problem is. Looking at the title (Using non-abstract class as base), I can tell you that using an abstract class (not a purely virtual one - when you talk about interfaces in C++ I am assuming pure virtual abstract classes) as a base makes sense only if there is common functionality you can share between subclasses - meaning that a number of classes extend the same abstract class, inheriting the common implementation. If that's not the case (and you're pretty confident it's never going to happen), then it doesn't make sense to use an abstract class.
If you can extract some of the functionality of your big class in such a way that it leads to (even potential) code reuse, then it could make sense - otherwise I wouldn't see the point.