I'm trying to implement a library where Class1 provides about five public methods, Method1 to Method5. Class2 provides two methods, Method6 and Method7. And Class3 provides one method, Method8. Now, for the end user, I want to expose methods from combinations of these classes. E.g. if the end user instantiates a class called Class1Class2, they should have access to Method1 to Method7; if they instantiate a class called Class1Class3, they should have access to Method1 to Method5 and Method8.
There are 3 different approaches I could think of (please suggest any others as well):
Multiple inheritance: Keep each of Class1, Class2 and Class3 as it is. Then, create a new class Class1Class2 that publicly multiple inherits from Class1 and Class2. Similarly I can create a class Class1Class3 that publicly multiple inherits from Class1 and Class3.
Multi-level inheritance: I could derive Class2 from Class1, and call that Class1Class2. And Class3 from Class1, and call that Class1Class3. And if we need Class1Class2Class3, we inherit that class from Class2 and Class3, which both derive from Class1. Here, we would use virtual inheritance to resolve the diamond problem. I don't expect to use Class2Class3, so that shouldn't be a problem here.
Composition: Keep each of Class1, Class2 and Class3 as it is. Create Class1Class2 that implements each of the methods Method1 to Method7 and internally delegate them to the objects of Class1 and Class2 accordingly. Similarly, a Class1Class3 would compose objects of Class1 and Class3. With this approach we need to provide implementations for all the methods and delegate them to the composed objects.
While "Composition over inheritance" guideline is generally great for loose coupling of classes, etc., in the above case, where we have to do code reuse from separate concrete implementations, Approach 1 or 2 seem like better options.
You are only talking about code reuse here. I would think that's because you don't actually need or want polymorphism. If this is indeed the case, then consider private inheritance and using-declarations to expose the parent methods.
For example:
class Class1Class2 : Class1, Class2 {
public:
    using Class1::Method1;
    // ...
    using Class2::Method6;
    // ...
};
Private inheritance, while technically being called inheritance, is very different from public inheritance, which itself is conceptually different from subtyping.
In C++ (and a lot of other languages that support OOP), public inheritance usually provides both subtyping and code reuse. However, it is entirely possible to derive a class with incorrect behavior in places that expect the parent class. This potentially undermines subtyping. It is also entirely possible to derive a class that does not reuse any of the implementation of the parent class. This potentially undermines code reuse.
Conceptually, subtyping is expressed by interface inheritance, while implementation inheritance is only one way to reuse code. The "composition over inheritance" saying, as far as I understand it, is about using other tools to reuse code because implementation inheritance often leads to bad code. However, there isn't really another way to achieve true subtyping than inheritance, so it may still be useful there [1].
On the other hand, private inheritance is just an odd form of composition. It simply replaces the member with a private base class. An advantage of this is the ability to use using to easily expose the parts of that "member" you want to expose.
[1] I personally don't like either form of (public) inheritance, preferring static polymorphism and compile-time duck typing. However, I can happily work with interface inheritance, whereas I usually stay far away from implementation inheritance.
As you want easy combination, you might use a variadic template as a variant of your first proposal:
template <typename ... Bases>
struct Derived : Bases...
{
};
using Class1Class2 = Derived<Class1, Class2>;
using Class1Class2Class3 = Derived<Class1, Class2, Class3>;
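Usage would then look roughly like this (a sketch, assuming Class1 and Class2 are default-constructible and their method names don't collide):
void example() {
    Class1Class2 obj;
    obj.Method1(); // inherited from Class1
    obj.Method6(); // inherited from Class2
}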
To make things more interesting, you may employ CRTP. Here is an example:
template<typename Base>
class ClassA {
public:
    void MethodA1() { static_cast<Base*>(this)->MethodA1_Impl(); }
    void MethodA2() { static_cast<Base*>(this)->MethodA2_Impl(); }
};

template<typename Base>
class ClassB {
public:
    void MethodB1() { static_cast<Base*>(this)->MethodB1_Impl(); }
    void MethodB2() { static_cast<Base*>(this)->MethodB2_Impl(); }
};

template<typename Base>
class ClassC {
public:
    void MethodC1() { static_cast<Base*>(this)->MethodC1_Impl(); }
    void MethodC2() { static_cast<Base*>(this)->MethodC2_Impl(); }
};

class ClassABC : public ClassA<ClassABC>, public ClassB<ClassABC>, public ClassC<ClassABC> {
public:
    //void MethodA1_Impl();
    //void MethodA2_Impl();
    //void MethodB1_Impl();
    //void MethodB2_Impl();
    //void MethodC1_Impl();
    //void MethodC2_Impl();
};
You may uncomment and implement ANY subset of MethodXY_Impl(), and it will compile. The client code may call any of the MethodXY() methods; if there is no corresponding implementation, the compiler will produce an error.
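For instance, a hypothetical client that only needs MethodA1 and MethodB1 might look like this (an illustrative sketch, not part of the answer above):
class ClassAB : public ClassA<ClassAB>, public ClassB<ClassAB> {
public:
    void MethodA1_Impl() { /* ... */ }
    void MethodB1_Impl() { /* ... */ }
};

void client() {
    ClassAB obj;
    obj.MethodA1();    // dispatches to ClassAB::MethodA1_Impl
    // obj.MethodA2(); // would fail to compile: no MethodA2_Impl provided
}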
Related
I searched for this, but I feel I'm not finding the answer I'm after. So, simple version and hopefully someone can just say "here's how" and I'll be on my way :)
Essentially I want this:
class BaseObject
{
public:
    BaseObject();
    ~BaseObject();
    virtual bool FunctionX() = 0;
    virtual bool FunctionY() = 0;
};
class ObjectA : BaseObject
{
public:
    ObjectA();
    ~ObjectA();
    bool FunctionX();
    bool FunctionY();
    bool FunctionZ();
};
.. same for ObjectB as above ..
...
vector<BaseObject*> myList;
ObjectA a;
ObjectB b;
myList.push_back((BaseObject*)&a);
myList.push_back((BaseObject*)&b);
myList.back()->FunctionX();
I know the code above is wrong, I'm just trying to get the overall concept over.
What I need:
A base class that defines functions that MUST be present in classes that inherit from it.
The ability to store the classes that inherit from it all in the same vector (cast as the base class).
The vector to know it can call the base classes defined functions.
The classes to be able to have their own, additional functions that the vector/base class do not need to be aware of.
I just noticed, you're deriving privately. BaseObject is a private base class of ObjectA. When you omit the inheritance specifier, you get private inheritance by default. Change the ObjectA declaration to
class ObjectA : public BaseObject...
Otherwise, code outside of the ObjectA scope is not allowed to know that ObjectA is-a BaseObject.
Your code is almost right. It misses the virtual for BaseObject's destructor, however, which will invoke undefined behaviour (e.g. crashes) in any typical usage scenario, such as deleting a derived object through a BaseObject pointer. This is the correct declaration:
virtual ~BaseObject();
Another thing you should consider is making your public functions non-virtual and your virtual functions private, with the public functions delegating to the private ones (called Non-Virtual Interface Idiom by Herb Sutter).
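A minimal sketch of that idiom applied to BaseObject (only FunctionX shown, as an illustration rather than a drop-in replacement):
class BaseObject
{
public:
    virtual ~BaseObject() {}
    bool FunctionX() { return DoFunctionX(); } // public, non-virtual entry point
private:
    virtual bool DoFunctionX() = 0;            // derived classes override this
};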
A few more things:
A base class that defines functions that MUST be present in classes
that inherit from it.
You won't be able to achieve this, at least in the literal sense, by any normal means. A class can derive from your abstract class but remain itself abstract by not defining your pure virtual functions.
The ability to store the classes that inherit from it all in the same
vector
Mind the difference between "class" and "object". A vector doesn't store classes but objects. In C++, classes cannot be used as objects (which is different in Java, for example). To "store classes" implies something like type lists in advanced template metaprogramming, a technique not related at all to your problem.
(cast as the base class).
You do not need to cast from subclass to base class.
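Putting the corrections together, a minimal working version of the original example could look like this (a sketch with trivial bodies filled in for illustration, keeping the public virtual functions rather than the NVI refinement for brevity):
#include <vector>

class BaseObject
{
public:
    virtual ~BaseObject() {}          // virtual destructor
    virtual bool FunctionX() = 0;
    virtual bool FunctionY() = 0;
};

class ObjectA : public BaseObject     // public inheritance
{
public:
    bool FunctionX() { return true; }
    bool FunctionY() { return false; }
    bool FunctionZ() { return true; } // extra function the base class ignores
};

int main()
{
    std::vector<BaseObject*> myList;
    ObjectA a;
    myList.push_back(&a);             // implicit derived-to-base conversion, no cast
    myList.back()->FunctionX();       // calls ObjectA::FunctionX
}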
I've a question regarding a concept. First, I'm a mechanical engineer and not a programmer, thus I have some C++ knowledge but not much experience. I use the finite element method (FEM) to solve partial differential equations.
I have a base class Solver and two child classes, linSolver for linear FEM and nlinSolver for non-linear FEM. The members and methods that both children share are in the base class. The base class members are all protected. Thus using inheritance makes the child classes "easy to use", as if there were no inheritance or other boundaries. The base class itself, Solver, is incomplete, meaning only the children are of any use to me.
The concept actually works pretty well - but I think that having an unusable class is bad design. In addition, I read that protected inheritance is not preferred and should be avoided if possible. I think the last point doesn't really apply to my specific use, since I will never use the base class alone, and any attempt to do so will fail (since it is incomplete).
The questions are:
Is it common to use inheritance to reduce duplicated code even if the base class will be unusable?
What are alternatives or better solutions to such a problem?
Is protected inheritance really bad?
Thank you for your time.
Dnaiel
Having "unusable" base classes is actually very common. You can have the base class to define a common interface usable by the classes that inherits the base-class. And if you declare those interface-functions virtual you can use e.g. references or pointers to the base-class and the correct function in the inherited class object will be called.
Like this:
class Base
{
public:
    virtual ~Base() {}
    virtual void someFunction() = 0; // Declares an abstract function
};

class ChildA : public Base
{
public:
    void someFunction() { /* implementation here */ }
};

class ChildB : public Base
{
public:
    void someFunction() { /* other implementation here */ }
};
With the above classes, you can do
Base* ptr1 = new ChildA;
Base* ptr2 = new ChildB;
ptr1->someFunction(); // Calls `ChildA::someFunction`
ptr2->someFunction(); // Calls `ChildB::someFunction`
However this will not work:
Base baseObject; // Compilation error! Base class is "unusable" by itself
While the (working) example above is simple, think about what you could do when passing the pointers to a function. Instead of having two overloaded functions each taking the actual class, you can have a single function which takes a pointer to the base class, and the compiler and runtime-system will make sure that the correct (virtual) functions are called:
void aGlobalFunction(Base* ptr)
{
    // Will call either `ChildA::someFunction` or `ChildB::someFunction`
    // depending on which pointer is passed as argument
    ptr->someFunction();
}
...
aGlobalFunction(ptr1);
aGlobalFunction(ptr2);
Even though the base-class is "unusable" directly, it still provides some functionality that is part of the core of how C++ can be (and is) used.
Of course, the base class doesn't have to be all interface; it can contain other common (protected) helper or utility functions that can be used from all classes that inherit from the base class. Remember that inheritance is an "is-a" relationship between classes. If you have two different classes that both "is-a" something, then using inheritance is probably a very good solution.
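As a sketch of that last point, here is an illustrative extension of the Base class above (not required by the answer, just showing where shared helpers would live):
class Base
{
public:
    virtual ~Base() {}
    virtual void someFunction() = 0;
protected:
    // Shared helper available to ChildA and ChildB, invisible to outside callers.
    void commonHelper() { /* code reused by all children */ }
};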
You should check the concept of an abstract class.
It's designed to provide a base class that cannot be instantiated.
To do so, you declare at least one pure virtual method in the base class, like this:
virtual void f()=0;
Each child has to override the f function (or any pure virtual function from the base class) in order to be instantiable.
Don't think of the BaseClass as a class in its own right, but as an interface contract and some implementation help. Therefore, it should be abstract, if necessary by declaring the dtor pure virtual but providing an implementation anyway. Some OO purists may frown upon any non-private element, but purity is not a good target.
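A minimal sketch of the "pure virtual destructor with a definition" trick mentioned above, using the Solver name from the question (illustrative only):
class Solver
{
public:
    virtual ~Solver() = 0;   // pure virtual: makes Solver abstract
protected:
    // shared members and helpers for linSolver and nlinSolver go here
};

Solver::~Solver() {}         // a definition is still required, because derived
                             // destructors call it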
In C++, if I have a class Base which is a private base class of Derived but Base has no virtual functions, would it be cleaner to replace the inheritance with composition, as in class Encapsulate? I imagine the only benefit to inheritance in this case would be that the base class can be accessed directly in the derived class, as opposed to through memberVariable. Is one or the other practice considered better, or is it more of a personal style question?
class Base {
public:
    void privateWork();
    // No virtual member functions here.
};

class Derived : Base {
public:
    void doSomething() {
        privateWork();
    }
};

class Encapsulate {
    Base memberVariable;
public:
    void doSomething() {
        memberVariable.privateWork();
    }
};
Remember that inheritance models "Liskov substitution": Foo is a Bar if and only if you can pass a Foo variable to every function expecting a Bar. Private inheritance does not model this. It models composition (Foo is implemented in terms of Bar).
Now, you should pretty much always use the second version, since it is simpler and expresses the intent better: it is less confusing for people who don't know about private inheritance.
However, sometimes, private inheritance is handy:
class FooCollection : std::vector<Foo>
{
public:
    FooCollection(size_t n) : std::vector<Foo>(n) {}

    using std::vector<Foo>::begin;
    using std::vector<Foo>::end;
    using std::vector<Foo>::operator[];
};
This allows you to reuse some of the functionality of vector without having to manually forward the two overloads (const and non-const) of begin, end, and operator[].
In this case, you don't have polymorphism at all: this is not inheritance, this is composition in disguise; there is no way you can use a FooCollection as a vector. In particular, you don't need a virtual destructor.
If there are no virtual functions, then inheritance should not be used in OO. Note that this does not mean it must never be used; there are a few (limited) cases where you might need to (ab)use inheritance for purposes other than OO.
Why would I want to define a C++ interface that contains private methods?
Even in the case where the public methods are technically supposed to act like template methods that use the private methods once the interface is implemented, we are still dictating technical specifics right from the interface.
Isn't this a deviation from the original purpose of an interface, i.e. a public contract between the outside and the interior?
You could also define a friend class, which will make use of some private methods from our class, and so force implementation through the interface. This could be an argument.
What other arguments are there for defining private methods within an interface in C++?
The common OO view is that an interface establishes a single contract that defines how objects that conform to that interface are used and how they behave. The NVI idiom or pattern (I never know when one becomes the other) proposes a change in that mentality by dividing the interface into two separate contracts:
how the interface is to be used
what deriving classes must offer
This is in some sense particular to C++ (in fact to any language with multiple inheritance), where the interface can in fact contain code that adapts between the outer interface --how users see me-- and the inner interface --how I am implemented.
This can be useful in different cases, first when the behavior is common but can be parametrized in only specific ways, with a common algorithm skeleton. Then the algorithm can be implemented in the base class and the extension points in derived elements. In languages without multiple inheritance this has to be implemented by splitting into a class that implements the algorithm based in some parameters that comply with a different 'private' interface. I am using here 'private' in the sense that only your class will use that interface.
The second common usage is that by using the NVI idiom, it is simple to instrument the code by only modifying at the base level:
class Base {
public:
    void foo() {
        foo_impl();
    }
private:
    virtual void foo_impl() = 0;
};
The extra cost of having to write the dispatcher foo() { foo_impl(); } is rather small, and it allows you to later add a locking mechanism if you convert the code into a multithreaded application, add logging to each call, or add a timer to verify how long different implementations take in each function... Since the actual method that is implemented in derived classes is private at this level, you are guaranteed that all polymorphic calls can be instrumented at a single point: the base (this does not block extending classes from making foo_impl public, though).
void Base::foo() {
    scoped_log log( "calling foo" ); // we can add traces
    lock l(mutex);                   // thread safety
    foo_impl();
}
If the virtual methods were public, then you could not intercept all calls to the methods and would have to add that logging and thread safety to all the derived classes that implement the interface.
You can declare a private virtual method whose purpose is to be overridden in derived classes. Example:
class CharacterDrawer {
public:
    virtual ~CharacterDrawer() = 0;

    // draws the character after calling getPosition(), getAnimation(), etc.
    void draw(GraphicsContext&);

    // other methods
    void setLightPosition(const Vector&);

    enum Animation {
        ...
    };
private:
    virtual Vector getPosition() = 0;
    virtual Quaternion getRotation() = 0;
    virtual Animation getAnimation() = 0;
    virtual float getAnimationPercent() = 0;
};
This object can provide drawing utility for a character, but it has to be derived from by a class that provides movement, animation handling, etc.
The advantage of doing it like this instead of providing "setPosition", "setAnimation", etc. is that you don't have to "push" the values at each frame; instead, the drawer "pulls" them.
I think this can be considered as an interface since these methods have nothing to do with actual implementation of all the drawing-related stuff.
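A hypothetical derived class could look like this (a sketch, assuming the Vector, Quaternion and GraphicsContext types above exist; note that the pure virtual destructor above would also need a definition somewhere):
class PlayerCharacter : public CharacterDrawer {
public:
    void update(float dt); // game logic updates the state below each frame
private:
    // The drawing code "pulls" the current state through these overrides:
    Vector getPosition() { return position; }
    Quaternion getRotation() { return rotation; }
    Animation getAnimation() { return animation; }
    float getAnimationPercent() { return animationPercent; }

    Vector position;
    Quaternion rotation;
    Animation animation;
    float animationPercent;
};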
Why would I want to define a C++ interface that contains private methods?
The question is a bit ambiguous/contradictory: if you define (purely) an interface, that means you define the public access of anything that connects to it. In that sense, you do not define an interface that contains private methods.
I think your question comes from confusing an abstract base class with an interface (please correct me if I'm wrong).
An abstract base class can be a partial (or even complete) functionality implementation, that has at least an abstract member. In this case, it makes as much sense to have private members as it makes for any other class.
In practice it is rarely needed to have pure virtual base classes with no implementation at all (i.e. base classes that only define a list of pure virtual functions and nothing else). One case where that is required is COM/DCOM/XPCOM programming (and there are others). In most cases though it makes sense to add some private implementation to your abstract base class.
In a template method implementation, it can be used to add a specialization constraint: you can't call the virtual method of the base class from the derived class (otherwise, the method would be declared as protected in the base class):
class Base
{
private:
    virtual void V() { /* some logic here, not accessible directly from Derived */ }
};

class Derived : public Base
{
private:
    virtual void V()
    {
        Base::V(); // Not allowed: Base::V is not visible from Derived
    }
};
This is our ideal inheritance hierarchy:
class Foobar;
class FoobarClient : Foobar;
class FoobarServer : Foobar;
class WindowsFoobar : Foobar;
class UnixFoobar : Foobar;
class WindowsFoobarClient : WindowsFoobar, FoobarClient;
class WindowsFoobarServer : WindowsFoobar, FoobarServer;
class UnixFoobarClient : UnixFoobar, FoobarClient;
class UnixFoobarServer : UnixFoobar, FoobarServer;
Unfortunately, this hierarchy causes problems. This is because our inheritance hierarchy would try to inherit from Foobar twice, and as such, the compiler would complain of ambiguous references on any members of Foobar.
Allow me to explain why I want such a complex model. This is because we want to have the same variable accessible from WindowsFoobar, UnixFoobar, FoobarClient, and FoobarServer. This wouldn't be a problem, only I'd like to use multiple inheritance with any combination of the above, so that I can use a server/client function on any platform, and also use a platform function on either client or server.
I can't help but feel this is a somewhat common issue with multiple inheritance... Am I approaching this problem from completely the wrong angle?
Update 1:
Also, consider that we could use #ifdef to get around this; however, this will tend to yield very ugly code like this:
CFoobar::CFoobar()
#if SYSAPI_WIN32
: m_someData(1234)
#endif
{
}
... yuck!
Update 2:
For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing list thread. Things start to get interesting around the 3rd post. Also there is a related code commit with which you can see the real-life code in question here.
It would work, although you'd get two copies of the base Foobar class. To get a single copy, you'd need to use virtual inheritance. Read on multiple inheritance here.
class Foobar;
class FoobarClient : virtual public Foobar;
class FoobarServer : virtual public Foobar;
class WindowsFoobar : virtual public Foobar;
class UnixFoobar : virtual public Foobar;
However, there are many problems associated with multiple inheritance. If you really want to have the model presented, why not make FoobarClient and FoobarServer take a reference to Foobar at construction time, and then have a Foobar& FoobarClient/Server::getFoobar() accessor?
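A rough sketch of that composition-based suggestion (names and signatures are assumptions, not from the original code; it also assumes WindowsFoobar publicly derives from Foobar):
class FoobarClient {
public:
    explicit FoobarClient(Foobar& foobar) : m_foobar(foobar) {}
    Foobar& getFoobar() { return m_foobar; }
private:
    Foobar& m_foobar; // the shared state lives in one Foobar object
};

// A platform class then owns the Foobar and hands it to the client part:
class WindowsFoobarClient {
public:
    WindowsFoobarClient() : m_client(m_foobar) {}
private:
    WindowsFoobar m_foobar;  // constructed first (declared first)
    FoobarClient m_client;   // holds a reference to m_foobar
};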
Composition is often a way out of multiple inheritance. Take an example now:
class WindowsFoobarClient : public WindowsFoobar
{
    FoobarClient client;
public:
    WindowsFoobarClient() : client( this ) {}
    FoobarClient& getClient() { return client; }
};
However, care must be taken when using this in the constructor, since the WindowsFoobarClient object is not fully constructed yet at that point.
What you are directly after here is the virtual inheritance feature of C++. What you are in for is a maintenance nightmare. This might not be a huge surprise, since well-known authors like H. Sutter have been arguing against such use of inheritance for a while already. But this comes from direct experience with code like this. Avoid deep inheritance chains. Be very afraid of the protected keyword - its use is very limited. This kind of design quickly gets out of hand - tracking down patterns of access to a protected variable somewhere up the inheritance chain from lower-level classes becomes hard, responsibilities of the code parts become vague, etc., and people who look at your code a year from now will hate you :)
You're in C++, you should get friendly with templates. Using the template-argument-is-a-base-class pattern, you'll not need any multiple inheritance or redundant implementations. It will look like this:
class Foobar {};
template <typename Base> class UnixFoobarAspect : public Base {};
template <typename Base> class WindowsFoobarAspect : public Base {};
template <typename Base> class FoobarClientAspect : public Base {};
template <typename Base> class FoobarServerAspect : public Base {};
typedef UnixFoobarAspect<FoobarClientAspect<Foobar>/*this whitespace not needed in C++0x*/> UnixFoobarClient;
typedef WindowsFoobarAspect<FoobarClientAspect<Foobar> > WindowsFoobarClient;
typedef UnixFoobarAspect<FoobarServerAspect<Foobar> > UnixFoobarServer;
typedef WindowsFoobarAspect<FoobarServerAspect<Foobar> > WindowsFoobarServer;
You might also consider using the curiously recurring template pattern instead of declaring abstract functions to avoid virtual function calls when the base class needs to call a function implemented in one of the specialized variants.
Use virtual inheritance: in the declarations of FoobarClient, FoobarServer, WindowsFoobar and UnixFoobar, put the keyword virtual before the Foobar base class name.
This will ensure there is always a single instance of Foobar no matter how many times it appears in your base class hierarchy.
Have a look at this search. Diamond inheritance is somewhat of a contentious issue, and the proper solution depends on the individual situation.
I would like to comment on the Unix/Windows side of things. Generally one would #ifdef out the things that are not appropriate for the particular platform. So you would end up with just Foobar, compiled for either Windows or Unix using preprocessor directives, not UnixFoobar and WindowsFoobar. See how far you can get using that paradigm before exploring virtual inheritance.
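A rough sketch of that paradigm, reusing the SYSAPI_WIN32 macro from the question (the connect() method is purely illustrative):
class Foobar
{
public:
    void connect();
    // client, server and shared members all live in this one class
};

// foobar.cpp
void Foobar::connect()
{
#if SYSAPI_WIN32
    // Windows-specific implementation
#else
    // Unix-specific implementation
#endif
}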
Try this example of composition and inheritance:
class Client_Base;
class Server_Base;

class Foobar
{
protected:
    Client_Base * p_client;
    Server_Base * p_server;
};

class Windows_Client : public Client_Base;
class Windows_Server : public Server_Base;

class Win32 : Foobar
{
    Win32()
    {
        p_client = new Windows_Client;
        p_server = new Windows_Server;
    }
};

class Unix_Client : public Client_Base;
class Unix_Server : public Server_Base;

class Unix : Foobar
{
    Unix()
    {
        p_client = new Unix_Client;
        p_server = new Unix_Server;
    }
};
Many experts have said that issues can be resolved with another level of indirection.
There is nothing "illegal" about having the same base class twice. The final child class will just (literally) have multiple copies of the base class as part of it (including each variable in the base class, etc). It may result in some ambiguous calls to that base classes' functions, though, which you might have to resolve manually. This doesn't sound like what you want.
Consider composition instead of inheritance.
Also, virtual inheritance is a way to fold together the same base class which appears twice. If it really is just about data sharing, though, composition might make more sense.
You can access the variable with the qualified class name, but I forget the exact syntax.
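For reference, that qualified access would look roughly like this (a sketch, assuming non-virtual inheritance and an accessible member m_someData in Foobar, as in the question's example):
class WindowsFoobarClient : public WindowsFoobar, public FoobarClient
{
    void touch()
    {
        WindowsFoobar::m_someData = 1; // the Foobar copy inherited via WindowsFoobar
        FoobarClient::m_someData = 2;  // the Foobar copy inherited via FoobarClient
    }
};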
However, this is one of the bad cases of using multiple inheritance that can cause you many difficulties. Chances are that you don't want to have things this way.
It's much more likely you want to have foobar privately inherited, have each subclass own a foobar, have foobar be a pure virtual class, or have the derived class own the things it currently defines or even define foobar on its own.