I have never used multiple inheritance, but while reading about it recently I started to think about how I could use it practically in my code. When I use polymorphism I usually create new derived instances declared as base class pointers, such as
BaseClass* pObject = new DerivedClass();
so that I get the correct polymorphic behaviour when calling virtual functions on the derived class. In this way I can have collections of different polymorphic types that manage their own behaviour through their virtual functions.
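For concreteness, a minimal sketch of that usual pattern (the names BaseClass, DerivedClass and update() are placeholders, not from any real code base):

#include <cstddef>
#include <vector>

class BaseClass {
public:
    virtual void update() = 0;   // each derived type supplies its own behaviour
    virtual ~BaseClass() {}
};

class DerivedClass : public BaseClass {
public:
    void update() { /* derived-specific behaviour */ }
};

int main() {
    std::vector<BaseClass*> objects;
    objects.push_back(new DerivedClass());
    for (std::size_t i = 0; i < objects.size(); ++i)
        objects[i]->update();    // dispatches to the derived override
    for (std::size_t i = 0; i < objects.size(); ++i)
        delete objects[i];
}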
When considering multiple inheritance, I was thinking about the same approach, but how would I do this if I had the following hierarchy:
class A {
public:
    virtual void foo() = 0;
};

class B : public A {
public:
    virtual void foo() {
        // implementation
    }
};

class C {
public:
    virtual void foo2() = 0;
};

class D : public C {
public:
    virtual void foo2() {
        // implementation
    }
};

class E : public B, public D {
public:
    virtual void foo() {
        // implementation
    }
    virtual void foo2() {
        // implementation
    }
};
With this hierarchy, I could create a new instance of class E as
A* myObject = new E();
or
C* myObject = new E();
or
E* myObject = new E();
but if I declare it as an A* then I lose the polymorphism of the C and D side of the hierarchy. Similarly, if I declare it as a C* then I lose the A and B polymorphism. If I declare it as an E* then I cannot get the polymorphic behaviour in the way I usually do, as the objects are not accessed through base class pointers.
So my question is: what is the solution to this? Does C++ provide a mechanism that can get around these problems, or must the pointer types be cast back and forth between the base classes? Surely that is quite cumbersome, as I could not directly do the following
A* myA = new E();
C* myC = dynamic_cast<C*>(myA);
because the cast would return a NULL pointer.
With multiple inheritance, you have a single object that you can view any of multiple different ways. Consider, for example:
class door {
public:
    virtual void open() = 0;
    virtual void close() = 0;
};

class wood {
public:
    virtual void burn() = 0;
    virtual void warp() = 0;
};

class wooden_door : public wood, public door {
public:
    void open() { /* ... */ }
    void close() { /* ... */ }
    void burn() { /* ... */ }
    void warp() { /* ... */ }
};
Now, if we create a wooden_door object, we can pass it to a function that expects to work with (a reference or pointer to) a door object, or a function that expects to work with (again, a pointer or reference to) a wood object.
It's certainly true that multiple inheritance will not suddenly give functions that work with doors any new capability to work with wood (or vice versa) -- but we don't really expect that. What we expect is to be able to treat our wooden door as either a door that can open and close, or as a piece of wood that can burn or warp -- and that's exactly what we get.
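For instance, a minimal sketch of both views in use (the two free functions are hypothetical):

// Each function cares about only one view of the object.
void walk_through(door& d) { d.open(); d.close(); }
void set_alight(wood& w)   { w.burn(); }

int main() {
    wooden_door wd;
    walk_through(wd); // treated as a door
    set_alight(wd);   // treated as a piece of wood
}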
In this case, classes A and C are interfaces, and E implements two interfaces. (Typically, you wouldn't have the intermediate classes B and D in such a case.) There are several ways of dealing with this.
The most frequent is probably to define a new interface, which is a sum
of A and C:
class AandC : public A, public C {};
and have E derive from this. You'd then normally manage E through an AandC*, passing it indifferently to functions taking an A* or a C*. Functions that need both interfaces in the same object will deal with AandC*.
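A quick sketch of how that reads at the call sites (assuming E is redefined to derive from AandC, as suggested; the free functions are placeholders):

void useA(A* a)        { a->foo(); }
void useC(C* c)        { c->foo2(); }
void useBoth(AandC* x) { x->foo(); x->foo2(); }

int main() {
    AandC* e = new E();
    useA(e);    // implicit conversion to A*
    useC(e);    // implicit conversion to C*
    useBoth(e); // both interfaces on the same object
    delete e;   // safe only if the bases have virtual destructors
}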
If the interfaces A and C are somehow related, say C offers additional facilities which some A (but not all) might want to support, then it might make sense for A to have a getC() function, which returns the C* (or a null pointer, if the object doesn't support the C interface).
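Something along these lines (a sketch, with getC() defaulting to "not supported"):

class C {
public:
    virtual void foo2() = 0;
    virtual ~C() {}
};

class A {
public:
    virtual void foo() = 0;
    // The C view of this object, or a null pointer if this A doesn't support C.
    virtual C* getC() { return 0; }
    virtual ~A() {}
};

class E : public A, public C {
public:
    virtual void foo()  { /* ... */ }
    virtual void foo2() { /* ... */ }
    virtual C* getC()   { return this; }
};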
Finally, if you have mixins and multiple interfaces, the cleanest
solution is to maintain two independent hierarchies, one for the
interfaces, and another with the implementation parts:
// Interface...
class AandC : public virtual A, public virtual C {};
class B : public virtual A
{
// implement A...
};
class D : public virtual C
{
// implement C...
};
class E : public AandC, private B, private D
{
// may not need any additional implementation!
};
(I'm tempted to say that from a design point of view, inheritance of
interface should always be virtual, to allow this sort of thing in the
future, even if it isn't needed now. In practice, however, it seems
fairly rare to not be able to predict this sort of use in advance.)
If you want more information about this sort of thing, you might want to read Barton and Nackman. Their book is fairly dated now (it describes pre-C++98 C++), but most of the information is still valid.
This should work:

A* myA = new E();
E* myE = dynamic_cast<E*>(myA);
myE->foo2();
C can't cast to A because it isn't an A; it can only cast down to D or E.
From an A* you can obtain an E* (as above), and through that you can always explicitly call things like C::foo2(), but yes, there is no way for an A* to implicitly reach functions that exist only on the C side of the hierarchy.
In weird cases like this, templates are often a good solution because they can allow classes to act as if they have common inheritance even if they don't. For instance, you might write a template that works with anything that can have foo2() invoked on it.
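For example, a sketch of such a template (using the classes from the question):

// Calls foo2() on anything that provides it -- no common base class required.
template <typename T>
void call_foo2(T& obj) {
    obj.foo2();
}

int main() {
    E e;
    D d;
    call_foo2(e); // E has foo2()
    call_foo2(d); // so does D
}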
I have an array of Base* objects. This holds a bunch of derived objects, some of which may implement an Interface.
#include <iostream>
using namespace std;

struct Base {
    virtual void doNotCallThis() { cout << "nooo" << endl; }
};

struct Interface {
    virtual void doThis() = 0;
};

// Example derived class
struct Derived : Base, virtual Interface {
    virtual void doThis() { cout << "yes" << endl; }
};

int main() {
    Base* b[1];
    b[0] = new Derived(); // Here would be a bunch of different derived classes
    ((Interface*)b[0])->doThis(); // Elsewhere, doThis() would be called for select array elements
    return 0;
}
Output:
nooo
I don't know the exact type of b[i] at run time, so I can't cast to Derived (it could be Derived2, Derived3, etc.). I also can't use dynamic_cast, if that's a solution. All I know is that, by the time I call doThis(), b[i] is a type that inherits from Interface. The way I attempted the call above causes the wrong function to be called, e.g. Base::doNotCallThis().
How can I call it properly?
As other people have pointed out, you would probably do best to find a way to refactor your design so that casting isn't necessary.
But putting that aside, I can explain what's going wrong and how to correctly cast.
The problem with ((Interface*)b[0]) is that, since Base and Interface are unrelated, the compiler has to do a blind reinterpreting cast. Practically speaking, that means the resulting pointer doesn't actually line up with the Interface subobject. If you were to try static_cast<Interface*>(b[0]) you would find it doesn't compile - and that's a big hint that it's the wrong kind of cast to be making.
On the other hand, the compiler does know the relationship from Base to Derived and also from Derived to Interface. So as long as you know for sure that the object not only implements Interface but also is a Derived then you can do:
static_cast<Interface*>(static_cast<Derived*>(b[0]))->doThis();
However if your design has multiple different derived types which independently implement Interface then you might not be able to do that unless again you absolutely know what the derived type is at any time you go to make the call. - This is why refactoring it into a better class hierarchy is more desirable, since it's much less fragile and cumbersome to work with.
(As a side note, this issue shows why it's a good idea never to use raw/reinterpreting casts when moving up and down a class hierarchy. At least use static_cast, since then the compiler can better help you do it correctly.)
Writing an answer at the risk of being downvoted:
If we start with:

struct Base
{
    virtual void SomeFunc();
};

struct Interface
{
    virtual void doThis();
};
then to create a bunch of classes derived from Base that also implement the interface, I'd do something like this:
struct BaseInterface : public Base, public Interface
{
    // Nothing here - this is just combining Base and Interface
};

struct Base1 : public BaseInterface
{
    // ... add stuff that Base1 has that isn't in Base
};

struct Derived : public Base1
{
    // ... some more stuff that isn't in Base1
};
And then we use it in main like this:

int main() {
    BaseInterface* b[1];
    b[0] = new Derived(); // Here would be a bunch of different derived classes
    b[0]->doThis(); // Elsewhere, doThis() would be called for select array elements
    return 0;
}
If you have a feature rich class, possibly one you do not own/control, it is often the case where you want to add some functionality so deriving makes sense.
Occasionally you want to subtract as well, that is, to disallow some part of the base interface. The common idiom I have seen is to derive, make some member functions private, and then not implement them:
class Base
{
public:
virtual void foo() {}
void goo() { this->foo(); }
};
class Derived : public Base
{
private:
void foo();
};
someplace else:
Base* b = new Derived;
and yet another place:
b->foo(); // Any way to prevent this at compile time?
b->goo(); // or this?
It seems that if the calling code doesn't know at compile time that the object is a Derived, the best you can do is leave the function unimplemented and have it fail at link time or run time.
The issue arises when you have a library that you can't change, that takes a pointer to base, and you can implement some of the methods, but not all. So part of the library is useful, but you run the risk of a core dump if you don't know at compile time which functions will call what.
To make it more difficult, others may inherit from your class and want to use the library, and they may add some of the functions you didn't.
Is there another way? In C++11? In C++14?
Let's analyze this, focusing on two major points:
class Base
{
public:
virtual void foo() {} // This 1)
// ...
class Derived : public Base // and this 2)
In 1) you tell the world that every object of Base offers the method foo() publicly. This implies that when I have a Base* b I can call b->foo() - and b->goo().
In 2) you tell the world that your class Derived publicly behaves like a Base. Thus the following is possible:
void call(Base *b) { b->foo(); }
int main() {
Derived *b = new Derived();
call(b);
delete b;
}
Hopefully you can see that there is no way call(Base*) can know whether b is a Derived, and thus it can't possibly decide at compile time whether calling foo should be disallowed.
There are two ways to handle this:
You could change the visibility of foo(). This is probably not what you want, because other classes can derive from Base and someone may want to call foo after all. Keep in mind that virtual methods can be private, so you could declare Base as
class Base
{
virtual void foo() {}
public:
void goo() { this->foo(); }
};
You can change Derived so that it inherits either protected or private from Base. This implies that nobody (or only inheriting classes, respectively) can "see" that Derived is a Base, and a call to foo()/goo() is not allowed:
class Derived : private Base
{
private:
void foo() override;
// Friends of this class can see the Base aspect
// .... OR
// public: // this way
// void foo(); // would allow access to foo()
};
// Derived d; d.goo() // <-- illegal
// d.foo() // <-- illegal because foo() is private in Derived
You should generally go with the latter because it doesn't involve changing the interface of the Base class - the "real" utility.
TL;DR: Deriving a class is a contract to provide at least that interface. Subtraction is not possible.
This seems to be what you want to do:
struct Library {
int balance();
virtual int giveth(); // overrideable
int taketh(); // part of the library
};
/* compiled into the library's object code: */
int Library::balance() { return giveth() - taketh(); }
/* Back in header files */
// PSEUDO CODE
struct IHaveABadFeelingAboutThis : public Library {
int giveth() override; // my implementation of this
int taketh() = delete; // NO TAKE!
};
The deleted declaration stops you calling taketh() directly on an IHaveABadFeelingAboutThis. Note, though, that it only hides the name on the derived type: once the object is viewed through the base class, the library's taketh() is reachable again.
int main() {
    IHaveABadFeelingAboutThis x;
    // x.taketh();   // compile error: NO TAKE CANDLE!
    Library* lib = &x;
    lib->taketh();   // compiles after all: calls Library::taketh()
    // and balance() still uses the library's taketh() internally:
    lib->balance();
}
If you want to present a different interface than the underlying library, you need a facade to present your interface instead of that of the library.
class Facade {
struct LibraryImpl : public Library {
int giveth() override;
};
LibraryImpl m_impl;
public:
int balance() { return m_impl.balance(); }
virtual int giveth() { return m_impl.giveth(); }
// don't declare taketh
};
int main() {
Facade f;
int g = f.giveth();
    int t = f.taketh(); // compile error: Facade has no taketh()
}
Although I don't think your overall situation is good design, and I share many of the sentiments in the comments, I can also appreciate that a lot of code you don't control is involved. I don't believe there is any compile-time solution to your problem that has well-defined behavior, but what is far preferable to making methods private and not implementing them is to implement the entire interface and simply make any methods you can't cope with throw an exception. This way at least the behavior is defined, and you can even use try/catch if you think you can recover from a library function needing an interface you can't provide. Making the best of a bad situation, I think.
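A sketch of that idea on a small stand-alone interface (the names are hypothetical; the point is that the throwing override runs even through a base pointer):

#include <stdexcept>

class Base {
public:
    virtual void foo() {}
    virtual ~Base() {}
};

class Derived : public Base {
public:
    // Defined behavior instead of a missing symbol: callers can catch
    // this if they are able to recover from the unsupported call.
    void foo() { throw std::logic_error("Derived does not support foo()"); }
};

int main() {
    Base* b = new Derived;
    try {
        b->foo();
    } catch (const std::logic_error&) {
        // recover, log, fall back, ...
    }
    delete b;
}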
If you have class A : public B, then you should follow the Liskov substitution principle (https://en.wikipedia.org/wiki/Liskov_substitution_principle): a pointer-to-A can be used as a pointer-to-B in all circumstances. Any requirements that B has, A should satisfy.
This is tricky to pull off, and is one of the reasons why many consider OO-style inheritance far less useful than it looks.
Your base exposes a virtual void foo(). The usual contract means that such a foo can be called, and if its preconditions are met, it will return.
If you derive from base, you cannot strengthen the preconditions, nor relax the postconditions.
On the other hand, if base::foo() was documented (and consumers of base supported) the possibility of it throwing an error (say, method_does_not_exist), then you could derive and have your implementation throw that error. Note that even if the contract says it could do this, in practice, if this isn't tested, consumers may not work.
Violating the Liskov substitution principle is a great way to have lots of bugs and unmaintainable code. Only do it if you really, really need to.
Say we have two classes, A and B. Both are subclasses of C, the superclass. Now let's assume that C defines a method areYouA(), that is, a method that asks the object whether it is an instance of class A.

So a possible code, in C++, would be:
C object1 = A();
C object2 = B();
if (object1.areYouA() == true)  { /* crazy code here for the case of an A object */ }
if (object2.areYouA() == false) { /* crazy code here for the case of a B object */ }
From an OO point of view, is it correct to add a method to the B and C classes that asks their respective objects whether they are of class A? If it's not correct, what could be another approach? My objective is this: I have two classes, A and B, that I need; at some point I have portions of binary data that could belong to class A or class B. Also, an object would initially be of class A, and as it gets full, its structure and some of its behaviour need to change, so it becomes an object of class B. So my approach was to think of an abstract class as parent, initially declaring a C object but storing an A object, and later storing a B object as it changes.

Is there another approach to this, like a pattern?
ADDITIONAL INFO:
I think I wasn't very clear about this. Let's say I want to store information from an A object in file1, and info from a B object in file2. If I use their mother class as an interface, then how can I tell whether it's an A object or a B object, so as to store it in the respective file? Should I add an attribute to each object holding the filename it belongs to? That would mean that if I need to change the name of the file, every object needs to change too.
The correct way is to use virtual functions to implement polymorphic behaviour:
struct C
{
virtual void do_crazy_stuff() { } // (see text below)
virtual ~C() { } // always have a virtual d'tor for polymorphic bases
};
struct A : C
{
virtual void do_crazy_stuff() { /* your crazy code for A */ }
};
struct B : C
{
virtual void do_crazy_stuff() { /* your crazy code for B */ }
};
Now you can use the same interface everywhere:
void process_data(C & x) // <-- Note: take argument by reference!!
{
x.do_crazy_stuff();
}
This works on any object of a type derived from C:
int main()
{
A x;
B y;
process_data(x); // calls x.A::do_crazy_stuff
process_data(y); // calls y.B::do_crazy_stuff
}
If you like, you can even declare the base function C::do_crazy_stuff as pure virtual; this makes the base class abstract (so it cannot be instantiated).
First off,
C object1 = A();
C object2 = B();
slices the objects. They are no longer of type A or B, they are C.
You can use pointers for this:
C* object1 = new A();
C* object2 = new B();
Although they are C pointers, they point to instances of A and B respectively.
Second, your approach is wrong. It seems like you're not taking advantage of polymorphism. If you have functionality that is shared between the classes, it should be implemented in C. If it changes, you should have virtual methods.
Knowledge about the type of an object is usually at least a code smell; you can usually alter the behavior simply by using polymorphism. If it really is necessary, though, the correct way is to use dynamic_cast:
C* object1 = new A();
A* pA = dynamic_cast<A*>(object1);
pA will be NULL if object1 doesn't point to an object of type A.
This is a design issue; in my opinion you have to analyze your problem better. Why do you need to know the type of your object? Why don't you implement the interface method in your A and B classes, and have it do whatever you need with the binary data? Take a look at the visitor design pattern to get an idea of how to implement it in this way.
Suppose I have 3 classes as follows (as this is an example, it will not compile!):
class Base
{
public:
    Base(){}
    virtual ~Base(){}
    virtual void DoSomething() = 0;
    virtual void DoSomethingElse() = 0;
};

class Derived1 : public Base
{
public:
    Derived1(){}
    virtual ~Derived1(){}
    virtual void DoSomething(){ ... }
    virtual void DoSomethingElse(){ ... }
    virtual void SpecialD1DoSomething(){ ... }
};

class Derived2 : public Base
{
public:
    Derived2(){}
    virtual ~Derived2(){}
    virtual void DoSomething(){ ... }
    virtual void DoSomethingElse(){ ... }
    virtual void SpecialD2DoSomething(){ ... }
};
I want to create an instance of Derived1 or Derived2 depending on some setting that is not available until run-time.
As I cannot determine the derived type until run-time, then do you think the following is bad practice?...
class X
{
public:
....
void GetConfigurationValue()
{
....
// Get configuration setting, I need a "Derived1"
b = new Derived1();
// Now I want to call the special DoSomething for Derived1
(dynamic_cast<Derived1*>(b))->SpecialD1DoSomething();
}
private:
Base* b;
};
I have generally read that usage of dynamic_cast is bad, but as I said, I don't know which type to create until run-time. Please help!
Why not delay the moment at which you "throw away" some of the type information by assigning a pointer-to-derived to a pointer-to-base:
void GetConfigurationValue()
{
// ...
// Get configuration setting, I need a "Derived1"
Derived1* d1 = new Derived1();
b = d1;
// Now I want to call the special DoSomething for Derived1
d1->SpecialD1DoSomething();
}
The point of virtual functions is that once you have the right kind of object, you can call the right function without knowing which derived class this object is -- you just call the virtual function, and it does the right thing.
You only need a dynamic_cast when you have a derived class that defines something different that's not present in the base class, and you need/want to take the extra something into account.
For example:
struct Base {
virtual void do_something() {}
};
struct Derived : Base {
virtual void do_something() {} // override dosomething
virtual void do_something_else() {} // add a new function
};
Now, if you just want to call do_something(), a dynamic_cast is completely unnecessary. For example, you can have a collection of Base *, and just invoke do_something() on every one, without paying any attention to whether the object is really a Base or a Derived.
When/if you have a Base *, and you want to invoke do_something_else(), then you can use a dynamic_cast to figure out whether the object itself is really a Derived so you can invoke that.
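A sketch of both situations together (the container and loop are hypothetical):

#include <cstddef>
#include <vector>

void process(std::vector<Base*>& objects) {
    for (std::size_t i = 0; i < objects.size(); ++i) {
        objects[i]->do_something(); // virtual dispatch, no cast needed

        // Only when the extra capability matters do we probe for it:
        if (Derived* d = dynamic_cast<Derived*>(objects[i]))
            d->do_something_else();
    }
}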
Using dynamic_cast is not bad practice per se. It's bad practice to use it inappropriately, i.e. where it's not really needed.
It's also a bad practice to use it this way:
(dynamic_cast<Derived1*>(b))->SpecialD1DoSomething();
Reason: dynamic_cast<Derived1*>(b) may return NULL, and then the call dereferences a null pointer.
When using dynamic_cast, you have to be extra careful, because it's not guaranteed that b is actually of type Derived1 and not Derived2:
void GenericFunction(Base* p)
{
    (dynamic_cast<Derived1*>(p))->SpecialD1DoSomething();
}
void InitiallyImplementedFunction()
{
Derived1 d1;
GenericFunction(&d1); // OK... But not for long.
// Especially, if implementation of GenericFunction is in another library
// with not source code available to even see its implementation
// -- just headers
}
void SomeOtherFunctionProbablyInAnotherUnitOfCompilation()
{
Derived2 d2;
GenericFunction(&d2); // oops!
}
You have to check whether the dynamic_cast actually succeeds. There are two ways of doing it: checking before the cast or after it. Before the cast, you can check whether the pointer you're trying to cast points to the type you expect, via RTTI:
if (typeid(*b) == typeid(Derived1))
{
// in this case it's safe to call the function right
// away without additional checks
dynamic_cast<Derived1*>(b)->SpecialD1DoSomething();
}
else
{
// do something else, like try to cast to Derived2 and then call
// Derived2::SpecialD2DoSomething() in a similar fashion
}
Checking it after the fact is actually a bit simpler:
Derived1* d1 = dynamic_cast<Derived1*>(b);
if (d1 != NULL)
{
d1->SpecialD1DoSomething();
}
I'd also say it's a bad practice to try to save typing while programming in C++. There are many features in C++ that seem perfectly fine to type in a shorter form (i.e. they make you feel 'that NULL will never happen here'), but turn out to be a pain in the ass to debug afterwards. ;)
Some other things you might like to consider to avoid the use of dynamic_cast, from Effective C++ (Third Edition), Item 35, "Alternatives to virtual functions":

1. 'Template Method pattern' via the Non-Virtual Interface (NVI) idiom: make the virtual functions private/protected with a public method 'wrapper' (see the sketch below). This allows you to enforce some other workflow of things to do before and after the virtual method.
2. 'Strategy pattern' via function pointers: pass in the extra method as a function pointer.
3. 'Strategy pattern' via tr1::function: similar to 2, but you could provide whole classes with various options.
4. 'Strategy pattern' classic: separate Strategy from the main class - push the virtual functions into another hierarchy.
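A minimal sketch of the NVI idiom from option 1 (the class names are made up):

class Widget {
public:
    // Public non-virtual wrapper enforces the before/after workflow.
    void draw() {
        // ... acquire lock, validate state, log, etc.
        doDraw();
        // ... release lock, post-conditions, etc.
    }
    virtual ~Widget() {}
private:
    virtual void doDraw() = 0; // derived classes customize only this
};

class Button : public Widget {
private:
    void doDraw() { /* button-specific drawing */ }
};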
There is a pattern named Factory Pattern that would fit this scenario. This allows you to return an instance of the correct class based on some input parameter.
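A minimal sketch of such a factory for this hierarchy (the configuration parameter is a placeholder):

// Returns an instance of the right derived class for a run-time setting.
Base* CreateObject(int configurationValue) {
    if (configurationValue == 1)
        return new Derived1();
    return new Derived2();
}

The caller then works purely through Base's virtual interface.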
Enjoy!
What is wrong with:
Base * b;
if( some_condition ) {
b = new Derived1;
}
else {
b = new Derived2;
}
if ( Derived2* d2 = dynamic_cast<Derived2*>( b ) ) {
d2->SpecialD2DoSomething();
}
Or am I missing something?
And can I suggest that when posting questions like this you (and others) name your classes A, B, C, etc. and your functions things like f1(), f2(), etc.? It makes life a lot easier for people answering your questions.
One way to avoid dynamic_cast is to have a virtual trampoline function "SpecialDoSomething" in the base class, whose derived polymorphic implementation calls that particular derived class's "SpecialDxDoSomething()", which can be whatever non-base-class name you desire. It can even call more than one function.
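A sketch of that trampoline (reusing the question's names):

class Base {
public:
    virtual ~Base() {}
    virtual void DoSomething() = 0;
    // Trampoline: each derived class forwards to its own special method.
    virtual void SpecialDoSomething() = 0;
};

class Derived1 : public Base {
public:
    void DoSomething() { /* ... */ }
    void SpecialD1DoSomething() { /* ... */ }
    void SpecialDoSomething() { SpecialD1DoSomething(); }
};

class Derived2 : public Base {
public:
    void DoSomething() { /* ... */ }
    void SpecialD2DoSomething() { /* ... */ }
    void SpecialDoSomething() { SpecialD2DoSomething(); }
};

// No cast needed any more:
// Base* b = new Derived1();
// b->SpecialDoSomething(); // calls SpecialD1DoSomething()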
Consider two classes
class A {
public:
    A(){
    }
    ~A(){
    }
};

class AImpl : public A {
public:
    AImpl(){
        a = new AInternal();
    }
    AImpl(AInternal *a){
        this->a = a;
    }
    ~AImpl(){
        if (a) {
            delete a;
            a = NULL;
        }
    }
private:
    AInternal *a;
};
I am trying to hide AInternal's implementation and expose only A's interface. Two things I see here:
class A is totally empty.
Hiding is achieved basically through inheritance. I have to actually use downcasting and upcasting from A to AImpl and vice versa.
Is this a good design? Being very inexperienced in designing, I cannot see the pitfalls of it and why it is bad.
You're overcomplicating things by using 3 classes. I think what you're looking for is the pimpl idiom.
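For reference, a minimal sketch of the pimpl idiom (two classes instead of three; the names are illustrative):

// A.h -- public header, exposes no implementation details
class A {
public:
    A();
    ~A();
    void doSomething();
private:
    struct AImpl;  // implementation is only forward-declared here
    AImpl* impl;
};

// A.cpp -- the implementation never appears in the header
#include "A.h"

struct A::AImpl {
    void doSomethingImpl() { /* real work */ }
};

A::A() : impl(new AImpl) {}
A::~A() { delete impl; }
void A::doSomething() { impl->doSomethingImpl(); }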
I am trying to hide AInternal's implementation and expose only A's interface.
I think you are trying to do something like a factory.
Here is an example:
class IA {
public:
IA() {}
virtual ~IA() {}
virtual void dosth() =0;
};
class Factory {
private:
class A : public IA {
public:
A () {}
virtual ~A() {}
void dosth() { cout << "Hello World"; }
};
public:
Factory () {}
virtual ~Factory() {}
IA* newA() { return new A; }
};
And the usage of the Factory class:

int main() {
    Factory f;
    IA* a = f.newA();
    a->dosth();
    delete a;
    return 0;
}
IMO AInternal makes no sense. Whatever you do there should be done in AImpl. Otherwise, it's OK to do that in C++.
The code is rather obtuse, so I would be concerned with maintaining it six months down the road.
If you're going to do it this way, then the destructor ~A needs to be virtual.
You seem to be combining two common design features:
1) AInternal is a "pimpl". It provides for better encapsulation, for example if you need to add a new field to AInternal, then the size of AImpl doesn't change. That's fine.
2) A is a base class used to indicate an interface. Since you talk about upcasting and downcasting, I assume you want dynamic polymorphism, meaning that you'll have functions which pass around pointers or references to A, and at runtime the referents will actually be of type AImpl. That's also fine, except that A's destructor should either be virtual and public, or non-virtual and protected.
I see no other design problems with this code. Of course you'll need to actually define the interface A, by adding some pure virtual member functions to it that you implemented in AImpl. Assuming you plan to do that, there's nothing wrong with using an empty base class for the purpose which in Java is served by interfaces (if you know Java). Generally you'd have some kind of factory which creates AImpl objects, and returns them by pointer or reference to A (hence, upcasts them). If the client code is going to create AImpl objects directly then that might also be fine, and in fact you might not need dynamic polymorphism at all. You could instead get into templates.
What I don't see is why you would ever have to downcast (that is, cast an A* to AImpl*). That's usually bad news. So there may be some problems in your design which can only be revealed by showing us more of the definitions of the classes, and the client code which actually uses A and AImpl.