Proper Inheritance Design - C++

I have three classes named A, B, and C. B inherits from A and C inherits from B (A -> B -> C).
I also have an abstract base class named IBinary. I'd like to make all of the classes implement the IBinary interface. When class A inherits from IBinary, the output of my code is C::readb. When class A does not inherit from IBinary, the output is B::readb.
What is the proper way to make my three classes subscribe to the same interface? If only the top class (A) inherits from the interface class, I'll need to refactor my code so that I don't have resolution problems like the one above.
If I explicitly have all of the classes inherit from the interface class, I'll have a more complicated class hierarchy and come closer to having a diamond of death.
#include <iostream>

class IBinary {
public:
    virtual void readb( std::istream& in ) = 0;
};

// Basic A -- change whether this inherits from IBinary
class A : public IBinary {
public:
    A() {}
    void readb( std::istream& in ) {}
};

// Specialized A
class B : public A {
public:
    B() {}
    void load() {
        this->readb(std::cin); // <-- which readb is called?
    }
    void readb( std::istream& in ) {
        std::cout << "B::readb" << std::endl;
    }
};

// Specialized B
class C : public B {
public:
    C() {}
    void readb( std::istream& in ) {
        std::cout << "C::readb" << std::endl;
    }
    void foo() {
        B::load();
    }
};

int main() {
    C c;
    c.foo();
}

The virtual in the declaration of IBinary::readb makes all the difference.
When you inherit from IBinary, all the readbs in the hierarchy that override the one from IBinary are implicitly virtual, too. So virtual dispatch kicks in, just as it's supposed to.
When you don't, the call is resolved statically. Because the call is inside B, it is B::readb that gets called.
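To see both resolution modes side by side, here is a minimal, self-contained sketch (the class names are hypothetical, not from the question's code). The non-virtual pair binds the call inside load() at compile time; the virtual pair dispatches to the final overrider:

#include <iostream>

struct StaticBase {
    void readb() { std::cout << "StaticBase::readb\n"; }
    void load() { readb(); } // resolved statically: always StaticBase::readb
};
struct StaticDerived : StaticBase {
    void readb() { std::cout << "StaticDerived::readb\n"; } // hides, does not override
};

struct VirtBase {
    virtual ~VirtBase() {}
    virtual void readb() { std::cout << "VirtBase::readb\n"; }
    void load() { readb(); } // virtual dispatch: the final overrider wins
};
struct VirtDerived : VirtBase {
    void readb() { std::cout << "VirtDerived::readb\n"; }
};

int main() {
    StaticDerived s;
    s.load(); // prints StaticBase::readb
    VirtDerived v;
    v.load(); // prints VirtDerived::readb
}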

Just have A inherit from IBinary; that makes all of its children usable through the abstract interface IBinary.

The reason you are seeing this behavior is, in short, that A::readb is not declared virtual.
Because IBinary::readb is virtual, when A inherits from it, A::readb becomes virtual by default.
Your code would behave more consistently if you added virtual to every declaration of readb, rather than just the first. For this reason, many C++ style guides require that virtual methods be declared virtual in every derived class, not only in the ancestor base class.
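As a sketch of that advice in modern C++ (C++11 and later), the override keyword goes one step further than repeating virtual: the compiler rejects any readb that does not actually override. A minimal rewrite of the question's hierarchy might look like this; the virtual destructor is an addition the original code lacked:

#include <iostream>

class IBinary {
public:
    virtual ~IBinary() {}
    virtual void readb( std::istream& in ) = 0;
};

class A : public IBinary {
public:
    virtual void readb( std::istream& in ) override {}
};

class B : public A {
public:
    void load() { readb(std::cin); } // dispatches to the final overrider
    virtual void readb( std::istream& in ) override { std::cout << "B::readb" << std::endl; }
};

class C : public B {
public:
    virtual void readb( std::istream& in ) override { std::cout << "C::readb" << std::endl; }
};

int main() {
    C c;
    c.load(); // prints C::readb
}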

Related

Calling appropriate parent class function from derived class function in multiple inheritance

Suppose I have the following classes:

#include <iostream>
using namespace std;

class Base {
public:
    virtual void func() { cout << "func base" << endl; }
};
class A : virtual public Base {
public:
    virtual void func() { cout << "func A" << endl; }
};
class B : virtual public Base {
public:
    virtual void func() { cout << "func B" << endl; }
};
class C : public A, public B {
public:
    C(bool is_a);
    virtual void func() { /* Call appropriate parent's func here */ }
};
My requirement is that the appropriate parent class' function func() be called when I call C's func(). This is what I mean by appropriate:
Base* ptr = new C(true /*is_a*/);
ptr->func(); // This should call A's func internally
How to achieve this? Is it even possible?
[edit]
This might be more of a design question. It is known that class C will have only one true parent (either A or B). And depending on which is the parent, I want that function to be called. Any alternative design suggestions are welcome.
I am deriving C from A and B because there is some common functionality which both A and B share and which can't be part of the base class.
You need to call A::func() explicitly.
class C : public A, public B {
public:
    virtual void func() { A::func(); }
};
Update (to your edit):
What do you really want to achieve? Is there an actual problem that you want to solve?
If you know that "class C will have only one true parent", why do you need to derive from A and B in the first place? Even though Aconcagua's answer works, it looks more like a workaround for a problem that isn't properly stated on your side. If you are dealing with different implementations of your class, wouldn't you rather employ some kind of pattern (like the bridge or strategy pattern) instead?
This might be more of a design question. It is known that class C will have only one true parent (either A or B). And depending on which is the parent, I want that function to be called. Any alternative design suggestions are welcome.
class Base { };
class A : public Base { };
class B : public Base { };

class C
{
    // common interface for all types of C
};

class C_A : public A, public C { };
class C_B : public B, public C { };

Now you can use C_A wherever a C needs to behave as an A, and C_B analogously...
In the above example, all virtual inheritance has been removed; it is not needed as-is. Depending on the use case, it might or might not be appropriate to let class C itself inherit from Base. If so, let all of A, B and C inherit virtually from Base. C_A and C_B, though, do not need to inherit virtually from their parents (there will still be just one inherited Base instance, due to the virtual inheritance of the base classes).
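A sketch of the variant just described, where C itself inherits from Base and all of A, B and C inherit virtually (the virtual destructor in Base is an addition, for safe deletion through Base*):

#include <iostream>

class Base {
public:
    virtual ~Base() {}
    virtual void func() { std::cout << "func base" << std::endl; }
};
class A : virtual public Base {
public:
    virtual void func() { std::cout << "func A" << std::endl; }
};
class B : virtual public Base {
public:
    virtual void func() { std::cout << "func B" << std::endl; }
};
class C : virtual public Base {
    // common interface for all types of C
};

class C_A : public A, public C { };
class C_B : public B, public C { };

int main() {
    Base* p = new C_A();
    p->func(); // prints "func A": A::func dominates Base::func
    delete p;
}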

Define the method as virtual once in the inheritance hierarchy to make polymorphism work

Is it sufficient to define the method as virtual only once in the inheritance hierarchy to make polymorphism work?
In the following example Der::f is not declared virtual, yet d2->f(); prints der2.
I am using the VS IDE (maybe it only happens there...)
#include <iostream>

class Base
{
public:
    virtual void f() { std::cout << "base"; }
};
class Der : public Base
{
public:
    void f() { std::cout << "der"; } // should be virtual?
};
class Der2 : public Der
{
public:
    void f() { std::cout << "der2"; }
};
int main()
{
    Der* d2 = new Der2();
    d2->f();
}
Yes, polymorphism will work if you declare the method virtual only in your Base class. However, it is a convention I came across while working on some large projects to always repeat the virtual keyword in method declarations in derived classes. It is useful when you are working with many files and a complex class hierarchy (many inheritance levels) where classes are declared in separate files: that way you don't have to look up the base class declaration to check which methods are virtual while adding another derived class.
Everything that inherits from Base, directly or through several layers, will have f() virtual, as if the class declaration had explicitly marked f() virtual when it was declared.
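A sketch of that convention applied to the question's code; in C++11 and later, adding override as well makes the compiler verify that each f() really does override:

#include <iostream>

class Base
{
public:
    virtual ~Base() {}
    virtual void f() { std::cout << "base"; }
};
class Der : public Base
{
public:
    virtual void f() override { std::cout << "der"; }
};
class Der2 : public Der
{
public:
    virtual void f() override { std::cout << "der2"; }
};
int main()
{
    Der* d2 = new Der2();
    d2->f(); // prints "der2": f() is virtual all the way down
    delete d2;
}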

C++ hiding of member functions in an inheritance hierarchy starting with CRTP

Yesterday, I wrote some code and I would really appreciate a judgement on whether this is good or bad practice. And if it's bad, what could go wrong?
The construct is as follows:
The base class A sadly comes from an API as a template. The goal was to be able to put classes derived from A into a std::vector. I achieved this with the base class B, from which all classes derived from A will inherit. Without the pure virtual function in B, the foo() function from A was called.
With the pure virtual function in B, the foo() version from C gets called. This is exactly what I want.
So is this bad practice?
EDIT:
To clear up some misunderstandings about my example code:
#include <iostream>
#include <vector>

template <class T>
class A
{
public:
    /* ctor & dtor & ... */
    T& getDerived() { return *static_cast<T*>(this); }
    void bar()
    {
        getDerived().foo(); // say whatever derived class T will say
        foo();              // say chicken
    }
    void foo() { std::cout << "and chicken" << std::endl; }
};
class B : public A<B>
{
public:
    /* ctor & dtor */
    virtual void foo() = 0; // pure virtual
};
class C : public B
{
public:
    /* ctor & dtor */
    virtual void foo() { std::cout << "cow "; }
};
class D : public B
{
public:
    /* ctor & dtor */
    virtual void foo() { std::cout << "bull "; }
};
The std::vector shall contain both C and D, and
int main()
{
    std::vector<B*> _cows;
    _cows.push_back(new C());
    _cows.push_back(new D());
    for (std::vector<B*>::size_type i = 0; i != _cows.size(); ++i)
        _cows[i]->bar();
}
with output
cow and chicken
bull and chicken
is desired.
As far as I know, there is no other way to store classes derived from a template class in a container than to use a base class. If I use a base class for my derived classes, e.g. B, I must cast every instance within the vector back to its proper class. But at the time bar() is called, I don't know the exact type.
I call it bad practice because it does possibly confusing and obfuscating things.
1) A function with a given name should be either virtual or non-virtual across all base and sub-classes.
Mixing overriding and hiding within the inheritance hierarchy is a very bad idea.
No one really expects something like that.
Thus either A::foo should be virtual as well, or B::foo and C::foo should not be.
2) Preferably, base classes should be interfaces.
As such, A::foo should be pure virtual, not B::foo.
2b) If a base class has already provided a default implementation, don't declare the function pure virtual in a sub-class.
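Applying rule 1 to the question's hierarchy gives a sketch like the following, under the assumption that the API's function can be renamed or wrapped: the CRTP base keeps its non-virtual function under a distinct name (say_chicken, hypothetical), so nothing is hidden, and the single virtual entry point (speak, also hypothetical) is declared only once, in B:

#include <iostream>
#include <memory>
#include <vector>

template <class T>
class A
{
public:
    void bar()
    {
        static_cast<T*>(this)->speak(); // CRTP downcast, then virtual dispatch
        say_chicken();                  // plain non-virtual call, never hidden
    }
    void say_chicken() { std::cout << "and chicken" << std::endl; }
};

class B : public A<B>
{
public:
    virtual ~B() {}
    virtual void speak() = 0; // the one virtual entry point
};

class C : public B
{
public:
    virtual void speak() { std::cout << "cow "; }
};

int main()
{
    std::vector<std::unique_ptr<B> > cows;
    cows.push_back(std::unique_ptr<B>(new C()));
    for (auto& c : cows)
        c->bar(); // prints "cow and chicken"
}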

Refer base class members from derived class

class A {
public:
    void fa() {
    }
};
class B : public A {
public:
    void fb() {
    }
};
class C : public A, public B {
public:
    void fc() {
        // call A::fa(), not B::A::fa();
    }
};
How do I call A::fa() from the C::fc() function?
GCC warns with "direct base A inaccessible in C due to ambiguity"; does this mean there is no direct way to refer to the direct base class members?
One option would be to create a stub class that you can use for casting to the right base class subobject:
struct A {
    void fa() { }
};
struct B : A {
    void fb() { }
};
// Use a stub class that we can cast through:
struct A_ : A { };
struct C : A_, B {
    void fc() {
        implicit_cast<A_&>(*this).fa();
    }
};
Where implicit_cast is defined as:
template <typename T> struct identity { typedef T type; };

template <typename T>
T implicit_cast(typename identity<T>::type& x) { return x; }
I just found the following info in the ISO C++ 2003 standard (10.1.3):

A class shall not be specified as a direct base class of a derived class more than once. [Note: a class can be an indirect base class more than once and can be a direct and an indirect base class. There are limited things that can be done with such a class. The non-static data members and member functions of the direct base class cannot be referred to in the scope of the derived class. However, the static members, enumerations and types can be unambiguously referred to.]
It means there is no direct way :(
I just compiled your code on codepad.org; writing A::fa() is enough to call fa() from your C::fc() function.
void fc() {
    A::fa();
}
Below is the link to codepad with your code:
http://codepad.org/NMFTFRnt
I don't think you can do what you want. There is an ambiguity here: when you say A::fa(), it still doesn't tell the compiler which A subobject to use, and there isn't any way to name the direct base class A. That's what the warning is telling you.
This seems like an awfully strange construct, though. Public inheritance should be used for is-a relationships. You are saying that C is-a A twice over? It doesn't make sense. That suggests this is either an artificial example that would never come up in practice, or you should reconsider the design.
You can use virtual inheritance to overcome such a problem:
class B : virtual public A {
Now you can call fa() simply in the child class C:
void fc()
{
    fa();
}
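A complete sketch of this approach, assuming that C's own direct inheritance from A is made virtual as well; otherwise C would again contain two A subobjects and the call would stay ambiguous:

#include <iostream>

class A {
public:
    void fa() { std::cout << "A::fa" << std::endl; }
};
class B : virtual public A {
public:
    void fb() { }
};
class C : virtual public A, public B {
public:
    void fc() {
        fa(); // unambiguous: there is exactly one shared A subobject
    }
};

int main() {
    C c;
    c.fc(); // prints A::fa
}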
However, I generally don't see any practical need to inherit class A again in class C when B already publicly inherits from A. So, in your case, you can make it simple:
class C : public B {
Edit:
If you want two instances of A, then the direct instance you intend can instead be made a member object of C:
class C : public B {
    A obj;
A directly inherited A would not be usable in any way: you cannot declare any pointer or reference to it inside the scope of C.
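A sketch of that composition alternative, assuming the second A really is needed:

class A {
public:
    void fa() { }
};
class B : public A {
public:
    void fb() { }
};
class C : public B {
    A obj; // the second, independent A lives as a plain member
public:
    void fc() {
        fa();     // the A inherited via B
        obj.fa(); // the member A, reachable without any ambiguity
    }
};

int main() {
    C c;
    c.fc();
}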

Multiple Inheritance from same grandparent - merge implementations?

For a certain project I have declared an interface (a class with only pure virtual functions) and want to offer users some implementations of this interface.
I want users to have great flexibility, so I offer partial implementations of this interface. Every implementation covers some of the functionality; the other functions are not overridden, since they take care of different parts.
However, I also want to present users with a fully usable implementation of the interface as well. My first approach was to simply derive a class from both partial implementations. This did not work; it failed with the error that some functions are still pure virtual in the derived class.
So my question is whether there is any way to simply merge two partial implementations of the same interface. I found a workaround by explicitly stating which function I want to be called for each method, but I consider this pretty ugly and would be grateful for a mechanism that takes care of this for me.
#include <iostream>

class A {
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};
class B : public A {
public:
    void foo() { std::cout << "Foo from B" << std::endl; }
};
class C : public A {
public:
    void bar() { std::cout << "Bar from C" << std::endl; }
};

// Does not work: D stays abstract, so this definition is commented out
// class D : public B, public C {};

// Does work, but is ugly
class D : public B, public C {
public:
    void foo() { B::foo(); }
    void bar() { C::bar(); }
};

int main(int argc, char** argv) {
    D d;
    d.foo();
    d.bar();
}
Regards,
Alexander
The actual problem is managing several visitors for a tree: each visitor traverses the tree and makes a decision for each of the nodes, and the visitors' decisions are then aggregated and accumulated into one definite decision.
Separating the two parts is sadly not possible without (I think) massive overhead, since I want to provide one implementation that takes care of managing the visitors and one that takes care of how to store the final decision.
Have you considered avoiding the diamond inheritance completely, and instead providing several abstract classes, each with optional implementations, so that the user can mix and match default implementations and interfaces as needed?
In your case, what's happening is that by the time you get down to D, B::bar hasn't been implemented and C::foo hasn't been implemented. The intermediate classes B and C aren't able to see each other's implementations.
If you need the full interface in the grandparent, have you considered providing the implementation in a different way, possibly as a policy with templates, with default classes that are dispatched into to provide the default behavior?
If your top-level interface has a logical division in functionality, you should split it into two separate interfaces. For example, if you have both serialization and drawing functions in interface A, you should separate these into two interfaces, ISerialization and IDrawing.
You're then free to provide a default implementation of each of these interfaces. The user of your classes can inherit either your interface or your default implementation as needed, as sketched below.
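A minimal sketch of that split; the names IFoo, IBar, DefaultFoo and DefaultBar are hypothetical stand-ins for interfaces like ISerialization and IDrawing:

#include <iostream>

// Two narrow interfaces instead of one fat one:
class IFoo {
public:
    virtual ~IFoo() {}
    virtual void foo() = 0;
};
class IBar {
public:
    virtual ~IBar() {}
    virtual void bar() = 0;
};

// A default implementation for each half:
class DefaultFoo : public IFoo {
public:
    void foo() { std::cout << "Foo from DefaultFoo" << std::endl; }
};
class DefaultBar : public IBar {
public:
    void bar() { std::cout << "Bar from DefaultBar" << std::endl; }
};

// Users mix and match; the two halves share no grandparent, so no diamond:
class D : public DefaultFoo, public DefaultBar { };

int main() {
    D d;
    d.foo();
    d.bar();
}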
There is also the possibility of using a "factory" class for the main interface type. In other words, the primary interface class also contains some kind of static function that generates an appropriate child class on request from the user. For instance:
#include <cstdio>

class A
{
public:
    enum class_t { CLASS_B, CLASS_C };
    static A* make_a_class(class_t type);
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class B : public A
{
private:
    virtual void foo() { /* does nothing */ }
public:
    virtual void bar() { printf("Called B::bar()\n"); }
};

class C : public A
{
private:
    virtual void bar() { /* does nothing */ }
public:
    virtual void foo() { printf("Called C::foo()\n"); }
};

A* A::make_a_class(class_t type)
{
    switch (type)
    {
        case CLASS_B: return new B();
        case CLASS_C: return new C();
        default: return NULL;
    }
}

int main()
{
    B* Class_B_Obj = static_cast<B*>(A::make_a_class(A::CLASS_B));
    C* Class_C_Obj = static_cast<C*>(A::make_a_class(A::CLASS_C));

    //Class_B_Obj->foo(); // can't access since it's private
    Class_B_Obj->bar();
    Class_C_Obj->foo();
    //Class_C_Obj->bar(); // can't access since it's private

    return 0;
}
If class A for some reason needs to access some private members of class B or class C, just make class A a friend of the child classes. For instance, you could make the constructors of class B and class C private so that only the static function in class A can generate them, and the user can't create one on their own without calling the static factory function in class A.
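A sketch of that friend-plus-private-constructor variation (hypothetical names, independent of the code above):

#include <cstdio>

class Widget
{
public:
    static Widget* make();
    virtual ~Widget() {}
    virtual void foo() = 0;
};

class HiddenWidget : public Widget
{
private:
    friend class Widget;   // only the factory may construct this
    HiddenWidget() {}
public:
    virtual void foo() { printf("Called HiddenWidget::foo()\n"); }
};

Widget* Widget::make() { return new HiddenWidget(); }

int main()
{
    Widget* w = Widget::make(); // the only way to obtain a HiddenWidget
    w->foo();
    delete w;
    return 0;
}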
Hope this helps,
Jason
Since you mentioned that you mainly need access to the functions rather than to data members, here is another method you could use instead of multiple inheritance, using templates and explicit specialization of member functions:
#include <iostream>
using namespace std;

enum class_t { CLASS_A, CLASS_B, CLASS_C };

template <class_t class_type>
class base_type
{
public:
    static void foo() {}
    static void bar() {}
};

template<>
void base_type<CLASS_A>::foo() { cout << "Calling CLASS_A type foo()" << endl; }
template<>
void base_type<CLASS_B>::bar() { cout << "Calling CLASS_B type bar()" << endl; }
template<>
void base_type<CLASS_C>::foo() { base_type<CLASS_A>::foo(); }
template<>
void base_type<CLASS_C>::bar() { base_type<CLASS_B>::bar(); }

int main()
{
    base_type<CLASS_A> Class_A;
    Class_A.foo();

    base_type<CLASS_B> Class_B;
    Class_B.bar();

    base_type<CLASS_C> Class_C;
    Class_C.foo();
    Class_C.bar();

    return 0;
}
Now if you need non-static functions that have access to private data members, this can get a bit trickier, but it should still be doable. It would most likely require a separate traits class that you can use to access the proper types without running into "incomplete type" compiler errors.
Thanks,
Jason
I think the problem is that when using plain (non-virtual) inheritance between B and A, and between C and A, you end up with two subobjects of type A in D. Each of them still has a pure virtual function without a final overrider, which causes a compile error: D is thus abstract, and you try to create an instance of it.
Using virtual inheritance solves the problem, since it ensures there is only one copy of A in D.
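For reference, a minimal sketch of the question's example with that change applied; with a single shared A subobject, B::foo and C::bar become the final overriders in D, so the empty merge class compiles and works:

#include <iostream>

class A {
public:
    virtual ~A() {}
    virtual void foo() = 0;
    virtual void bar() = 0;
};
class B : virtual public A {
public:
    void foo() { std::cout << "Foo from B" << std::endl; }
};
class C : virtual public A {
public:
    void bar() { std::cout << "Bar from C" << std::endl; }
};
class D : public B, public C { }; // no forwarding stubs needed

int main() {
    D d;
    d.foo(); // Foo from B
    d.bar(); // Bar from C
}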