Find out the inheritance relation between two objects' classes in C++

I've got an abstract C++ base class CPlugin. From it, there are many classes derived directly and indirectly. Now, given CPlugin *a, *b, I need to find out whether a's real class is derived from b's real class.
I.e. I'd like to do something like this:
void checkInheritance(CPlugin *a, CPlugin *b) {
    if (getClass(a).isDerivedFrom(getClass(b))) {
        std::cout << "a is a specialization of b's class" << std::endl;
    }
}
But how do I implement the "getClass" and "isDerivedFrom" in C++?

You cannot do this in C++. The only way to get information about types at runtime is RTTI, and RTTI is not powerful enough to do what you need. Please explain what you are trying to achieve; then you will get better answers.

A whole solution is really tough to provide. What you are trying to achieve is behavior that depends on the concrete types of two parameters: this is called double dispatch. A few pages of Modern C++ Design (Andrei Alexandrescu) are devoted to this subject.
Once the actual concrete types of both parameters are known at a single code point, the "isDerivedFrom" part can be answered using Boost.TypeTraits: boost::is_base_of.
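For reference, once both concrete types are visible as template parameters at a single code point, the trait check itself is a one-liner. A minimal sketch using std::is_base_of (the C++11 standard counterpart of boost::is_base_of; the Boost version is used the same way):
#include <type_traits>   // std::is_base_of; <boost/type_traits/is_base_of.hpp> for the Boost version

// Both types must be known statically here, e.g. as deduced template parameters.
template <class DerivedCandidate, class BaseCandidate>
bool isDerivedFrom()
{
    return std::is_base_of<BaseCandidate, DerivedCandidate>::value;
}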

You can use dynamic_cast to test whether an object belongs to a subtype of a type known at compile time. The mechanism for changing behaviour depending on the runtime type of an object is a virtual function, which gives you a scope in which the type of the receiver is known at compile time.
So you can achieve the same effect with a virtual function, so that you have the type at compile time on one side, and then use dynamic_cast to check the other side against that type:
#include <iostream>

class Plugin {
public:
    virtual bool objectIsDerivedFromMyClass ( const Plugin & object ) const = 0;
};

template <typename T, typename BasePlugin = Plugin>
class TypedPlugin : public BasePlugin {
public:
    virtual bool objectIsDerivedFromMyClass ( const Plugin & object ) const {
        return dynamic_cast<const T*> ( &object ) != 0;
    }
private:
    int CheckMe(const T*) const;
};

class PluginA : public TypedPlugin<PluginA> {};
class PluginB : public TypedPlugin<PluginB, PluginA> {};
class PluginC : public TypedPlugin<PluginC> {};

int main () {
    PluginA a;
    PluginB b;
    PluginC c;

    std::cout << std::boolalpha
        << "type of a is derived from type of a " << a.objectIsDerivedFromMyClass ( a ) << '\n'
        << "type of a is derived from type of b " << b.objectIsDerivedFromMyClass ( a ) << '\n'
        << "type of b is derived from type of a " << a.objectIsDerivedFromMyClass ( b ) << '\n'
        << "type of c is derived from type of a " << a.objectIsDerivedFromMyClass ( c ) << '\n'
        ;

    return 0;
}
(You may also want to add a check that T derives from TypedPlugin<T>.)
It's not quite double dispatch, though dynamic_cast is runtime polymorphic on its argument, so it is pretty close.
For anything much more complicated (or if you want to stick with your original style of comparing the objects which represent the runtime types of the objects you have), you need to start creating metaclasses, or use an existing framework which supplies them. Since you're talking about plugins, you may already have somewhere to specify configuration properties or dependencies, and that could be used for this too.

std::type_info and dynamic_cast: http://www.cplusplus.com/reference/std/typeinfo/type_info/

I don't really understand what you are after, but you can always use virtual methods in the following manner:
template <typename Derived>
struct TypeChecker
{
    virtual bool ParentOf(CPlugin const& c) const
    {
        return dynamic_cast<Derived const*>(&c);
    }
};
Now, augment the CPlugin class with the following pure virtual method:
virtual bool ParentOf(CPlugin const& c) const = 0;
And make each class deriving from CPlugin inherit from TypeChecker as well:
class SomePlugin: public CPlugin, private TypeChecker<SomePlugin> {};
And finally use it like such:
void checkInheritance(CPlugin const& lhs, CPlugin const& rhs)
{
    if (!rhs.ParentOf(lhs)) return;
    std::cout << "lhs is derived from rhs' class\n";
}
This does not detect whether it is a strict specialization, though, since both could perfectly well be of the exact same class; that case can be detected with the typeid operator.
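A minimal sketch of that typeid refinement, building on the checkInheritance function above (the name checkStrictInheritance is just for illustration; typeid needs <typeinfo>):
#include <typeinfo>

void checkStrictInheritance(CPlugin const& lhs, CPlugin const& rhs)
{
    if (!rhs.ParentOf(lhs)) return;            // lhs' class is rhs' class or derives from it
    if (typeid(lhs) == typeid(rhs)) return;    // exact same dynamic type: not a strict specialization
    std::cout << "lhs is a strict specialization of rhs' class\n";
}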
Note that this has to be implemented for every single class deriving from CPlugin, and you'll understand why it is so complicated and error-prone...

Related

Inheritance and template instantiations with pointers to simulate "virtual data"

I have a hierarchy of classes:
class Base
{
public:
Base():a{5}{}
virtual ~Base(){};
int a;
};
class Derived : public Base
{
public:
Derived():b{10}{}
int b;
};
I then have a class template that operates on whatever type it is instantiated with:
template<typename T>
class DoStuff
{
public:
DoStuff():val{}{}
virtual ~DoStuff(){};
virtual void printDoStuff() = 0;
T getVal(){return val;};
private:
T val;
};
class DoStuffWithInt : public DoStuff<int>
{
public:
virtual void printDoStuff() override {cout << "val = " << getVal() << endl;}
};
class DoStuffWithBase : public DoStuff<Base>
{
public:
virtual void printDoStuff() {cout << "a = " << getVal().a << endl;}
};
Now I would like to have a hierarchy of classes like this:
class DoStuffWithBase : public DoStuff<Base>
{
public:
virtual void printDoStuff() {printVal(); cout << "a = " << getVal().a << endl;}
};
// Wrong and will not compile, trying to make a point
class DoStuffWithDerived : public DoStuffWithBase<Derived>
{
public:
void printDoStuff() override {DoStuffWithBase::printDoStuff(); cout << "b = " << getVal().b << endl;}
};
Basically I would like to have DoStuffWithBase that operates on a base be extended so that I can reuse its functions, but the extended class DoStuffWithDerived should operate on a Derived type.
I managed to get something working by basing DoStuffWithBase on DoStuff<Base*> (a pointer to Base) and extending it:
template <class T>
static void deleteIfPointer(const T& t)
{
std::cout << "not pointer" << std::endl;
}
template <class T>
static void deleteIfPointer(T* t)
// ^
{
std::cout << "is pointer" << std::endl;
delete t;
}
template<typename T>
class DoStuff
{
public:
DoStuff():val{}{}
DoStuff(const T& value):val{value}{};
virtual ~DoStuff(){deleteIfPointer(val);}
virtual void printDoStuff() = 0;
T getVal(){return val;};
private:
T val;
};
class DoStuffWithBase : public DoStuff<Base*>
{
public:
// New base
DoStuffWithBase(): DoStuff(new Base()){}
DoStuffWithBase(Base* b) : DoStuff(b){}
virtual void printDoStuff() {printVal(); cout << "a = " << getVal()->a << endl;}
};
class DoStuffWithDerived : public DoStuffWithBase
{
public:
// New derived
DoStuffWithDerived(): DoStuffWithBase(new Derived()){}
void printDoStuff() override {DoStuffWithBase::printDoStuff(); cout << "b = " << static_cast<Derived*>(getVal())->b << endl;}
};
It works, but there are several things I don't like:
The code is a lot more complicated, when 99% of the time I won't need to extend a DoStuffWithX class; I will just use DoStuffWithInt, DoStuffWithClass, DoStuffWithAnotherClass, etc. Here I had to add several constructors, a special-case destructor and so on.
I have to use pointers and manage them (static_cast when needed, deletion...), all in order to avoid slicing and get the right type. Also, DoStuff::val should theoretically never be null, but with a pointer there is no way I can prevent that (or at least I don't know one). Maybe using smart pointers would help a bit here? I am not super familiar with them.
I have to handle the cases where T is a pointer and where it is not. For example, the deleteIfPointer function above, but also switching between . and ->, and probably more.
Is there any simpler way to achieve what I am trying to do? A design pattern or something else? Am I stuck with my solution, and is it somewhat good?
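As an aside on the smart-pointer idea above, a minimal sketch of one variation: keeping the DoStuff template from the question but instantiating it with std::shared_ptr<Base> (shared_ptr because getVal() returns by value; with unique_ptr, getVal() would have to return a reference instead). The deleteIfPointer machinery then just takes its non-pointer branch and can be dropped entirely. The class names repeat the ones above purely for comparison:
#include <iostream>
#include <memory>

// Assumes Base, Derived and the DoStuff template from the question are in scope.
class DoStuffWithBase : public DoStuff<std::shared_ptr<Base>>
{
public:
    DoStuffWithBase() : DoStuff(std::make_shared<Base>()) {}
    DoStuffWithBase(std::shared_ptr<Base> b) : DoStuff(b) {}
    virtual void printDoStuff() { std::cout << "a = " << getVal()->a << std::endl; }
};

class DoStuffWithDerived : public DoStuffWithBase
{
public:
    DoStuffWithDerived() : DoStuffWithBase(std::make_shared<Derived>()) {}
    void printDoStuff() override
    {
        DoStuffWithBase::printDoStuff();
        std::cout << "b = " << std::static_pointer_cast<Derived>(getVal())->b << std::endl;
    }
};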
Edit: I tried to implement it with std::variant as in #Tiger4Hire's answer:
class Derived : public Base
{
public:
Derived():b{10}{}
int b;
};
class Derived2 : public Base
{
public:
Derived2():c{12}{}
int c;
};
using DerivedTypes = std::variant<Derived, Derived2>;
struct VariantVisitor
{
void operator()(Derived& d)
{
d.b = 17;
}
void operator()(Derived2& d)
{
d.c = 17;
}
};
class DoStuffWithVariant : public DoStuff<DerivedTypes>
{
public:
void handleBasePart(Base& base)
{
cout << "a = " << base.a << endl;
base.a = 10;
}
virtual void printDoStuff() override
{
auto unionVal_l = getVal();
if (std::holds_alternative<Derived>(unionVal_l))
{
std::cout << "the variant holds a Derived!\n";
auto& derived_l = std::get<0>(unionVal_l);
cout << "b = " << derived_l.b << endl;
handleBasePart(derived_l);
}
else if (std::holds_alternative<Derived2>(unionVal_l))
{
std::cout << "the variant holds a Derived2!\n";
auto& derived2_l = std::get<1>(unionVal_l);
cout << "c = " << derived2_l.c << endl;
handleBasePart(derived2_l);
}
std::visit(VariantVisitor{}, unionVal_l);
}
};
What I like about it:
I don't have to use pointers.
I feel the code is less tricky, easier to understand.
What I don't like about it:
The code is all in one place and deals with all the possible Derived types (and even the Base type) at once, whereas with inheritance the classes are more specialized: you can look at a class and directly know what it does, what it overrides, etc. On the other hand, one could argue that this means the algorithm lives in one place instead of being spread across the class hierarchy.
You can't have an abstract base class as your interface.
All in all it is a really good alternative, but I am still wondering if there is a simpler way to implement dynamic polymorphism. Do you necessarily have to resort to (base class) pointers for dynamic polymorphism? Is std::variant the way to go now?
Edit 2: two other drawbacks of variants that I didn't notice at first:
All your derived classes and your base class have to be defined in the same library. Clients can't easily add a new Derived class, since that would mean modifying the variant, and they might not have access to it.
On the project I am working on, base classes are defined in one library and are derived in other, independent "sub" libraries. So if I try to use a variant in my main library, it won't be able to see the Derived types in the sub-libraries, which is a major issue.
If the class holding the variant (DoStuff here) has other members, then when you call std::visit on the variant you may also have to pass along the other members of DoStuff that the visitor needs. You should be able to use lambdas to capture them, but it's still a lot less straightforward than using them directly, as with inheritance.
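On that last point, a minimal, self-contained illustration (with made-up, simplified class names) of a capturing lambda passed to std::visit, so that the visitor can reach other members of the enclosing class:
#include <iostream>
#include <variant>

struct Derived  { int b = 10; };
struct Derived2 { int c = 12; };

struct DoStuffWithVariant {
    std::variant<Derived, Derived2> val;
    int otherMember = 42;            // extra state the visitor needs

    void printDoStuff() {
        // Capturing 'this' lets the visitor use otherMember alongside the active alternative.
        std::visit([this](auto& d) {
            std::cout << "otherMember = " << otherMember
                      << ", sizeof(alternative) = " << sizeof(d) << '\n';
        }, val);
    }
};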
Your core problem is that you cast away your type information.
C++ will always call the right function, if it knows the correct type. This is why the pattern of pointer-to-base is almost always an anti-pattern (even though it is often taught as the "C++" way to do things).
Modern C++-style is to hold things as strongly-typed pointers, and cast them to the base pointer object, only when calling a function that takes a base-pointer as a parameter.
The standard supports this way of working by providing std::variant. Thus rather than
std::vector<Base*> my_list_of_things;
my_list_of_things.push_back(new Derived); // casting away type is bad
You start with
using DerivedTypes = std::variant<std::unique_ptr<Derived1>,
std::unique_ptr<Derived2>/*,etc*/>;
std::vector<DerivedTypes> my_list_of_things;
Now you can iterate over the list, calling a function which takes a pointer-to-base, casting away the type information only during the call.
You can also visit the members of the list, with a function (often a lambda) that knows exactly the type it is working on.
So you get the best of both worlds!
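A minimal sketch of that workflow (requires C++17; takesBasePointer and the class names are illustrative, not from the question):
#include <iostream>
#include <memory>
#include <type_traits>
#include <variant>
#include <vector>

struct Base     { virtual ~Base() = default; int a = 5; };
struct Derived1 : Base { int b = 10; };
struct Derived2 : Base { int c = 12; };

void takesBasePointer(Base* b) { std::cout << "a = " << b->a << '\n'; }

using DerivedTypes = std::variant<std::unique_ptr<Derived1>, std::unique_ptr<Derived2>>;

int main() {
    std::vector<DerivedTypes> my_list_of_things;
    my_list_of_things.emplace_back(std::make_unique<Derived1>());
    my_list_of_things.emplace_back(std::make_unique<Derived2>());

    for (auto& v : my_list_of_things) {
        // Cast away the concrete type only for the duration of this call.
        std::visit([](auto& p) { takesBasePointer(p.get()); }, v);

        // Or visit with the concrete type fully known inside the lambda.
        std::visit([](auto& p) {
            using T = std::decay_t<decltype(*p)>;
            if constexpr (std::is_same_v<T, Derived1>)
                std::cout << "b = " << p->b << '\n';
            else
                std::cout << "c = " << p->c << '\n';
        }, v);
    }
}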
This does assume you have access to C++17 or above, though, and also that you are not shipping a compiled library whose users define their own derived classes. For example, libraries like Qt can't use this way of working.
If you don't have access to C++17, you may find that the curiously recurring template pattern fits much of what you are doing. (It is a controversial pattern, though, as it is ugly and confusing.)

Arrays of template class objects

Problem
I would like an array of pointers to instances of a template class. My problem would be solved if C++ allowed templated virtual methods in a base class, with a templated derived class.
Therefore, how would one implement templated virtual methods?
Below I have a solution which seems to work, but I'm interested in comments about my implementation.
Constraints
The template parameter is essentially unbounded, i.e., I cannot enumerate every specialization of this template class. The parameter T can be any POD, array of POD, or struct of POD.
The complete set of T is known at compile time. Basically, I have a file which defines all the different T used to instantiate the objects, and use Xmacros (https://en.wikipedia.org/wiki/X_Macro) to create the array of objects.
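For reference, a tiny illustration of the X-macro idea; the Holder/HolderBase names and the type list are hypothetical stand-ins for the classes described later in this post:
#include <array>

// Hypothetical stand-ins for the base class and class template used below.
struct HolderBase { virtual ~HolderBase() = default; };
template <class T> struct Holder : HolderBase { T value{}; };

// The single list of T's lives in one macro (or in an included .def file).
#define TYPE_LIST(X) X(int) X(double) X(float)

// Expand the list into an array of base-class pointers (leaks ignored for brevity).
#define MAKE_HOLDER(T) new Holder<T>,
std::array<HolderBase*, 3> holders = { TYPE_LIST(MAKE_HOLDER) };
#undef MAKE_HOLDER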
I know this isn't a great idea. Let's gloss over that for the time being; this ends up being more of a curiosity.
Possible Solutions
These are the things I've looked into.
Create base and derived classes
class Base {
public:
    virtual void SomeMethod() = 0;
};

template <class T>
class Derived : public Base {
public:
    void SomeMethod() {...}
};
The problem with this is I cannot declare all the virtual methods in Base that I want to overload, as virtual methods cannot be templated. Otherwise, it would be a perfect solution.
std::any/std::variant
I am using C++17, so I could define the virtual base methods taking std::any. But it cannot hold arrays, which precludes its use here.
CRTP
It seems this would not help me create an array of these different objects. I would need to do something like
template <typename D, typename T>
class Base
{
    ...
};

template <typename T>
class Derived : public Base<Derived<T>, T>
{
    ...
};
So I still end up with trying to create an array of Derived<T> objects.
Visitor Pattern
Again, it looks like I would need to enumerate every possible type the Visitable class needs to service. While that is not impossible (again, I have a file which defines all the different T that will be used), it seems like more X macros, which just makes the problem more complicated.
My Solution
This is what I came up with. It will run in https://www.onlinegdb.com/online_c++_compiler
#include <iostream>
#include <array>
#include <typeinfo>
// Base class which declares "overloaded" methods without implementation
class Base {
public:
template <class T>
void Set(T inval);
template <class T>
void Get(T* retval);
virtual void Print() = 0;
};
// Template class which implements the overloaded methods
template <class T>
class Derived : public Base {
public:
void Set(T inval) {
storage = inval;
}
void Get(T* retval) {
*retval = storage;
}
void Print() {
std::cout << "This variable is type " << typeid(T).name() <<
", value: " << storage << std::endl;
}
private:
T storage = {};
};
// Manually pointing base overloads to template methods
template <class T> void Base::Set(T inval) {
static_cast<Derived<T>*>(this)->Set(inval);
}
template <class T> void Base::Get(T* retval) {
std::cout << "CALLED THROUGH BASE!" << std::endl;
static_cast<Derived<T>*>(this)->Get(retval);
}
int main()
{
// Two new objects
Derived<int>* ptr_int = new Derived<int>();
Derived<double>* ptr_dbl = new Derived<double>();
// Base pointer array
std::array<Base*, 2> ptr_arr;
ptr_arr[0] = ptr_int;
ptr_arr[1] = ptr_dbl;
// Load values into objects through calls to Base methods
ptr_arr[0]->Set(3);
ptr_arr[1]->Set(3.14);
// Call true virtual Print() method
for (auto& ptr : ptr_arr) ptr->Print();
// Read out the values
int var_int;
double var_dbl;
std::cout << "First calling Get() method through true pointer." << std::endl;
ptr_int->Get(&var_int);
ptr_dbl->Get(&var_dbl);
std::cout << "Direct values: " << var_int << ", " << var_dbl << std::endl;
std::cout << "Now calling Get() method through base pointer." << std::endl;
ptr_arr[0]->Get(&var_int);
ptr_arr[1]->Get(&var_dbl);
std::cout << "Base values: " << var_int << ", " << var_dbl << std::endl;
return 0;
}
When this is run, it shows that calling the methods on Base correctly point to the Derived implementations.
This variable is type i, value: 3
This variable is type d, value: 3.14
First calling Get() method through true pointer.
Direct values: 3, 3.14
Now calling Get() method through base pointer.
CALLED THROUGH BASE!
CALLED THROUGH BASE!
Base values: 3, 3.14
Essentially I am manually creating the virtual method pointers. But, since I am explicitly doing so, I am allowed to use template methods in Base which point to the methods in Derived. It is more prone to error, as for example for each template method I need to type the method name twice, i.e., I could mess up:
template <class T> void Base::BLAH_SOMETHING(T inval) {
static_cast<Derived<T>*>(this)->WHOOPS_WRONG_CALL(inval);
}
So after all this, is this a terrible idea? To me it seems to achieve my objective of circumventing the limitation of templated virtual methods. Is there something really wrong with this? I understand there could be ways to structure the code that make all this unnecessary, I am just focusing on this specific construction.
It is more prone to error, as for example for each template method I need to type the method name twice
Oh, that's the least of your concerns. Imagine if you downcast to the wrong type.
At least save yourself a headache and use dynamic_cast:
template <class T>
class Derived;   // forward declaration so Base's member templates can name Derived<T>

class Base {
public:
    virtual ~Base() = default;

    template <class T>
    void Set(T inval) {
        dynamic_cast<Derived<T>&>(*this).Set(inval);
    }

    template <class T>
    T Get() {
        return dynamic_cast<Derived<T>&>(*this).Get();
    }
};

template <class T>
class Derived : public Base {
public:
    void Set(T inval) {
        storage = inval;
    }
    T Get() {
        return storage;
    }
private:
    T storage{};
};
Other than that, I agree with the comments, this is probably not the right approach to your problem.
The normal run-of-the-mill method of dealing with subclasses that contain unknown types is to move the entire thing into a virtual function. Thus, instead of
superclass->get_value(&variable_of_unknown_type);
print(variable_of_unknown_type);
you write
superclass->print_value();
Now you don't need to know about any of the types a subclass might contain.
This is not always appropriate though, because there could be lots of operations. Making every operation a virtual function is troublesome if you are adding new operations all the time. On the other hand, the set of possible subclasses is often limited. In this case your best bet is the Visitor. Visitor rotates the inheritance hierarchy 90°, so to speak. Instead of fixing the set of operations and adding new subclasses freely, you fix the set of subclasses and add new operations freely. So instead of
superclass->print_value();
you write
class PrinterVisitor : public MyVisitor
{
    virtual void processSubclass1(Subclass1* s) { print(s->double_value); }
    virtual void processSubclass2(Subclass2* s) { print(s->int_value); }
};

superclass->accept(PrinterVisitor());
Now accept is the only virtual function in your base class. Note there are no casts that could possibly fail anywhere in the code.
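For completeness, the accept/visitor plumbing implied above might look roughly like this (a sketch; here accept takes the visitor by reference, so it would typically be called with a named visitor object rather than the temporary shown above):
class Subclass1;
class Subclass2;

class MyVisitor {
public:
    virtual ~MyVisitor() = default;
    virtual void processSubclass1(Subclass1* s) = 0;
    virtual void processSubclass2(Subclass2* s) = 0;
};

class Superclass {
public:
    virtual ~Superclass() = default;
    virtual void accept(MyVisitor& v) = 0;   // the only virtual operation on the hierarchy
};

class Subclass1 : public Superclass {
public:
    double double_value = 0.0;
    void accept(MyVisitor& v) override { v.processSubclass1(this); }
};

class Subclass2 : public Superclass {
public:
    int int_value = 0;
    void accept(MyVisitor& v) override { v.processSubclass2(this); }
};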

Is there any use for a class containing only (by default) private members in C++?

Members of a class are private by default in C++.
Hence, I wonder whether there is any possible use of creating a class that has all its members (variables and functions) set by default to private.
In other words, does there exist any meaningful class definition without any of the keywords public, protected or private?
There is a pattern, used for access protection, based on that kind of class: sometimes it's called passkey pattern (see also clean C++ granular friend equivalent? (Answer: Attorney-Client Idiom) and How to name this key-oriented access-protection pattern?).
Only a friend of the key class has access to protectedMethod():
// All members set by default to private
class PassKey { friend class Foo; PassKey() {} };

class Bar
{
public:
    void protectedMethod(PassKey);
};

class Foo
{
    void do_stuff(Bar& b)
    {
        b.protectedMethod(PassKey()); // works, Foo is friend of PassKey
    }
};

class Baz
{
    void do_stuff(Bar& b)
    {
        b.protectedMethod(PassKey()); // error, PassKey() is private
    }
};
Tag dispatching. It's used in the standard library for iterator category tags, in order to select algorithms which may be more efficient with certain iterator categories. For example, std::distance may be implemented something like this: (in fact it is implemented almost exactly like this in gnu libstdc++, but I've modified it slightly to improve readability)
template<typename Iterator>
typename iterator_traits<Iterator>::difference_type
distance(Iterator first, Iterator last)
{
    return __distance(first, last,
                      typename iterator_traits<Iterator>::iterator_category());
}
Where __distance is a function which is overloaded to behave more efficiently for std::random_access_iterator_tag (which is an empty struct, but could just as easily be a class), simply using last - first instead of the default behavior of counting how many increments it takes to get first to last.
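Roughly, the two overloads might look like this (a sketch, not the actual libstdc++ source; the double-underscore name just mirrors the snippet above and is reserved for the implementation, so user code would pick a different name):
#include <iterator>

// Fallback: count increments one at a time (works for any input iterator).
template <typename InputIterator>
typename std::iterator_traits<InputIterator>::difference_type
__distance(InputIterator first, InputIterator last, std::input_iterator_tag)
{
    typename std::iterator_traits<InputIterator>::difference_type n = 0;
    while (first != last) { ++first; ++n; }
    return n;
}

// Picked by overload resolution for random-access iterators: a single subtraction.
template <typename RandomAccessIterator>
typename std::iterator_traits<RandomAccessIterator>::difference_type
__distance(RandomAccessIterator first, RandomAccessIterator last, std::random_access_iterator_tag)
{
    return last - first;
}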
Application-wide resource acquisition?
#include <iostream>

class C {
    C() {
        std::cout << "Acquire resource" << std::endl;
    }
    ~C() {
        std::cout << "Release resource" << std::endl;
    }
    static C c;
};

C C::c;

int main() {
    return 0;
}
As stated in the comments below, I have in mind an industrial application that had to "lock" some hardware device while the program was running. But one could probably find other uses for this as, after all, it is only a "degenerate" case of RAII.
As for using "private" methods outside the declaration block: I use a static member here, so it is declared at a point where private members are accessible. You're not limited to the constructor/destructor. You can even (ab)use a static method and then invoke private instance methods through a fluent interface:
class C {
    C() { std::cout << "Ctor " << this << std::endl; }
    ~C() { std::cout << "Dtor" << this << std::endl; }

    static C* init(const char* mode) {
        static C theC;
        std::cout << "Init " << mode << std::endl;
        return &theC;
    }

    C* doThis() {
        std::cout << "doThis " << std::endl;
        return this;
    }

    C* doThat() {
        std::cout << "doThat " << std::endl;
        return this;
    }

    static C *c;
};

C *C::c = C::init("XYZ")
              ->doThis()
              ->doThat();

int main() {
    std::cout << "Running " << std::endl;
    return 0;
}
That code is still valid (as all C members are accessible at the point of declaration of C::c) and will produce something like this:
Ctor 0x601430
Init XYZ
doThis
doThat
Running
Dtor0x601430
Meaningful? Good practice? Probably not, but here goes:
class DataContainer {
    friend class DataUser;
    int someDataYouShouldNotWorryAbout;
};

class DataUser {
public:
    DataUser() {
        container.someDataYouShouldNotWorryAbout = 42;
    }
private:
    DataContainer container;
};
No, there is no point in creating a class without any public member variables and/or functions, since there would be no way to access anything in the class. Even if not explicitly stated, the inheritance is private as well.
Sure, you could use friend as suggested, but it would create unneeded convolution.
On the other hand, if you use struct rather than class to define a class, everything is public by default. That may make sense.
For example :
struct MyHwMap {
    unsigned int field1 : 16;
    unsigned int field2 : 8;
    unsigned int fieldA : 24;
};
An admittedly ugly case from many, many years ago, and not in C++, but the idea would still apply:
There was a bug in the runtime library. Actually fixing the offending code would cause other problems, so I wrote a routine that found the offending piece of code and replaced it with a version that worked. The original incarnation had no interface at all beyond its creation.
A derived class can be all-private, even its virtual methods redefining/implementing base-class methods.
To construct instances you can have friend classes or functions (e.g. factory), and/or register the class/instance in a registry.
An example of this might be classes representing "modules" in a library. E.g. wxWidgets has something like that (they are registered and do init/deinit).
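A sketch of what such an all-private "module" class could look like (all names here are made up for illustration, not taken from wxWidgets):
#include <iostream>

class ModuleRegistry;                       // hypothetical friend that creates and registers modules

class ModuleBase {                          // public interface the library core works with
public:
    virtual ~ModuleBase() = default;
    virtual void onInit() = 0;
    virtual void onExit() = 0;
};

// Everything in the concrete module is private; only the registry can construct it,
// and the virtual overrides are reached through ModuleBase only.
class NetworkModule : public ModuleBase {
    friend class ModuleRegistry;

    NetworkModule() = default;
    void onInit() override { std::cout << "network module up\n"; }
    void onExit() override { std::cout << "network module down\n"; }
};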

How can I combine generic iterator-based algorithms with implementation based algorithms?

I am using the Strategy Pattern, together with the Abstract Factory Pattern to generate different algorithms in a Calculator class during run-time.
The calculations will depend on a Relationship between involved types. This is why I made the "*Algorithm::calculate" a member function template, generic with respect to a Relationship.
However, I already have an algorithm in the existing code that is completely implementation-based; it is neither generic nor iterator-based, and I want to add it to the algorithm hierarchy so that I can produce it using the AbstractFactory as well and see how it behaves.
An implementation-based algorithm uses the member functions of the types involved in the calculations to get the calculation done. In this example, it would use RelationshipWithA::target_type member functions to access the data of the Type&, as well as "A" member functions to access the data of RelationshipWithA::a_.
This is what I came up with so far (this is just a model, without the Abstract Factory, and the Calculator class):
#include <iostream>
class Result{};
class A {};
class B {
public:
void specific() const
{
std::cout << "B::specific()" << std::endl;
};
};
class C : public B {};
class D {};
template<class Type>
class RelationshipWithA
{
const A& a_;
const Type& t_;
public:
typedef Type target_type;
RelationshipWithA (const A& a, const Type& t)
:
a_(a),
t_(t)
{
std::cout << "RelationshipWithA::ctor" << std::endl;
};
const A& a() const
{
return a_;
}
const Type& type() const
{
return t_;
}
};
class DefaultAlgorithm
{
public:
template <class Relationship>
void calculate (Result& res, const Relationship& r)
{
std::cout << "DefaultAlgorithm::calculate" << std::endl;
const A& a = r.a();
const typename Relationship::target_type& t = r.type();
// Default iterator based calculation on a, target_type and r
};
};
class AlternativeAlgorithm
:
public DefaultAlgorithm
{
public:
template <class Relationship>
void calculate (Result& res, const Relationship& r)
{
std::cout << "AlternativeAlgorithm::calculate" << std::endl;
// Optimized iterator based calculation on a, target_type and r
}
};
class ImplementationBasedAlgorithm
:
public DefaultAlgorithm
{
public:
// No specialization: Relationships store
// a const reference to any class that inherits from B
template <class Relationship>
void calculate (Result& res, const Relationship& r)
{
// Use B implementation and the Relationship With A to compute the result
std::cout << "ImplementationBasedAlgorithm::calculate" << std::endl;
const A& a = r.a();
const B& b = r.type();
b.specific();
// Implementation based on B implementation
}
};
int main(int argc, const char *argv[])
{
Result res;
A a;
C c;
RelationshipWithA<C> relationshipAC (a, c);
DefaultAlgorithm defaultAlg;
AlternativeAlgorithm alternativeAlg;
ImplementationBasedAlgorithm implementationAlg;
defaultAlg.calculate(res, relationshipAC);
alternativeAlg.calculate(res, relationshipAC);
implementationAlg.calculate(res,relationshipAC);
D d;
RelationshipWithA<D> relationshipAD (a, d);
defaultAlg.calculate(res, relationshipAD);
alternativeAlg.calculate(res, relationshipAD);
// This fails, as expected
//implementationAlg.calculate(res,relationshipAD);
return 0;
}
I like this design because the algorithms are not generic classes, which makes it easy for the Generic Abstract Factory to produce them during run-time.
However, in Effective C++ there is Item 36, which says: "never redefine an inherited non-virtual function". I mean, non-virtual functions are implementation-invariant and should not be overridden in general, but:
There are no virtual member function templates available in C++.
If I make the Algorithm classes generic on RelationshipWithA and "*Algorithm::calculate" a virtual member function, the Factory needs to know about the Relationship in order to generate Algorithms, and the code gets seriously smelly (to me at least).
Is this then a proper solution for the problem, even though I override inherited non-virtual functions (function templates)?
To the client, there is no difference in behaviour whatsoever: the result is there; the only difference is in the way it is computed. This means that the Is-A relationship is still upheld: "*Algorithm::calculate" is still implementation-invariant to the client.
It isn't really an Is-A relationship...
The specific implementations aren't really a DefaultAlgorithm... they are specific algorithms...
You could have an empty BaseAlgorithm class that you can create with the factory. But then you'll need to cast it to the right type anyway before using the template functions. This rather defeats the factory pattern, because you aren't using an interface.
In your case if the factory creates one of the derived classes but returns the base class, if you use that variable, it will call the base class methods:
DefaultAlgorithm algo = Factory.CreateImplementationBasedAlgorithm();
RelationshipWithA<D> relationshipAD (a, d);
algo.calculate(res, relationshipAD); //won't fail because the base class methods are used (because it isn't virtual)
To fix that, you could make a base Relationship class and make the calculate() method virtual.
calculate() would then receive the base Relationship, and inside each algorithm you could cast it to some base_relationship interface that exposes exactly the a() or type() methods that algorithm needs, so a relationship that doesn't provide them is rejected.
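A rough sketch of that direction, reusing Result, A and B from the question; the narrow RelationshipWithB interface and the use of dynamic_cast (so a mismatch fails loudly at run time rather than silently) are illustrative additions, not part of the original answer:
// Hypothetical common base so the factory can hand out algorithms behind one interface.
struct RelationshipBase {
    virtual ~RelationshipBase() = default;
};

// A narrower interface exposing exactly what the implementation-based algorithm needs.
struct RelationshipWithB : RelationshipBase {
    virtual const A& a() const = 0;
    virtual const B& type() const = 0;
};

struct BaseAlgorithm {
    virtual ~BaseAlgorithm() = default;
    virtual void calculate(Result& res, const RelationshipBase& r) = 0;
};

struct ImplementationBasedAlgorithm : BaseAlgorithm {
    void calculate(Result&, const RelationshipBase& r) override {
        // Throws std::bad_cast if the relationship does not model RelationshipWithB.
        const RelationshipWithB& rb = dynamic_cast<const RelationshipWithB&>(r);
        rb.type().specific();
    }
};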

Enforcing correct parameter types in derived virtual function

I'm finding it difficult to describe this problem very concisely, so I've attached the code for a demonstration program.
The general idea is that we want a set of Derived classes that are forced to implement some abstract Foo() function from a Base class. Each of the derived Foo() calls must accept a different parameter as input, but all of the parameters should also be derived from a BaseInput class.
We see two possible solutions so far, neither we're very happy with:
Remove the Foo() function from the base class and reimplement it with the correct input types in each Derived class. This, however, removes the enforcement that it be implemented in the same manner in each derived class.
Do some kind of dynamic cast inside the receiving function to verify that the type received is correct. However, this does not prevent the programmer from making an error and passing the incorrect input data type. We would like the type to be passed to the Foo() function to be compile-time correct.
Is there some sort of pattern that could enforce this kind of behaviour? Is this whole idea breaking some sort of fundamental idea underlying OOP? We'd really like to hear your input on possible solutions outside of what we've come up with.
Thanks so much!
#include <iostream>
// these inputs will be sent to our Foo function below
class BaseInput {};
class Derived1Input : public BaseInput { public: int d1Custom; };
class Derived2Input : public BaseInput { public: float d2Custom; };
class Base
{
public:
virtual void Foo(BaseInput& i) = 0;
};
class Derived1 : public Base
{
public:
// we don't know what type the input is -- do we have to try to cast to what we want
// and see if it works?
virtual void Foo(BaseInput& i) { std::cout << "I don't want to cast this..." << std::endl; }
// prefer something like this, but then it's not overriding the Base implementation
//virtual void Foo(Derived1Input& i) { std::cout << "Derived1 did something with Derived1Input..." << std::endl; }
};
class Derived2 : public Base
{
public:
// we don't know what type the input is -- do we have to try to cast to what we want
// and see if it works?
virtual void Foo(BaseInput& i) { std::cout << "I don't want to cast this..." << std::endl; }
// prefer something like this, but then it's not overriding the Base implementation
//virtual void Foo(Derived2Input& i) { std::cout << "Derived2 did something with Derived2Input..." << std::endl; }
};
int main()
{
Derived1 d1; Derived1Input d1i;
Derived2 d2; Derived2Input d2i;
// set up some dummy data
d1i.d1Custom = 1;
d2i.d2Custom = 1.f;
d1.Foo(d2i); // this compiles, but is a mistake! how can we avoid this?
// Derived1::Foo() should only accept Derived1Input, but then
// we can't declare Foo() in the Base class.
return 0;
}
Since your Derived class is-a Base class, it should never tighten the base contract's preconditions: if it has to behave like a Base, it should accept BaseInput all right. This is known as the Liskov Substitution Principle.
Although you can do runtime checking of your argument, you can never achieve a fully type-safe way of doing this: your compiler may be able to match the DerivedInput when it sees a Derived object (static type), but it cannot know what subtype is going to be behind a Base object...
The requirements
DerivedX should take a DerivedXInput
DerivedX::Foo should be interface-equal to DerivedY::Foo
contradict: either the Foo methods are implemented in terms of the BaseInput, and thus have identical interfaces in all derived classes, or the DerivedXInput types differ, and they cannot have the same interface.
That's, in my opinion, the problem.
This problem occurred to me, too, when writing tightly coupled classes that are handled in a type-unaware framework:
#include <cassert>

class Fruit {
public:
    virtual ~Fruit() {}          // polymorphic, so dynamic_cast and delete-through-base work
};

class FruitTree {
public:
    virtual Fruit* pick() = 0;
};

class FruitEater {
public:
    virtual void eat( Fruit* ) = 0;
};

class Banana : public Fruit {};

class BananaTree : public FruitTree {
public:
    virtual Banana* pick() { return new Banana; }
};

class BananaEater : public FruitEater {
public:
    void eat( Fruit* f ){
        assert( dynamic_cast<Banana*>(f)!=0 );
        delete f;
    }
};
And a framework:
struct FruitPipeLine {
    FruitTree*  tree;
    FruitEater* eater;

    void cycle(){
        eater->eat( tree->pick() );
    }
};
Now this turns out to be a design that's too easily broken: there's no part of the design that aligns the trees with the eaters:
FruitPipeLine pipe = { new BananaTree, new LemonEater }; // compiles fine
pipe.cycle(); // crash, probably.
You may improve the cohesion of the design, and remove the need for virtual dispatching, by making it a template:
template<class F> class Tree {
public:
    F* pick(); // no implementation
};

template<class F> class Eater {
public:
    void eat( F* f ){ delete f; } // default implementation is possible
};

template<class F> class PipeLine {
public:
    Tree<F>  tree;
    Eater<F> eater;
    void cycle(){ eater.eat( tree.pick() ); }
};
The implementations are really template specializations:
template<> class Tree<Banana> {
public:
    Banana* pick(){ return new Banana; }
};
...

PipeLine<Banana> pipe; // can't be wrong
pipe.cycle();          // no typechecking needed.
You might be able to use a variation of the curiously recurring template pattern.
class Base {
public:
    // Stuff that doesn't depend on the input type.
};

template <typename Input>
class Middle : public Base {
public:
    virtual void Foo(Input &i) = 0;
};

class Derived1 : public Middle<Derived1Input> {
public:
    virtual void Foo(Derived1Input &i) { ... }
};

class Derived2 : public Middle<Derived2Input> {
public:
    virtual void Foo(Derived2Input &i) { ... }
};
This is untested, just a shot from the hip!
If you don't mind the dynamic cast, how about this:
class BaseInput;   // note: BaseInput must be polymorphic (e.g. have a virtual destructor) for the dynamic_cast below

class Base
{
public:
    void foo(BaseInput & x) { foo_dispatch(x); }
private:
    virtual void foo_dispatch(BaseInput &) = 0;
};

template <typename TInput = BaseInput> // default value to enforce nothing
class FooDispatch : public Base
{
    virtual void foo_dispatch(BaseInput & x)
    {
        foo_impl(dynamic_cast<TInput &>(x));
    }
    virtual void foo_impl(TInput &) = 0;
};

class Derived1 : public FooDispatch<Der1Input>
{
    virtual void foo_impl(Der1Input & x) { /* your implementation here */ }
};
That way, you've built the dynamic type checking into the intermediate class, and your clients only ever derive from FooDispatch<DerivedInput>.
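A short usage sketch (assuming Der1Input and Der2Input derive from a polymorphic BaseInput, as in the question):
int main()
{
    Derived1 d1;
    Base& b = d1;

    Der1Input good;
    b.foo(good);          // goes through foo_dispatch to Derived1::foo_impl

    // Der2Input wrong;
    // b.foo(wrong);      // would still compile, but dynamic_cast<Der1Input&> throws std::bad_cast at run time

    return 0;
}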
What you are talking about are covariant argument types, and that is quite an uncommon feature in a language, as it breaks your contract: You promised to accept a base_input object because you inherit from base, but you want the compiler to reject all but a small subset of base_inputs...
It is much more common for programming languages to offer the opposite: contra-variant argument types, as the derived type will not only accept everything that it is bound to accept by the contract, but also other types.
At any rate, C++ does not offer contravariance in argument types either, only covariance in the return type.
C++ has a lot of dark areas, so it's hard to say any specific thing is undoable, but going from the dark areas I do know, without a cast, this cannot be done. The virtual function specified in the base class requires the argument type to remain the same in all the children.
I am sure a cast can be used in a non-painful way, though, perhaps by giving the base class an enum 'type' member that is uniquely set by the constructor of each possible child that might inherit it. Foo() can then check that 'type' and determine what it has before doing anything, failing an assertion if it is surprised by something unexpected. It isn't compile time, but it's the closest compromise I can think of, while still having the benefit of requiring a Foo() to be defined.
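A rough sketch of that enum-tag compromise, reading "the base class" as BaseInput (names are illustrative; the check still happens at run time):
#include <cassert>

enum class InputKind { Derived1, Derived2 };

struct BaseInput {
    explicit BaseInput(InputKind k) : kind(k) {}
    InputKind kind;                     // set once by each concrete input's constructor
};

struct Derived1Input : BaseInput {
    Derived1Input() : BaseInput(InputKind::Derived1) {}
    int d1Custom = 0;
};

struct Base {
    virtual ~Base() = default;
    virtual void Foo(BaseInput& i) = 0;
};

struct Derived1 : Base {
    void Foo(BaseInput& i) override {
        // Not compile-time safety, but the mismatch is caught as soon as Foo runs.
        assert(i.kind == InputKind::Derived1 && "Derived1::Foo expects a Derived1Input");
        Derived1Input& d1i = static_cast<Derived1Input&>(i);
        (void)d1i.d1Custom;             // ... use the concrete input ...
    }
};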
It's certainly restricted, but you can use/simulate covariance in constructor parameters.