How do you abstract interface classes - c++

I've been working on my own little framework recently (just for interest, actually :) ). I'd like to abstract an interface that describes a process with an input and an output, so I defined a class BaseInput and a class BaseOutput. Here is the interface:
class IProcess
{
public:
virtual void Proc(BaseInput &input) = 0;
};
And here is my problem: according to the rules of C++, the classes derived from IProcess have to take BaseInput as their argument type. I would like the subclasses to look like this (I know it's wrong):
class ProcessA : public IProcess
{
public:
void Proc(MyInput &input) override;
};
I know this doesn't compile. I also know that I could convert the argument to a MyInput pointer inside ProcessA::Proc. I considered Dependency Injection, but I don't know whether it would solve my problem.
How do you solve this kind of problem in real projects?
P.S. I actually found a similar situation here
Edit 1:
Well, I'm sorry for my poor wording, and thanks to everyone for helping. MyInput actually carries some data, like this:
class MyInput : public BaseInput
{
public:
//... some functions
std::vector<int> m_Data;
};
In other words, the argument's type should be MyInput& if I want to access m_Data from Proc without any pointer conversion. What I want to achieve is an architecture like .NET Core MVC: an input worker class accepts different inputs (from files, the internet, serial ports, ...), packs them into classes derived from BaseInput, and hands them to process classes derived from IProcess (maybe with some middleware in between), which finally return a result packaged in output classes derived from BaseOutput.
It might be a really dumb architecture, and I'm also wondering how to make it better. I had also considered not packing the input, but I don't know how :-x
Thanks to all of you again. 🙏

Actually, MyInput must inherit from BaseInput. Then ProcessA::Proc must have the same prototype as the IProcess::Proc it is supposed to override, so it must take a BaseInput as its parameter too.
Thanks to polymorphism, you can still pass a MyInput when calling the function: the parameter is a reference, and MyInput inherits from BaseInput.
Here is an example:
.h:
class BaseInput
{
public:
virtual ~BaseInput();
virtual void display();
};
class MyInput : public BaseInput
{
public:
void display() override;
};
class IProcess
{
public:
virtual ~IProcess();
virtual void proc(BaseInput & input) = 0;
};
class ProcessA : public IProcess
{
public:
void proc(BaseInput & input) override;
};
.cpp:
#include <iostream> // for std::cout

BaseInput::~BaseInput()
{}
void BaseInput::display()
{
std::cout << "BaseInput::display()" << std::endl;
}
void MyInput::display()
{
std::cout << "MyInput::display()" << std::endl;
}
IProcess::~IProcess()
{}
void ProcessA::proc(BaseInput & input)
{
input.display();
}
main:
int main()
{
MyInput mi;
ProcessA pa;
pa.proc(mi); // Pass a MyInput
return 0;
}
The output is (as expected):
MyInput::display()
EDIT (answer to question's edit 1):
You have two solutions.
Either you define in BaseInput the required methods to be implemented by every input type (as I did with display()). In your case it could be a getData() member, for example.
Or you will have to dynamic_cast the given BaseInput & into a MyInput &.
Keep in mind that if dynamic_cast fails with pointers it returns nullptr, but if it fails with references it throws a std::bad_cast exception.
Since you use references, you will have to catch that exception wherever failure is possible (i.e. wherever another type of input might be given).
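For illustration, here is a minimal, self-contained sketch of the second option; the class names and m_Data member are taken from the question, while the printed messages and main() are made up:
#include <iostream>
#include <typeinfo>
#include <vector>
class BaseInput { public: virtual ~BaseInput() = default; };
class MyInput : public BaseInput { public: std::vector<int> m_Data; };
// Stands in for ProcessA::Proc
void proc(BaseInput & input)
{
    try
    {
        MyInput & myInput = dynamic_cast<MyInput &>(input);
        std::cout << "MyInput with " << myInput.m_Data.size() << " values" << std::endl;
    }
    catch (const std::bad_cast &)
    {
        std::cout << "some other kind of BaseInput" << std::endl;
    }
}
int main()
{
    MyInput mi;
    mi.m_Data = {1, 2, 3};
    proc(mi); // prints "MyInput with 3 values"
    BaseInput bi;
    proc(bi); // prints "some other kind of BaseInput"
    return 0;
}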

Preferred way to understand object type at runtime

Consider I have a Plant class with derived Fruit and Vegetable classes; Fruit has some more derived classes, like Orange and Apple, while Vegetable has derived Potato and Tomato. Assume Plant has a pure virtual Plant::onConsume() method:
class Plant
{
public:
virtual void onConsume(void)=0;
};
class Fruit:public Plant
{
};
class Orange:public Fruit
{
void onConsume(void)
{
// Do something specific here
}
};
class Apple:public Fruit
{
void onConsume(void)
{
// Do something specific here
}
};
class Vegetable:public Plant
{
};
class Potato:public Vegetable
{
void onConsume(void)
{
// Do something specific here
}
};
class Tomato:public Vegetable
{
void onConsume(void)
{
// Do something specific here
}
};
class Consumer
{
public:
void consume(Plant &p)
{
p.onConsume();
// Specific actions depending on actual p type here
// like send REST command to the remote host for Orange
// or draw a red square on the screen for Tomato
}
};
Suppose I have a Consumer class with a Consumer::consume(Plant) method. This consume method should perform different actions for different Plant instances/types, in addition to calling Plant::onConsume() for any of them. These actions aren't directly related to the Plant class, require a lot of different additional actions and parameters, and could literally be completely arbitrary, so they cannot be implemented inside the onConsume method.
What is the preferred way to implement this? As I understand it, I could add some Plant::getPlantType()=0 method that returns the plant type, but then I'm not sure what it should return. If the returned value were an enum, I'd need to change that enum each time I add a new derived class. And in any case, nothing prevents multiple derived classes from returning the same value.
I'm also aware of dynamic_cast, which returns nullptr if a pointer conversion cannot be made, and of the typeid() operator, which returns a std::type_info (even with type_info::name()) that could be used in a switch (which would be just great for my case). But I'm afraid this could significantly slow down execution and make the code heavier.
So, my question is: what is the preferred way to do this in C++? Maybe I've just forgotten about some simpler way to implement it?
A little update. Thank you for your explanations about inheritance, encapsulation, etc.! I assumed this was clear from my question, but it is not, and I'm sorry about that. Please treat it as if I don't have access to the whole Plant class hierarchy and just need to implement this Consumer::consume(Plant), so I cannot add new specific methods to the hierarchy. Alternatively, think of it as a Plant library that I have to write once and make usable for other developers. So I would divide the use cases/functionality into two parts: one implemented per class in the Plant::onConsume() method, and a second part that is unknown yet and will differ depending on usage.
One option would be the visitor pattern, though it requires one function per type in some class. Basically, you create a base class PlantVisitor with one Visit function per object type and add a virtual method to Plant that receives a PlantVisitor object and calls the corresponding function of the visitor, passing itself as the parameter:
class PlantVisitor
{
public:
virtual void Visit(Orange& orange) = 0;
virtual void Visit(Tomato& tomato) = 0;
...
};
class Plant
{
public:
virtual void Accept(PlantVisitor& visitor) = 0;
};
class Orange : public Plant
{
public:
void Accept(PlantVisitor& visitor) override
{
visitor.Visit(*this);
}
};
class Tomato : public Plant
{
public:
void Accept(PlantVisitor& visitor) override
{
visitor.Visit(*this);
}
};
This would allow you to do something like this:
class TypePrintVisitor : public PlantVisitor
{
public:
void Visit(Orange& orange) override
{
std::cout << "Orange\n";
}
void Visit(Tomato& tomato) override
{
std::cout << "Tomato\n";
}
};
std::vector<std::unique_ptr<Plant>> plants;
plants.emplace_back(std::make_unique<Orange>());
plants.emplace_back(std::make_unique<Tomato>());
TypePrintVisitor visitor;
for (size_t i = 0; i != plants.size(); ++i)
{
std::cout << "plant " << (i+1) << " is a ";
plants[i]->Accept(visitor);
}
I'm not sure the need for this doesn't indicate a design inefficiency, though.
Btw: if you've got multiple visitors and do not necessarily want to implement logic for every single type in all of them, you can add default implementations in PlantVisitor that call the function for the supertype, instead of declaring pure virtual functions.
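A sketch of that variant, assuming the Plant/Fruit/Vegetable hierarchy from the question is already defined: each leaf overload falls back to the overload for its supertype, so a concrete visitor only has to override the cases it cares about.
class PlantVisitor
{
public:
    virtual ~PlantVisitor() = default;
    virtual void Visit(Plant& plant) {} // last-resort default
    virtual void Visit(Fruit& fruit) { Visit(static_cast<Plant&>(fruit)); }
    virtual void Visit(Vegetable& vegetable) { Visit(static_cast<Plant&>(vegetable)); }
    virtual void Visit(Orange& orange) { Visit(static_cast<Fruit&>(orange)); }
    virtual void Visit(Apple& apple) { Visit(static_cast<Fruit&>(apple)); }
    virtual void Visit(Potato& potato) { Visit(static_cast<Vegetable&>(potato)); }
    virtual void Visit(Tomato& tomato) { Visit(static_cast<Vegetable&>(tomato)); }
};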
Polymorphism is all about not having to know about a specific type. Usually your design is flawed if you discover having to detect a specific type explicitly.
At very first:
void Consumer::consume(Plant p)
does not work as intended! The Plant object is accepted by value, i.e. its bytes are copied one by one; but only those of the Plant subobject, while everything belonging to a derived type is ignored and lost inside the consume function. This is called object slicing.
Polymorphism only works with references or pointers.
Now assume you want to do something like the following (incomplete code!):
void Consumer::consume(Plant& p) // must be reference or pointer!
{
p.onConsume();
generalCode1();
if(/* p is apple */)
{
appleSpecific();
}
else if(/* p is orange */)
{
orangeSpecific();
}
generalCode2();
}
You don't want to decide yourself upon type, you let the Plant class do the stuff for you, which means you extend its interface appropriately:
class Plant
{
public:
virtual void onConsume() = 0;
virtual void specific() = 0;
};
The code of the consume function will now be changed to:
void Consumer::consume(Plant& p) // must be reference or pointer!
{
p.onConsume();
generalCode1();
p.specific();
generalCode2();
}
You'll do so at any place you need specific behaviour (and specific is just a demo name; choose one that nicely describes what the function is actually intended to do).
p.onConsume();
generalCode1();
p.specific1();
generalCode2();
p.specific2();
generalCode3();
p.specific3();
generalCode4();
// ...
Of course, you now need to provide appropriate implementations in your derived classes:
class Orange:public Fruit
{
void onConsume() override
{ }
void specific() override
{
orangeSpecific();
}
};
class Apple:public Fruit
{
void onConsume() override
{ }
void specific() override
{
appleSpecific();
}
};
Note the addition of the override keyword, which protects you from accidentally creating an overload instead of actually overriding in case of a signature mismatch. It also helps you locate all the places that need changes if you ever have to change the function's signature in the base class.
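A tiny, made-up illustration of the difference (the second derived class is intentionally ill-formed):
struct B { virtual void f(int); };
struct D1 : B { void f(long); };          // compiles, but silently hides B::f instead of overriding it
struct D2 : B { void f(long) override; }; // compiler error: marked override but overrides nothing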

Is checking of object type really always a sign of bad design?

I have a source of lines of text, each of which is a message representing an object of some type. I'm making a parser for these lines, which should take a text line as input and give a ready-to-use object as output. So I have the following hierarchy of classes:
class Message
{
public:
virtual ~Message(){};
};
class ObjectTypeA : public Message
{/*...*/};
class ObjectTypeB : public Message
{/*...*/};
class ObjectTypeC : public Message
{/*...*/};
and here's how it's used:
std::shared_ptr<Message> parseLine(std::string& line);
void doWork()
{
std::string line;
while(getLine(line))
{
std::shared_ptr<Message> object=parseLine(line);
if(dynamic_cast<ObjectTypeA*>(object.get()))
doSomethingA(*static_cast<ObjectTypeA*>(object.get()));
else if(dynamic_cast<ObjectTypeB*>(object.get()))
doCompletelyUnrelatedProcessing(*static_cast<ObjectTypeB*>(object.get()));
else if(dynamic_cast<ObjectTypeC*>(object.get()))
doSomethingEvenMoreDifferent(*static_cast<ObjectTypeC*>(object.get()));
}
}
Here the parser would be a library function, and the objects don't know in advance how they will be processed. So I can't put the processing code into a virtual function of the Message implementations.
But many of the answers to this question say that if one needs to check the type of an object, it's a sign of bad design. I can't see what's bad here. Is there any better way to organize the solution?
First off, it's not always a sign of bad design. There are very few absolutes in "soft" things like "good" or "bad" design. Nevertheless, it does often indicate a different approach would be preferable, for one or more of these reasons: extensibility, ease of maintenance, familiarity, and similar.
In your particular case: One of the standard ways to make arbitrary class-specific processing possible without type switches or bloating/polluting the interface of the class is to use the Visitor pattern. You create a generic MessageVisitor interface, teach the Message subclasses to call into it, and implement it wherever you need to process them:
// Forward declarations so that MessageVisitor can be fully defined before the
// message types whose accept() calls into it.
class ObjectTypeA;
class ObjectTypeB;
class ObjectTypeC;
class MessageVisitor
{
public:
virtual void visit(ObjectTypeA &subject) {}
virtual void visit(ObjectTypeB &subject) {}
virtual void visit(ObjectTypeC &subject) {}
};
class Message
{
public:
virtual ~Message(){};
virtual void accept(MessageVisitor &visitor) = 0;
};
class ObjectTypeA : public Message
{
void accept(MessageVisitor &visitor) override
{ visitor.visit(*this); }
/*...*/
};
class ObjectTypeB : public Message
{
void accept(MessageVisitor &visitor) override
{ visitor.visit(*this); }
/*...*/
};
class ObjectTypeC : public Message
{
void accept(MessageVisitor &visitor) override
{ visitor.visit(*this); }
/*...*/
};
You would then use it like this:
void doWork()
{
struct DoWorkMessageVisitor : MessageVisitor
{
void visit(ObjectTypeA &subject) override { doSomethingA(subject); }
void visit(ObjectTypeB &subject) override { doSomethingB(subject); }
void visit(ObjectTypeC &subject) override { doSomethingC(subject); }
};
std::string line;
while(getLine(line))
{
std::shared_ptr<Message> object=parseLine(line);
DoWorkMessageVisitor v;
object->accept(v);
}
}
Feel free to customise this with const overloads etc. as necessary.
Note that accept cannot be implemented in the base class, because you need the correct type of *this in the invocation of visit. That is where the type switch has "moved".
An alternative is to make the visit functions in MessageVisitor pure virtual instead of empty. Then, if you need to add a new message type, it will automatically force you to update all places where such type-specific processing occurs.
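That alternative is just a different declaration of the visitor interface:
class MessageVisitor
{
public:
    // Adding an ObjectTypeD later means adding a pure virtual here,
    // and every concrete visitor fails to compile until it handles it.
    virtual void visit(ObjectTypeA &subject) = 0;
    virtual void visit(ObjectTypeB &subject) = 0;
    virtual void visit(ObjectTypeC &subject) = 0;
};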
You're really asking for opinions on what's good and bad design. Here's mine:
Yours is bad design, because you try to do something in another class that should be handled by the subclasses themselves; that's what polymorphism is for.
Your mother class should have a
virtual void do_stuff_that_is_specific_to_the_subclass(...) = 0;
method, which you'd implement in your subclasses.
Here the parser would be a library function, and the objects don't know in advance how they will be processed. So, I can't put the processing code to a virtual function of Message implementations.
Why not? You should simply have a
virtual void do_stuff_that_is_specific_to_the_subclass(parser&, ...) = 0;
method that uses the parser differently for each subclass. There's no reason that what you can do in your if/else clauses couldn't just be done in the subclasses, unless it breaks encapsulation, which I'd doubt, because the only reason you've got these objects is that you want to do specific things differently for different lines.
doSomethingA, doCompletelyUnrelatedProcessing and doSomethingEvenMoreDifferent could simply be overrides of a pure virtual function of the Message class. In your case that would be much more efficient and a better design solution.
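A sketch of that suggestion (the name process() is a placeholder, not from the original code):
class Message
{
public:
    virtual ~Message() {}
    virtual void process() = 0;
};
class ObjectTypeA : public Message
{
public:
    void process() override { /* what doSomethingA(...) did */ }
};
// ObjectTypeB and ObjectTypeC override process() likewise, and doWork()
// simply calls object->process() on whatever parseLine() returned.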

oop - C++ - Proper way to implement type-specific behavior?

Let's say I have a parent class, Arbitrary, and two child classes, Foo and Bar. I'm trying to implement a function to insert any Arbitrary object into a database, however, since the child classes contain data specific to those classes, I need to perform slightly different operations depending on the type.
Coming into C++ from Java/C#, my first instinct was to have a function that takes the parent as the parameter use something like instanceof and some if statements to handle child-class-specific behavior.
Pseudocode:
void someClass(Arbitrary obj){
obj.doSomething(); //a member function from the parent class
//more operations based on parent class
if(obj instanceof Foo){
//do Foo specific stuff
}
if(obj instanceof Bar){
//do Bar specific stuff
}
}
However, after looking into how to implement this in C++, the general consensus seemed to be that this is poor design.
If you have to use instanceof, there is, in most cases, something wrong with your design. – mslot
I considered the possibility of overloading the function with each type, but that would seemingly lead to code duplication. And, I would still end up needing to handle the child-specific behavior in the parent class, so that wouldn't solve the problem anyway.
So, my question is, what's the better way of performing operations that where all parent and child classes should be accepted as input, but in which behavior is dictated by the object type?
First, you want to take your Arbitrary by pointer or reference, otherwise you will slice off the derived class. Next, this sounds like a case for a virtual method.
void someClass(Arbitrary* obj) {
obj->insertIntoDB();
}
where:
class Arbitrary {
public:
virtual ~Arbitrary();
virtual void insertIntoDB() = 0;
};
So that the subclasses can provide specific overrides:
class Foo : public Arbitrary {
public:
void insertIntoDB() override
// ^^^ if C++11
{
// do Foo-specific insertion here
}
};
Now there might be some common functionality in this insertion between Foo and Bar... so you should put that as a protected method in Arbitrary. protected so that both Foo and Bar have access to it but someClass() doesn't.
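For example, a rough sketch of that layout (the helper's name is made up):
class Arbitrary {
public:
    virtual ~Arbitrary();
    virtual void insertIntoDB() = 0;
protected:
    void openConnectionAndPrepare(); // shared plumbing: visible to Foo and Bar,
                                     // but not to someClass()
};
class Foo : public Arbitrary {
public:
    void insertIntoDB() override
    {
        openConnectionAndPrepare();
        // Foo-specific insertion here
    }
};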
In my opinion, if at any place you need to write
if( is_instance_of(Derived1) )
//do something
else if ( is_instance_of(Derived2) )
//do something else
...
then it's a sign of bad design. The first and most straightforward issue is maintenance: you have to revisit this code whenever further derivation happens. Sometimes it's necessary, e.g. if all the classes are part of some library you cannot change, but in other cases you should avoid this kind of coding as far as possible.
Most often you can remove the need to check for a specific instance by introducing some new classes into the hierarchy. For example:
class BankAccount {};
class SavingAccount : public BankAccount { void creditInterest(); };
class CheckingAccount : public BankAccount { void creditInterest(); };
In this case there seems to be a need for an if/else statement to check for the actual object, as there is no corresponding creditInterest() in the BankAccount class. However, introducing a new class obviates the need for that check:
class BankAccount {};
class InterestBearingAccount : public BankAccount { virtual void creditInterest() = 0; };
class SavingAccount : public InterestBearingAccount { void creditInterest() override; };
class CheckingAccount : public InterestBearingAccount { void creditInterest() override; };
The issue here is that this will arguably violate SOLID design principles, given that any extension in the number of mapped classes would require new branches in the if statement, otherwise the existing dispatch method will fail (it won't work with any subclass, just those it knows about).
What you are describing looks well suited to inheritance polymorphicism - each of Arbitrary (base), Foo and Bar can take on the concerns of its own fields.
There is likely to be some common database plumbing which can be DRY'd up into the base method.
class Arbitrary { // Your base class
protected:
    virtual void mapFields(DbCommand& dbCommand) {
        // Map the base fields here
    }
public:
    void saveToDatabase() { // External caller invokes this on any subclass
        openConnection();
        DbCommand& command = createDbCommand();
        mapFields(command); // Polymorphic call
        executeDbTransaction(command);
    }
};
class Foo : public Arbitrary {
protected: // Hide implementation from external parties
    void mapFields(DbCommand& dbCommand) override {
        Arbitrary::mapFields(dbCommand);
        // Map Foo specific fields here
    }
};
class Bar : public Arbitrary {
protected:
    void mapFields(DbCommand& dbCommand) override {
        Arbitrary::mapFields(dbCommand);
        // Map Bar specific fields here
    }
};
If the base class, Arbitrary itself cannot exist in isolation, it should also be marked as abstract.
As StuartLC pointed out, the current design violates the SOLID principles. However, both his answer and Barry's answer have strong coupling with the database, which I do not like (should Arbitrary really need to know about the database?). I would suggest adding an additional abstraction and making the database operations independent of the data types.
One possible implementation may be like:
class Arbitrary {
public:
virtual std::string serialize();
static Arbitrary* deserialize();
};
Your database-related code would then look like this (note that taking the parameter by value, as Arbitrary obj, is wrong and would slice the object):
void someMethod(const Arbitrary& obj)
{
// ...
db.insert(obj.serialize());
}
You can retrieve the string from the database later and deserialize into a suitable object.
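A rough sketch of what one concrete override might look like (the format and the value member are invented purely for illustration):
#include <string>
class Foo : public Arbitrary {
public:
    std::string serialize() override
    {
        // Any format works as long as deserialize() can reconstruct the object;
        // a leading type tag is one simple way to know which subclass to rebuild.
        return "Foo:" + std::to_string(value);
    }
private:
    int value = 0;
};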
So, my question is, what's the better way of performing operations
that where all parent and child classes should be accepted as input,
but in which behavior is dictated by the object type?
You can use Visitor pattern.
#include <iostream>
using namespace std;
class Arbitrary;
class Foo;
class Bar;
class ArbitraryVisitor
{
public:
virtual void visitParent(Arbitrary& m) {};
virtual void visitFoo(Foo& vm) {};
virtual void visitBar(Bar& vm) {};
};
class Arbitrary
{
public:
virtual void DoSomething()
{
cout<<"do Parent specific stuff"<<endl;
}
virtual void accept(ArbitraryVisitor& v)
{
v.visitParent(*this);
}
};
class Foo: public Arbitrary
{
public:
virtual void DoSomething()
{
cout<<"do Foo specific stuff"<<endl;
}
virtual void accept(ArbitraryVisitor& v)
{
v.visitFoo(*this);
}
};
class Bar: public Arbitrary
{
public:
virtual void DoSomething()
{
cout<<"do Bar specific stuff"<<endl;
}
virtual void accept(ArbitraryVisitor& v)
{
v.visitBar(*this);
}
};
class SetArbitaryVisitor : public ArbitraryVisitor
{
void visitParent(Arbitrary& vm)
{
vm.DoSomething();
}
void visitFoo(Foo& vm)
{
vm.DoSomething();
}
void visitBar(Bar& vm)
{
vm.DoSomething();
}
};
int main()
{
Arbitrary *arb = new Foo();
SetArbitaryVisitor scv;
arb->accept(scv);
}

Enforcing correct parameter types in derived virtual function

I'm finding it difficult to describe this problem very concisely, so I've attached the code for a demonstration program.
The general idea is that we want a set of Derived classes that are forced to implement some abstract Foo() function from a Base class. Each of the derived Foo() calls must accept a different parameter as input, but all of the parameters should also be derived from a BaseInput class.
We see two possible solutions so far, neither we're very happy with:
Remove the Foo() function from the base class and reimplement it with the correct input types in each Derived class. This, however, removes the enforcement that it be implemented in the same manner in each derived class.
Do some kind of dynamic cast inside the receiving function to verify that the type received is correct. However, this does not prevent the programmer from making an error and passing the incorrect input data type. We would like the type to be passed to the Foo() function to be compile-time correct.
Is there some sort of pattern that could enforce this kind of behaviour? Is this whole idea breaking some sort of fundamental idea underlying OOP? We'd really like to hear your input on possible solutions outside of what we've come up with.
Thanks so much!
#include <iostream>
// these inputs will be sent to our Foo function below
class BaseInput {};
class Derived1Input : public BaseInput { public: int d1Custom; };
class Derived2Input : public BaseInput { public: float d2Custom; };
class Base
{
public:
virtual void Foo(BaseInput& i) = 0;
};
class Derived1 : public Base
{
public:
// we don't know what type the input is -- do we have to try to cast to what we want
// and see if it works?
virtual void Foo(BaseInput& i) { std::cout << "I don't want to cast this..." << std::endl; }
// prefer something like this, but then it's not overriding the Base implementation
//virtual void Foo(Derived1Input& i) { std::cout << "Derived1 did something with Derived1Input..." << std::endl; }
};
class Derived2 : public Base
{
public:
// we don't know what type the input is -- do we have to try to cast to what we want
// and see if it works?
virtual void Foo(BaseInput& i) { std::cout << "I don't want to cast this..." << std::endl; }
// prefer something like this, but then it's not overriding the Base implementation
//virtual void Foo(Derived2Input& i) { std::cout << "Derived2 did something with Derived2Input..." << std::endl; }
};
int main()
{
Derived1 d1; Derived1Input d1i;
Derived2 d2; Derived2Input d2i;
// set up some dummy data
d1i.d1Custom = 1;
d2i.d2Custom = 1.f;
d1.Foo(d2i); // this compiles, but is a mistake! how can we avoid this?
// Derived1::Foo() should only accept Derived1Input, but then
// we can't declare Foo() in the Base class.
return 0;
}
Since your Derived class is-a Base class, it should never tighten the base contract's preconditions: if it has to behave like a Base, it should accept BaseInput alright. This is known as the Liskov Substitution Principle.
Although you can do runtime checking of your argument, you can never achieve a fully type-safe way of doing this: your compiler may be able to match the DerivedInput when it sees a Derived object (static type), but it cannot know what subtype is going to be behind a Base object...
The requirements
DerivedX should take a DerivedXInput
DerivedX::Foo should be interface-equal to DerivedY::Foo
contradict: either the Foo methods are implemented in terms of the BaseInput, and thus have identical interfaces in all derived classes, or the DerivedXInput types differ, and they cannot have the same interface.
That's, in my opinion, the problem.
This problem occurred to me, too, when writing tightly coupled classes that are handled in a type-unaware framework:
class Fruit {};
class FruitTree {
virtual Fruit* pick() = 0;
};
class FruitEater {
virtual void eat( Fruit* ) = 0;
};
class Banana : public Fruit {};
class BananaTree : public FruitTree {
virtual Banana* pick() { return new Banana; }
};
class BananaEater : public FruitEater {
void eat( Fruit* f ){
assert( dynamic_cast<Banana*>(f)!=0 );
delete f;
}
};
And a framework:
struct FruitPipeLine {
FruitTree* tree;
FruitEater* eater;
void cycle(){
eater->eat( tree->pick() );
}
};
Now this proves to be a design that's too easily broken: there's no part of the design that aligns the trees with the eaters:
FruitPipeLine pipe = { new BananaTree, new LemonEater }; // compiles fine
pipe.cycle(); // crash, probably.
You may improve the cohesion of the design, and remove the need for virtual dispatching, by making it a template:
template<class F> class Tree {
F* pick(); // no implementation
};
template<class F> class Eater {
void eat( F* f ){ delete f; } // default implementation is possible
};
template<class F> class PipeLine {
Tree<F> tree;
Eater<F> eater;
void cycle(){ eater.eat( tree.pick() ); }
};
The implementations are really template specializations:
template<> class Tree<Banana> {
Banana* pick(){ return new Banana; }
};
...
PipeLine<Banana> pipe; // can't be wrong
pipe.cycle(); // no typechecking needed.
You might be able to use a variation of the curiously recurring template pattern.
class Base {
public:
// Stuff that doesn't depend on the input type.
};
template <typename Input>
class Middle : public Base {
public:
virtual void Foo(Input &i) = 0;
};
class Derived1 : public Middle<Derived1Input> {
public:
virtual void Foo(Derived1Input &i) { ... }
};
class Derived2 : public Middle<Derived2Input> {
public:
virtual void Foo(Derived2Input &i) { ... }
};
This is untested, just a shot from the hip!
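Usage would then look like this, reusing the input classes from the question; the mistake from the question's main() now fails to compile:
int main()
{
    Derived1 d1; Derived1Input d1i;
    Derived2 d2; Derived2Input d2i;
    d1.Foo(d1i);    // fine
    d2.Foo(d2i);    // fine
    // d1.Foo(d2i); // error: no conversion from Derived2Input to Derived1Input&
    return 0;
}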
If you don't mind the dynamic cast, how about this:
class BaseInput;
class Base
{
public:
void foo(BaseInput & x) { foo_dispatch(x); };
private:
virtual void foo_dispatch(BaseInput &) = 0;
};
template <typename TInput = BaseInput> // default value to enforce nothing
class FooDispatch : public Base
{
virtual void foo_dispatch(BaseInput & x)
{
foo_impl(dynamic_cast<TInput &>(x));
}
virtual void foo_impl(TInput &) = 0;
};
class Derived1 : public FooDispatch<Der1Input>
{
virtual void foo_impl(Der1Input & x) { /* your implementation here */ }
};
That way, you've built the dynamic type checking into the intermediate class, and your clients only ever derive from FooDispatch<DerivedInput>.
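Note that the check still happens at run time: a mismatched input surfaces as a std::bad_cast thrown out of foo_dispatch. Roughly (with Der2Input standing in for any other BaseInput-derived type):
Derived1 d1;
Der1Input good;
Der2Input wrong;
d1.foo(good); // ends up in Derived1::foo_impl(Der1Input &)
d1.foo(wrong); // throws std::bad_cast from the dynamic_cast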
What you are talking about are covariant argument types, and that is quite an uncommon feature in a language, as it breaks your contract: You promised to accept a base_input object because you inherit from base, but you want the compiler to reject all but a small subset of base_inputs...
It is much more common for programming languages to offer the opposite: contra-variant argument types, as the derived type will not only accept everything that it is bound to accept by the contract, but also other types.
At any rate, C++ does not offer contravariance in argument types either, only covariance in the return type.
C++ has a lot of dark areas, so it's hard to say any specific thing is undoable, but going from the dark areas I do know, this cannot be done without a cast. The virtual function specified in the base class requires the argument type to remain the same in all the children.
I am sure a cast can be used in a non-painful way, though, perhaps by giving the base class an enum 'type' member that is uniquely set by the constructor of each child that might inherit it. Foo() can then check that 'type' and determine which type it is before doing anything, raising an assertion if it is surprised by something unexpected. It isn't compile time, but it's the closest compromise I can think of while still having the benefit of requiring that a Foo() be defined.
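A rough sketch of that compromise, reading "the base class" as the input hierarchy's base; the enum, its values and the member names are all invented here:
#include <cassert>
class BaseInput
{
public:
    enum class Kind { D1, D2 };
    explicit BaseInput(Kind k) : kind(k) {}
    virtual ~BaseInput() = default;
    const Kind kind; // set once by the derived input's constructor
};
class Derived1Input : public BaseInput
{
public:
    Derived1Input() : BaseInput(Kind::D1) {}
    int d1Custom = 0;
};
class Base
{
public:
    virtual ~Base() = default;
    virtual void Foo(BaseInput& i) = 0; // still enforced for every derived class
};
class Derived1 : public Base
{
public:
    void Foo(BaseInput& i) override
    {
        assert(i.kind == BaseInput::Kind::D1); // be loudly surprised otherwise
        Derived1Input& input = static_cast<Derived1Input&>(i);
        // use input.d1Custom here
    }
};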
It's certainly restricted, but you can use/simulate covariance in constructor parameters.

C++ dynamic type construction and detection

There was an interesting problem in C++, but it was more about architecture.
There are many (10, 20, 40, etc) classes describing some characteristics (mix-in classes), for example:
struct Base { virtual ~Base() {} };
struct A : virtual public Base { int size; };
struct B : virtual public Base { float x, y; };
struct C : virtual public Base { bool some_bool_state; };
struct D : virtual public Base { std::string str; }; // needs <string>
// ....
The primary module declares and exports a function (for simplicity just function declarations without classes):
// .h file
void operate(Base *pBase);
// .cpp file
void operate(Base *pBase)
{
// ....
}
Any other module can have code like this:
#include "mixing.h"
#include "primary.h"
class obj1_t : public A, public C, public D {};
class obj2_t : public B, public D {};
// ...
void Pass()
{
obj1_t obj1;
obj2_t obj2;
operate(&obj1);
operate(&obj2);
}
The question is: how do you know the real type of a given object in operate() without using dynamic_cast or any type information stored in the classes (constants, etc.)? The operate() function is used on big arrays of objects within short time periods, dynamic_cast is too slow for that, and I don't want to add constants (enum obj_type { ... }) because that is not the OOP way.
// module operate.cpp
void some_operate(Base *pBase)
{
processA(pBase);
processB(pBase);
}
void processA(A *pA)
{
}
void processB(B *pB)
{
}
I cannot directly pass a pBase to these functions. And it's impossible to have all possible combinations of classes, because I can add new classes just by including new header files.
One solution came to mind: in the editor I can use a composite container:
struct CompositeObject
{
std::vector<Base*> parts;
};
But the editor does not need time optimization and can use dynamic_cast for parts to determine the exact type. In operate() I cannot use this solution.
So, is it possible to avoid using a dynamic_cast and type information to solve this problem? Or maybe I should use another architecture?
The real problem here is about what you are trying to achieve.
Do you want something like:
void operate(A-B& ) { operateA(); operateB(); }
// OR
void operate(A-B& ) { operateAB(); }
That is, do you want to apply an operation on each subcomponent (independently), or do you wish to be able to apply operations depending on the combination of components (much harder).
I'll take the first approach here.
1. Virtual ?
class Base { public: virtual void operate() = 0; };
class A: virtual public Base { public: virtual void operate() = 0; };
void A::operate() { ++size; } // yes, it's possible to define a pure virtual
class obj1_t: public A, public B
{
public:
virtual void operate() { A::operate(); B::operate(); }
};
Some more work, for sure, and notably I don't like the repetition much. But that's just one call through the vtable, so it should be one of the fastest solutions!
2. Composite Pattern
That would probably be the more natural thing here.
Note that you can perfectly use a template version of the pattern in C++!
template <class T1, class T2, class T3>
class BaseT: public Base, private T1, private T2, private T3
{
public:
void operate() { T1::operate(); T2::operate(); T3::operate(); }
};
class obj1_t: public BaseT<A,B,C> {};
Advantages:
no more need to repeat yourself! write operate once and for all (barring variadics...)
only 1 virtual call and no more virtual inheritance, so it is even more efficient than before
A, B and C can be of arbitrary type; they need not inherit from Base at all
edit: the operate methods of A, B and C may be inlined now that they are not virtual
Disadvantage:
Some more work on the framework if you don't have access to variadic templates yet, but it's feasible within a couple dozen lines (a variadic sketch follows below).
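With variadic templates the remaining boilerplate disappears as well. A sketch, assuming C++17 fold expressions, that Base declares a virtual operate() as in option 1, and that every mix-in provides an operate() member:
template <class... Mixins>
class BaseT : public Base, private Mixins...
{
public:
    void operate() override
    {
        (Mixins::operate(), ...); // call every mix-in's operate(), left to right
    }
};
class obj1_t : public BaseT<A, B, C> {};
class obj2_t : public BaseT<B, D> {};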
The first thing that comes to mind is to ask what you really want to achieve... but then again, my second thought is that you can use the visitor pattern. Runtime type information will implicitly be used to determine which class in the hierarchy provides the final overrider of the accept method, but you will not use that information explicitly (your code will not show any dynamic_cast, type_info, constants...).
Then again, my first thought comes back... since you are asking about the appropriateness of the architecture: what is it that you really want to achieve? Without knowledge of the actual problem you will only get generic answers like this one.
The usual object oriented way would be to have (pure) virtual functions in the base class that are called in operate() and that get overridden in the derived classes to execute code specific to that derived class.
Your problem is that you want to decide what to do based on more than one object's type. Virtual functions do this for one object (the one left of the . or ->) only. Doing so for more than one object is called multiple dispatch (for two objects it's also called double dispatch), and in C++ there's no built-in feature to deal with this.
Look at double dispatch, especially as done in the visitor pattern.