I am designing an interface for my project and am curious whether this idea can work.
Here is the situation:
At run time, I want to use an array of base class pointers to issue commands to different derived objects. The derived objects have different implementations (virtual functions). My problem is: if those objects support the interface to different degrees, how can I avoid writing empty functions?
For example, (my current code)
class Base { // this is the main interface
public:
    virtual void basicFun() = 0;
    virtual void funA(int input1) { return; }
    virtual void funB(int input2) { return; }
    virtual void funC(int input3) { return; }
};
class Derived1 : public Base { // this class supports only funA()
public:
    void basicFun() { /* .... */ }
    void funA(int input1) { /* do something */ }
};
class Derived2 : public Base { // this class supports both funA() and funB()
public:
    void basicFun() { /* .... */ }
    void funA(int input1) { /* do something */ }
    void funB(int input2) { /* do something */ }
};
class Derived3 : public Base { // this class supports everything
public:
    void basicFun() { /* .... */ }
    void funA(int input1) { /* do something */ }
    void funB(int input2) { /* do something */ }
    void funC(int input3) { /* do something */ }
};
Assumption: for a given object, an unsupported function will never be called, i.e. basePtr->funC() will never be called if the object pointed to by basePtr is a Derived1 or a Derived2.
The problem is:
I must define an empty function either in Base or in each Derived class if a uniform interface is desired.
If empty functions are defined as above, the compiler keeps warning me about unreferenced parameters (input1~input3). Of course I can turn the warning off, but I just don't like doing it that way.
So, is there any pattern I can use to achieve a uniform interface without defining empty functions? I have been thinking about this for a few days and it seems impossible, because funA(), funB() and funC() must all be in the interface so that I can use a Base pointer array to control all objects, which means that in Derived1, funB() and funC() must somehow be defined.
Thanks, and happy thanksgiving, and thanks for sharing your ideas.
Solti
Uniform interfaces are a good thing, which means you must implement all methods in an interface, even if some of them end up empty. There is no design pattern for this problem, because it's not a real problem in the first place.
Think about this for a moment: say you have a Car interface with methods Accelerate() and Brake(). A class that derives from Car must implement all methods. Would you want an object derived from Car to implement the Accelerate() method but not the Brake() method? It would be an amazingly unsafe Car!
An interface in the context of OOP must have a well-defined contract that is adhered to by both sides. In C++, this is enforced to a certain extent by requiring all pure virtuals to be implemented in derived classes. Trying to instantiate a class with unimplemented pure virtual methods results in a compilation error, assuming one doesn't use stupid tricks to get around it.
You object to creating empty methods because they cause compiler warnings. In your case, just omit the parameter name:
void funC(int) // Note lack of parameter name
{
}
Or comment the name out:
void funC(int /*input3*/)
{
}
Or even by using templates!
template<class T> void ignore( const T& ) { }
//...
void funC(int input3)
{
ignore(input3);
}
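If you can use C++17, the standard [[maybe_unused]] attribute also silences the warning (shown here on the same funC):
void funC([[maybe_unused]] int input3)
{
}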
Here's what I would have done: create a helper base class with empty default implementations of all pure virtual methods. Then you can derive from this base class instead of the main interface and then select which method to override.
// main interface, everything pure virtual
struct IBase
{
virtual ~IBase() {}
virtual void basicFun() = 0;
virtual void funA(int input1) = 0;
virtual void funB(int input2) = 0;
virtual void funC(int input3) = 0;
};
// helper base class with default implementations (empty)
class Base : public IBase
{
    void basicFun() {}
    void funA(int) {}   // parameter names omitted to avoid unused-parameter warnings
    void funB(int) {}
    void funC(int) {}
};
class Derived1 : public Base { // this class supports only funA()
    void funA(int input1) { /* do something */ }
};
class Derived2 : public Base { // this class supports both funA() and funB()
    void funA(int input1) { /* do something */ }
    void funB(int input2) { /* do something */ }
};
class Derived3 : public IBase { // this class supports everything
    void basicFun() { /* .... */ }
    void funA(int input1) { /* do something */ }
    void funB(int input2) { /* do something */ }
    void funC(int input3) { /* do something */ }
};
int main()
{
    // I always program to the interface
    Derived1 d1; Derived2 d2; Derived3 d3;
    IBase& b1 = d1; b1.basicFun(); b1.funA(1); b1.funB(2); b1.funC(3);
    IBase& b2 = d2; b2.basicFun(); b2.funA(1); b2.funB(2); b2.funC(3);
    IBase& b3 = d3; b3.basicFun(); b3.funA(1); b3.funB(2); b3.funC(3);
}
If you truly need non-uniform interfaces (think it over first), maybe the Visitor pattern is worth a try:
struct Visitor;
struct Base
{
    virtual ~Base() {}
    virtual void accept(Visitor& v);   // defined below, once Visitor is complete
};
struct InterfaceA : Base
{
    void accept(Visitor& v) override;
    virtual void MethodA() = 0;
};
struct InterfaceB : Base
{
    void accept(Visitor& v) override;
    virtual void MethodB() = 0;
};
struct InterfaceA2 : InterfaceA
{
    void accept(Visitor& v) override;
    void MethodA(); // Override, e.g. in terms of MethodC
    virtual void MethodC() = 0;
};
// Provide sensible default behavior. Note that the visitor class must be
// aware of the whole hierarchy of interfaces
struct Visitor
{
    virtual ~Visitor() {}
    virtual void visit(Base&) { throw "not implemented"; }
    virtual void visit(InterfaceA& x) { this->visit(static_cast<Base&>(x)); }
    virtual void visit(InterfaceA2& x) { this->visit(static_cast<InterfaceA&>(x)); }
    virtual void visit(InterfaceB& x) { this->visit(static_cast<Base&>(x)); }
};
// accept can only be defined once Visitor is complete:
inline void Base::accept(Visitor& v)        { v.visit(*this); }
inline void InterfaceA::accept(Visitor& v)  { v.visit(*this); }
inline void InterfaceA2::accept(Visitor& v) { v.visit(*this); }
inline void InterfaceB::accept(Visitor& v)  { v.visit(*this); }
// Concrete visitor: you don't have to override all the functions. The unimplemented
// ones will default to what you expect.
struct MyAction : Visitor
{
void visit(InterfaceA& x)
{
x.MethodA();
}
void visit(InterfaceB& x)
{
x.MethodB();
}
};
Usage:
Base* x = getSomeConcreteObject();
MyAction f;
x->accept(f);
This will invoke either MethodA or MethodB depending on the runtime type of x. The visitor as implemented doesn't force you to override every function, and falls back to the action for a base class whenever a more specific one is not implemented. Eventually, if you fail to provide an action in a visitor for some class, it will default to throw "not implemented".
Const correctness may force you to distinguish between Visitor and ConstVisitor, for which all the accept methods would be const.
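A rough sketch of what that might look like (the ConstVisitor name and the const accept overload are only illustrative):
struct ConstVisitor;
struct Base
{
    virtual ~Base() {}
    // ... accept(Visitor&) as before, plus a const counterpart:
    virtual void accept(ConstVisitor& v) const;
};
struct ConstVisitor
{
    virtual ~ConstVisitor() {}
    virtual void visit(const Base&) { throw "not implemented"; }
    // ... one visit(const X&) overload per interface, mirroring Visitor
};
inline void Base::accept(ConstVisitor& v) const { v.visit(*this); }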
Related
Given a base class which has some virtual functions, can anyone think of a way to force a derived class to override exactly one of a set of virtual functions, at compile time? Or an alternative formulation of a class hierarchy that achieves the same thing?
In code:
struct Base
{
// Some imaginary syntax to indicate the following are a "pure override set"
// [
virtual void function1(int) = 0;
virtual void function2(float) = 0;
// ...
// ]
};
struct Derived1 : Base {}; // ERROR not implemented
struct Derived2 : Base { void function1(int) override; }; // OK
struct Derived3 : Base { void function2(float) override; }; // OK
struct Derived4 : Base // ERROR too many implemented
{
void function1(int) override;
void function2(float) override;
};
I'm not sure I really have an actual use case for this, but it occurred to me as I was implementing something that loosely follows this pattern and thought it was an interesting question to ponder, if nothing else.
No, but you can fake it.
Base has non-virtual int and float methods that forward to a single pure virtual method taking a std::variant.
Two helper classes, one for int and one for float, implement that std::variant method, forwarding the matching case on to a pure virtual int or float implementation.
The helper is in charge of dealing with the 'wrong type' case.
Derived classes inherit from one helper or the other, and implement only the int or the float version.
#include <variant>

struct Base
{
void function1(int x) { vfunction(x); }
void function2(float x) { vfunction(x); }
virtual void vfunction(std::variant<int,float>) = 0;
};
struct Helper1:Base {
void vfunction(std::variant<int,float> v) final {
if (std::holds_alternative<int>(v))
function1_impl( std::get<int>(v) );
}
virtual void function1_impl(int x) = 0;
};
struct Helper2:Base {
void vfunction(std::variant<int,float> v) final {
if (std::holds_alternative<float>(v))
function2_impl( std::get<float>(v) );
}
virtual void function2_impl(float x) = 0;
};
struct Derived1 : Base {}; // ERROR not implemented
struct Derived2 : Helper1 { void function1_impl(int) override; }; // OK
struct Derived3 : Helper2 { void function2_impl(float) override; }; // OK
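A quick hypothetical usage of the helpers (assuming the _impl overrides are defined somewhere):
int main()
{
    Derived2 d2;
    d2.function1(42);    // forwarded to Derived2::function1_impl
    d2.function2(1.5f);  // reaches Helper1::vfunction, which silently drops the float case
    Derived3 d3;
    d3.function2(1.5f);  // forwarded to Derived3::function2_impl
}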
This uses https://en.wikipedia.org/wiki/Non-virtual_interface_pattern -- the interface contains non-virtual methods, whose details can be overridden to make them behave differently.
If you are afraid people will override vfunction you can use the private lock technique, and/or just give it a name like private_implementation_detail_do_not_implement and trust your code review process.
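A minimal sketch of that idea, reusing the name suggested above (everything else here is illustrative, not a fixed API):
#include <variant>

struct Base
{
    void function1(int x)   { private_implementation_detail_do_not_implement(x); }
    void function2(float x) { private_implementation_detail_do_not_implement(x); }
private:
    // Derived classes may still override a private virtual, but they cannot call
    // Base's version by name, and the name discourages casual overriding.
    virtual void private_implementation_detail_do_not_implement(std::variant<int, float>) = 0;
};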
Or an alternative formulation of a class hierarchy that achieves the same thing?
One option is to have an intermediate base class that implements one function.
struct Base
{
virtual ~Base() {};
virtual void function(int) = 0;
virtual void function(float) = 0;
};
template <typename T>
struct TBase : Base
{
virtual void function(T) override {}
};
struct Derived1 : Base {};
struct Derived2 : TBase<float> { void function(int) override {} };
struct Derived3 : TBase<int> { void function(float) override {} };
int main()
{
Derived1 d1; // ERROR. Virtual functions are not implemented
Derived2 d2; // OK.
Derived3 d3; // OK.
}
Note that the functions are named function in this approach, not function1 and function2.
Your classes will remain abstract if you don't override all the pure virtual methods. You have to override all of them if you want to instantiate the object.
I get an error when I try to compile this code.
#include <string>

class SumTimer;      // assumed to be defined elsewhere
class SumSelector;   // assumed to be defined elsewhere

class FunctionVisitor
{
public:
virtual ~FunctionVisitor() = default;
virtual void visit(SumTimer&) = 0;
virtual void visit(SumSelector&) = 0;
};
class timerVisitor : public FunctionVisitor
{
private:
std::string variableName;
std::string variableValue;
public:
timerVisitor(std::string varName, std::string varValue) : variableName(varName), variableValue(varValue) { }
virtual void visit(SumTimer& fun) override;
};
class selectorVisitor : public FunctionVisitor
{
private:
std::string variableName;
std::string variableValue;
public:
selectorVisitor(std::string varName, std::string varValue) : variableName(varName), variableValue(varValue) { }
virtual void visit(SumSelector& sel) override;
};
The reason is that I have pure virtual functions in the base class but each subclass only defines one of the base class's virtual functions.
Can I have pure virtual functions in this case?
Every class that inherits from an abstract class in C++ and doesn't override all of its pure virtual functions is itself abstract and cannot be instantiated, either locally or dynamically. You can either override the functions to do nothing (or throw an exception):
virtual void visit(SumTimer& fun) override {}
or make the abstract class concrete and the functions do nothing by default
class FunctionVisitor
{
public:
virtual ~FunctionVisitor() = default;
virtual void visit(SumTimer&) {}
virtual void visit(SumSelector&) {}
};
What do you want to happen if you call a different function? E.g. if you call visit(SumSelector&) on a timerVisitor?
@user253751 I don't want any action in that case.
If you don't want anything to happen when the function is called but not overridden, then make the base class have a function that does nothing. Instead of
virtual void visit(SumTimer&) = 0;
write:
virtual void visit(SumTimer&) {}
Pure virtual (= 0) means that you want to force derived classes to override the function. If you don't want to do that, then don't make them pure virtual!
Suppose that I have a hierarchy of several classes:
class A {
public:
virtual void DoStuff() = 0;
};
class B : public A {
public:
// Does some work
void DoStuff() override;
};
class C : public B {
public:
// Calls B::DoStuff and does other work
void DoStuff() override;
};
It can naively be implemented, with each override remembering to call the base version:
void C::DoStuff() {
    B::DoStuff();
    // ... C's own work
}
This implementation has a serious problem, I believe: one always has to remember to call the base implementation when overriding.
Alternative:
#include <functional>
#include <utility>
#include <vector>

class A {
public:
void DoStuff() {
for (auto& func: callbacks_) {
func(this);
}
}
virtual ~A() = default;
protected:
template <class T>
void AddDoStuff(T&& func) {
callbacks_.emplace_back(std::forward<T>(func));
}
private:
template <class... Args>
using CallbackHolder = std::vector<std::function<void(Args...)>>;
CallbackHolder<A*> callbacks_;
};
Usage:
class Derived : public A {
public:
Derived() {
AddDoStuff([](A* this_ptr){
static_cast<Derived*>(this_ptr)->DoStuffImpl();
});
}
private:
void DoStuffImpl();
};
However, I believe that it has a fair amount of overhead when actually calling DoStuff(), compared to the first implementation. In the use cases I have seen, possibly long construction of objects is not a problem (one might also try to implement something like a small-vector optimization if desired).
Also, I believe that 3 definitions for each DoStuff method is a little too much boilerplate.
I know that it can be solved quite effectively by using an inheritance pattern similar to CRTP, and one can hide the template-based solution behind an interface class (A in the example), but I keep wondering -- shouldn't there be an easier solution?
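For reference, here is a rough sketch of the kind of CRTP-style chaining I mean (the ChainDoStuff helper and the DoStuffImpl names are made up purely for illustration):
#include <iostream>

struct A {
    virtual ~A() = default;
    virtual void DoStuff() = 0;
};

// CRTP-style helper: runs the previous layer's DoStuff, then this layer's DoStuffImpl.
template <class Derived, class Previous>
struct ChainDoStuff : Previous {
    void DoStuff() override {
        Previous::DoStuff();                          // base work happens automatically
        static_cast<Derived&>(*this).DoStuffImpl();   // then this layer's work
    }
};

struct B : A {
    void DoStuff() override { DoStuffImpl(); }
    void DoStuffImpl() { std::cout << "B\n"; }
};

struct C : ChainDoStuff<C, B> {
    void DoStuffImpl() { std::cout << "C\n"; }
};

int main() {
    C c;
    c.DoStuff();   // prints "B" then "C" without C ever naming B
}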
I'm interested in a good implementation of calling the DERIVED implementation FROM the BASE, if and only if the derived class exists and has an overriding method, for long inheritance chains (or something equivalent).
Thanks!
Edit:
I am aware of the idea described in @Jarod42's answer, and I don't find it appropriate because I believe it is ugly for long inheritance chains -- one has to use a different method name for each level of the hierarchy.
You might change your class B to something like:
class A {
public:
virtual ~A() = default;
virtual void DoStuff() = 0;
};
class B : public A {
public:
void DoStuff() final { /*..*/ DoExtraStuff(); }
virtual void DoExtraStuff() {}
};
class C : public B {
public:
void DoExtraStuff() override;
};
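With that, a hypothetical caller never has to remember the base call (assuming C::DoExtraStuff is defined somewhere):
int main()
{
    C c;
    c.DoStuff();   // runs B's own work, then the overridden DoExtraStuff
}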
I am not sure if I understood correctly, but this seems to be addressed pretty well by the "make the public interface non-virtual, virtualize private functions instead" advice.
I think it originated in the Open-Closed principle. The technique is as follows:
#include <iostream>
class B {
public:
void f() {
before_f();
f_();
};
private:
void before_f() {
std::cout << "will always be before f";
}
virtual void f_() = 0;
};
class D : public B{
private:
void f_() override {
std::cout << "derived stuff\n";
}
};
int main() {
D d;
d.f();
return 0;
}
You essentially deprive descendant classes of the ability to override the public interface; they can only customize the exposed parts. The base class B strictly enforces that the required method is called before whatever the derived implementation wants to do. As a bonus, you don't have to remember to call the base class.
Of course you could make f virtual as well and let D decide.
My classes are:
Base
    Derived_A
    Derived_B
Parent
    Child_One
    Child_Two
Base has two signature functions:
virtual void foo( const Parent& ) = 0;
virtual void bar( const Base& ) = 0;
which is what other parts of the program expect.
The problem is:
Derived_A treats Child_One and Child_Two the same. But Derived_B treats them differently.
How should I implement this?
One way is to find out what kind of object was passed to Derived_B::foo. This would apparently be "a design flaw".
The other way I tried is to change the signature functions as:
class Derived_A;
class Derived_B;

class Base
{
    // virtual void bar( const Base& ) = 0;
    virtual void bar( const Derived_A& ) = 0;
    virtual void bar( const Derived_B& ) = 0;
};
class Derived_A : public virtual Base
{
    virtual void foo( const Parent& ) = 0;
};
class Derived_B : public virtual Base
{
    virtual void foo( const Child_A& ) = 0;
    virtual void foo( const Child_B& ) = 0;
};
But now the bar function cannot use Base::foo, so I have to write the bar function twice, although the code is exactly the same.
Are there any other ways to deal with the problem? which one do you suggest?
P.S. I couldn't think of a good title. Please feel free to modify it.
The problem you are describing is called Double Dispatch. The link describes the problem and a few possible approaches to a solution (including polymorphic function signatures and the visitor pattern).
Without details of what the two type hierarchies' relation is with each other and how they interact, it's impossible to say what approach is appropriate. I've composed an overview of the other answers and another viable alternative that can be extended to the visitor pattern which was mentioned in a comment.
Performing the polymorphic behaviour in the children implementing a virtual function in Parent as already suggested by Joey Andres is quite typical object oriented solution for this problem in general. Whether it's appropriate, depends on the responsibilities of the objects.
The type detection as suggested by Olayinka and already mentioned in your question certainly smells kludgy, but depending on details, can be the minimum of N evils. It can be implemented with member function returning an enum (I guess that's what Olayinka's answer tries to represent) or with a series of dynamic_casts as shown in one of the answers in the question you linked.
A trivial solution could be to overload foo in Base:
struct Base {
    virtual void foo(const Parent&) = 0;
    virtual void foo(const Child_Two&) = 0;
};
struct Derived_A : Base {
    void foo(const Parent& p) {
        // treat same
    }
    void foo(const Child_Two& p) {
        foo(static_cast<const Parent&>(p));
    }
};
struct Derived_B : Base {
    void foo(const Parent& p) {
        // treat Child_One (and others)
    }
    void foo(const Child_Two& p) {
        // treat Child_Two
    }
};
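As a quick illustration (hypothetical usage, assuming Child_Two derives from Parent as in the question), the overload is chosen from the static type of the argument:
Derived_B d;
Child_Two c;
const Parent& p = c;
d.foo(c);   // calls foo(const Child_Two&)
d.foo(p);   // calls foo(const Parent&), even though p actually refers to a Child_Two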
If there are other subtypes of Base that treat Child_One and Child_Two the same, then the implementation of foo(const Child_Two&) may be put in Base to avoid duplication.
The catch of this approach is that foo must be called with a reference of proper static type. The call will not resolve based on the dynamic type. That may be better or worse for your design. If you need polymorphic behaviour, you can use the visitor pattern which essentially adds virtual dispatch on top of the solution above:
struct Parent;
struct Child_A;
struct Child_B;

struct Base {
    void foo(Parent& p);   // defined below, once Parent is complete
    virtual void visit(Child_A&) = 0;
    virtual void visit(Child_B&) = 0;
};
struct Parent {
    virtual void accept(Base&) = 0;
};
struct Child_A : Parent {
    void accept(Base& v) {
        v.visit(*this);
    }
};
// Child_B similarly
inline void Base::foo(Parent& p) { p.accept(*this); }
struct Derived_A: Base {
void treat_same(Parent&) {
// ...
}
void visit(Child_A& a) {
treat_same(a);
}
void visit(Child_B& b) {
treat_same(b);
}
};
struct Derived_B: Base {
void visit(Child_A&) {
// ...
}
void visit(Child_B&) {
// ...
}
};
There's a bit more boilerplate, but since you seem very averse to implementing the behaviour in the children, this may be good approach for you.
You could easily make a virtual foo method in Parent. Since you want Derived_A to treat all of Parent's subclasses the same, why not implement a method that does just that in Parent? That is the most logical place, since chances are that if you want to do the same thing to both of them, then both of them must have similar data, which lives in Parent.
class Parent{
public:
    virtual ~Parent() {}
    virtual void treatSame(){
        // Some operation that treats both Child_A and Child_B the same way.
    }
    virtual void foo() = 0;
};
Since you want Derived_B to do different operations in both Child_A and Child_B, take advantage of polymorphism. Consider the rest of the classes below:
class Child_A : public Parent{
public:
    virtual void foo(){
        // foo that is designed specifically for Child_A.
    }
};
class Child_B : public Parent{
public:
    virtual void foo(){
        // foo that is designed specifically for Child_B.
    }
};
class Base{
public:
    virtual void foo(Parent&) = 0;
    virtual void bar(Base&) = 0;
};
class Derived_A : public Base
{
public:
    virtual void foo( Parent& p ){
        p.treatSame();
    }
    virtual void bar( Base& ) { /* not relevant to this example */ }
};
class Derived_B : public Base
{
public:
    virtual void foo( Parent& p ){
        p.foo(); // Calls the appropriate function, thanks to polymorphism.
    }
    virtual void bar( Base& ) { /* not relevant to this example */ }
};
A possible usage is the following:
int main(){
Child_A a;
Child_B b;
Derived_A da;
da.foo(a); // Calls a.treatSame();
da.foo(b); // Calls b.treatSame();
Derived_B db;
db.foo(a); // Calls a.foo();
db.foo(b); // Calls b.foo();
}
Note that this only works when the parameters are pointers or references (I prefer references when possible). Virtual dispatch (selecting the appropriate function) won't work otherwise.
I'm not sure of the syntax but you get the gist.
class Parent;
class Child_A;
class Child_B;

class Base{
public:
    virtual void bar( Base& ) = 0;
    virtual void foo( Parent& ) = 0;
};
class Derived_A : public virtual Base{
public:
    virtual void foo( Parent& ) = 0;
};
class Derived_B : public virtual Base{
public:
    void foo( Parent& parent ) override;   // defined below, once Parent is complete
    virtual void foo_A( Child_A& ) = 0;
    virtual void foo_B( Child_B& ) = 0;
};
class Parent{
public:
    enum Type { TYPE_A, TYPE_B };
    virtual int get_type() = 0;
};
class Child_A : public virtual Parent{
public:
    int get_type() override { return TYPE_A; }
};
class Child_B : public virtual Parent{
public:
    int get_type() override { return TYPE_B; }
};
// a switch-case also works
inline void Derived_B::foo( Parent& parent ){
    if (parent.get_type() == Parent::TYPE_A)
        foo_A(dynamic_cast<Child_A&>(parent));
    else
        foo_B(dynamic_cast<Child_B&>(parent));
}
I have a base class and n derived classes. I want to instantiate a derived class and pass it to a function that takes the base class as an argument. Inside the function, I find out which type of derived class it is by using dynamic_cast, but I don't want to use several if-else statements. Instead, I would like to know if there is a way to find out which derived class it is in order to cast it.
Here I leave my code as an example.
class animal{
public:
virtual ~animal() {}
int eyes;
};
class dog: public animal{
public:
int legs;
int tail;
};
class fish: public animal{
public:
int mostage;
};
void functionTest(animal* a){
if(dynamic_cast<fish*>(a) != NULL){
do_something();
}
else if(dynamic_cast<dog*>(a) != NULL){
do_something();
}
};
I would like to have a more general approach to this. Something like dynamic_cast(a). Thank you!
It's great to do this for quick drafts if you need to demonstrate something in a few minutes, but usually you try to avoid using dynamic_cast this way - it can lead to extremely high maintenance costs if used in the wrong places. Various patterns are available, such as a simple method overload, the Visitor pattern, or a virtual "GetType" function (which could be implemented with the curiously recurring template pattern, if you like patterns).
I'll list all 3 approaches. The first one is by far the most straightforward, and easiest to use. The advantages of the other 2 is that each of them moves the decision of what to do to a different part of the code, which can be a huge benefit (or drawback).
Lets assume this is what you want to do:
void functionTest(animal* a)
{
if(dynamic_cast<fish*>(a) != NULL)
blub();
else if(dynamic_cast<dog*>(a) != NULL)
bark();
};
Simple virtual function approach:
class animal {
public:
virtual ~animal() {}
virtual void do_something() = 0;
int eyes;
};
class dog : public animal {
public:
virtual void do_something() { bark(); } // use override in C++11
int legs;
int tail;
};
class fish: public animal {
public:
virtual void do_something() { blub(); } // use override in C++11
int mostage;
};
void functionTest(animal* a)
{
if (a) a->do_something();
};
Visitor approach:
class IVisitor {
public:
virtual ~IVisitor(){}
virtual void visit(const fish&){}
virtual void visit(const dog&){}
virtual void visit(const animal&){}
};
class animal {
public:
virtual ~animal() {}
virtual void accept(IVisitor& visitor) = 0;
int eyes;
};
class dog : public animal {
public:
virtual void accept(IVisitor& visitor) { visitor.visit(*this); } // use override in C++11
int legs;
int tail;
};
class fish : public animal {
public:
virtual void accept(IVisitor& visitor) { visitor.visit(*this); } // use override in C++11
int mostage;
};
class MyVisitor : public IVisitor {
public:
virtual void visit(const fish&) { blub(); } // use override in C++11
virtual void visit(const dog&) { bark(); } // use override in C++11
};
void functionTest(animal* a)
{
if (a)
{
MyVisitor v;
a->accept(v);
}
};
GetType approach, with CRTP spice:
#include <typeinfo>
#include <type_traits>

class animal {
public:
    virtual ~animal() {}
    virtual const std::type_info& getType() const = 0; // careful: typeinfo is tricky if shared libs or dlls are involved
    int eyes;
};
template <class T>
class BaseAnimal : public animal {
public:
    // C++11. The check lives in the constructor because T is still incomplete
    // at class scope when BaseAnimal<T> is used as a base class.
    BaseAnimal() {
        static_assert(std::is_base_of<BaseAnimal, T>::value,
                      "Class not deriving from BaseAnimal");
    }
    virtual const std::type_info& getType() const { return typeid(T); }
};
class dog : public BaseAnimal<dog> {
public:
int legs;
int tail;
};
class fish : public BaseAnimal<fish> {
public:
int mostage;
};
void functionTest(animal* a)
{
if (!a)
return;
if (a->getType() == typeid(fish))
blub();
else if (a->getType() == typeid(dog))
bark();
};
Notice that you should consider the above examples to be pseudo-code. For best practices you will need to look up the patterns. Also, the curiously recurring template pattern can also be used in the second approach, or it can be easily removed from the third. It's just for convenience in these cases.
You may use virtual functions for that:
class animal{
public:
virtual ~animal() {}
virtual void do_thing() = 0;
};
class dog: public animal{
public:
void do_thing() override { std::cout << "I'm a dog" << std::endl; }
};
class fish: public animal{
public:
void do_thing() override { std::cout << "I'm a fish" << std::endl; }
};
And then
void functionTest(animal& a){
a.do_thing();
}
As an alternative, if you want to avoid having too many virtual functions, you may use the visitor pattern.
Let me strongly urge you NOT to do what you have here, and to follow the very wise advice everyone has given to use polymorphism (i.e. virtual functions). The approach you've outlined could be made to work, but it works against the tools the language provides. What you are trying to do is exactly why the language has virtual functions.
With your method, if you add a new subclass of animal then you also have to change functionTest(), and functionTest() is doing what the compiler would do for virtual functions anyway, but in a much clumsier and less efficient way.
Using virtual functions, all you have to do is implement do_something() in the new subclass and the compiler takes care of the rest.
Don't use dynamic_cast<>() for this. That's not what it is for.
Consider implementing this "switch" as virtual functions.
If you do not want that, you can either use dynamic_cast as in your example, or use the typeid operator to build a mapping from the result of typeid to a function that implements the do_something code.
However, I would not recommend that, as you just end up with a hand-coded vtable. It is better to use virtual functions and let the compiler generate the mapping.
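For completeness, a sketch of what that hand-coded mapping might look like (using std::type_index; bark() and blub() stand in for the placeholder functions from the earlier examples):
#include <functional>
#include <typeindex>
#include <unordered_map>

void bark();   // placeholders, as in the examples above
void blub();

void functionTest(animal* a)
{
    static const std::unordered_map<std::type_index, std::function<void()>> actions = {
        { std::type_index(typeid(fish)), blub },
        { std::type_index(typeid(dog)),  bark },
    };
    if (!a)
        return;
    auto it = actions.find(std::type_index(typeid(*a)));   // typeid(*a) yields the dynamic type
    if (it != actions.end())
        it->second();
}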
For additional reading, I recommend Herb Sutter's article Type inference vs static/dynamic typing. He mentions Boost variant and Boost any, which might be possible alternatives for your problem.
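As a taste of the variant-based idea (shown here with std::variant rather than the Boost classes, purely as an illustration; note that the types no longer need a common base class):
#include <iostream>
#include <variant>

void bark() { std::cout << "woof\n"; }
void blub() { std::cout << "blub\n"; }

struct dog  { int legs = 4; };
struct fish { int mostage = 0; };

using Animal = std::variant<dog, fish>;

struct DoSomething {
    void operator()(dog&)  { bark(); }
    void operator()(fish&) { blub(); }
};

void functionTest(Animal& a)
{
    std::visit(DoSomething{}, a);   // calls the overload matching the held alternative
}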
The "classical" type-switch by Stroustrup may be suitable for your needs:
https://parasol.tamu.edu/mach7/
https://parasol.tamu.edu/~yuriys/pm/
Basically it will let you do a switch-case-like dispatch based on object type, using one of three different implementations.