C++: Require static function in abstract class

I am trying to write a C++ abstract class and I can't figure out how to require implementers of this class to contain a static function.
For example:
class AbstractCoolThingDoer
{
    virtual void dosomethingcool() = 0; // now if you implement this class
                                        // you better do this
};

class CoolThingDoerUsingAlgorithmA : public AbstractCoolThingDoer
{
    void dosomethingcool()
    {
        // do something cool using Algorithm A
    }
};

class CoolThingDoerUsingAlgorithmB : public AbstractCoolThingDoer
{
    void dosomethingcool()
    {
        // do the same thing using Algorithm B
    }
};
Now I'd like to do the coolthing without the details of how coolthing gets done. So I'd like to do something like
AbstractCoolThingDoer::dosomethingcool();
without needing to know how the coolthing gets done, but this seems to require a function that is both virtual and static which is of course a contradiction.
The rationale is that CoolThingDoerUsingAlgorithmB may be written later and hopefully the software that needs cool things done won't have to be rewritten.
EDIT: Not sure I was clear on what I'm trying to accomplish. I have 3 criteria that I'm looking to satisfy:
1. A library that uses AbstractCoolThingDoer does not need to be rewritten, ever, even when another CoolThingDoer is written that the library has never heard of.
2. If you try to write a CoolThingDoer that doesn't conform to the required structure, then the executable that uses the library won't compile.
3. A CoolThingDoer has some static functions that are required.
I'm probably chasing down a poor design, so please point me to a better one. Am I needing a factory?

Maybe, something like this will help (see ideone.com example):
#include <iostream>
class A
{
protected:
virtual void do_thing_impl() = 0;
public:
virtual ~A(){}
static void do_thing(A * _ptr){ _ptr->do_thing_impl(); }
};
class B : public A
{
protected:
void do_thing_impl(){ std::cout << "B impl" << std::endl; }
};
class C : public A
{
protected:
void do_thing_impl(){ std::cout << "C impl" << std::endl; }
};
int main()
{
B b_;
C c_;
A::do_thing(&b_);
A::do_thing(&c_);
return (0);
}
EDIT: It seems to me the OP does not need run-time polymorphism, but rather compile-time polymorphism without needing a class instance (i.e. static functions whose implementation is hidden in the derived classes, no instance required). Hope the code below helps to solve it (example on ideone.com):
#include <iostream>
template <typename Derived>
struct A
{
static void do_thing() { Derived::do_thing(); }
};
struct B : public A<B>
{
friend A<B>;
protected:
static void do_thing() { std::cout << "B impl" << std::endl; }
};
struct C : public A<C>
{
friend A<C>;
protected:
static void do_thing() { std::cout << "C impl" << std::endl; }
};
int main()
{
A<B>::do_thing();
A<C>::do_thing();
return (0);
}
EDIT #2: To force a compile-time failure when the user does not adhere to the desired pattern, here is a slight modification at ideone.com:
#include <iostream>
template <typename Derived>
struct A
{
static void do_thing() { Derived::do_thing_impl(); }
};
struct B : public A<B>
{
friend A<B>;
protected:
static void do_thing_impl() { std::cout << "B impl" << std::endl; }
};
struct C : public A<C>
{
friend A<C>;
protected:
static void do_thing_impl() { std::cout << "C impl" << std::endl; }
};
struct D : public A<D>
{
friend A<D>;
};
int main()
{
A<B>::do_thing();
A<C>::do_thing();
A<D>::do_thing(); // This will not compile.
return (0);
}

This looks to me like the right place to implement the bridge pattern. Maybe this is what you are (perhaps unconsciously) trying to achieve. In short, you specify an interface and its implementations; a call to your do_thing method then calls an implementation through a pointer to the implementer class.
C++ example
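A minimal sketch of that idea, with illustrative names based on the question (not the original linked example): the abstraction forwards to an implementer object, so new algorithms can be added without touching the calling code.

#include <iostream>
#include <memory>

// Implementor side: the "how" of doing the cool thing.
struct CoolThingImpl
{
    virtual ~CoolThingImpl() = default;
    virtual void do_thing() = 0;
};

struct AlgorithmA : CoolThingImpl
{
    void do_thing() override { std::cout << "using Algorithm A\n"; }
};

struct AlgorithmB : CoolThingImpl
{
    void do_thing() override { std::cout << "using Algorithm B\n"; }
};

// Abstraction side: the interface the client code depends on.
class CoolThingDoer
{
    std::unique_ptr<CoolThingImpl> impl_;
public:
    explicit CoolThingDoer(std::unique_ptr<CoolThingImpl> impl) : impl_(std::move(impl)) {}
    void dosomethingcool() { impl_->do_thing(); } // forwards to the implementer
};

int main()
{
    CoolThingDoer doer{std::make_unique<AlgorithmB>()};
    doer.dosomethingcool(); // client code is unchanged when new algorithms are added
}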

Related

How to avoid paying for interface virtual methods during inversion of control in C++?

I work on a C++ code base which has a great architecture, very decoupled and easy to test. One thing that really bothers me, though, is paying for virtual methods when most of the time they aren't actually needed, because the correct derived class is chosen once, during dependency injection, and dynamic polymorphism isn't needed. For example:
#include <iostream>
#include <memory>
class IDog{
public:
virtual void bark() = 0;
virtual ~IDog() = default;
};
class Dog : public IDog {
public:
void bark() override {std::cout << "woof" << std::endl;}
};
void makeDogSound(std::unique_ptr<IDog> dog){
dog->bark();
}
//prod main
int main(){
makeDogSound(std::make_unique<Dog>());
}
//test
class MockDog : public IDog {
public:
void bark() override {std::cout << "mock woof" << std::endl;}
};
//test main
int main(){
makeDogSound(std::make_unique<MockDog>());
}
I looked at some template-based approaches like the one below:
#include <iostream>
#include <memory>
class Dog{
public:
void bark() {std::cout << "woof" << std::endl;}
};
template<typename DogT>
void makeDogSound(std::unique_ptr<DogT> dog){
dog->bark();
}
//prod main
int main(){
makeDogSound(std::make_unique<Dog>());
}
//test
class MockDog{
public:
void bark() {std::cout << "mock woof" << std::endl;}
};
//test main
int main(){
makeDogSound(std::make_unique<MockDog>());
}
But it seems that:
It would be difficult to keep track of the "dog interface" signature because it would be generated on the fly, every time I call a dog method inside makeDogSound.
Autocomplete wouldn't work inside makeDogSound as it doesn't know about the methods available on Dog.
I don't rule out that maybe I'm not understanding the template-based approach well.
It also seems to me that using C++20 concepts could be a way to ensure a strong interface at compile time.
You're right that a C++20 concept would work well here to describe the interface:
#include <concepts>
#include <string>

template <typename Dog>
concept doglike = requires(Dog dog) {
    { dog.bark() };
    { dog.name() } -> std::convertible_to<std::string>; // example of how to specify a return type; use std::same_as if you don't want conversions
};
You could then write
template <doglike DogT>
void makeDogSound(std::unique_ptr<DogT> dog){
dog->bark();
}
and if you try calling this function template with an object that isn't doglike, you'll get a clear compiler error telling you so.
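A minimal end-to-end sketch of that failure mode, assuming the doglike concept and the makeDogSound template above (Husky and Cat are illustrative types, not from the original post):

#include <iostream>
#include <memory>
#include <string>

struct Husky
{
    void bark() { std::cout << "woof\n"; }
    std::string name() { return "Husky"; }
};

struct Cat {}; // satisfies neither requirement of doglike

int main()
{
    makeDogSound(std::make_unique<Husky>());   // OK: Husky satisfies doglike
    // makeDogSound(std::make_unique<Cat>());  // error: constraints not satisfied
}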

using intermediate class in a Crtp hierarchy without declaring a new class

I have a hierarchy similar to the following:
#include <iostream>
template<typename DerivedCrtp>
struct A
{
void Print() { std::cout << "A";}
};
struct B : public A<B>
{
};
template<typename DerivedCrtp>
struct C : public A<C<DerivedCrtp>>
{
void Print() { std::cout << "C";}
};
template<typename DerivedCrtp>
struct D : public C<D<DerivedCrtp>>
{
void Print() { std::cout << "D";}
};
struct CFinalized : public C<CFinalized>
{
void Print() { std::cout << "CFinal";}
};
// note: this does not compile as written; CSmart is used as its own default
// argument before it has been declared
template<typename DerivedCrtp = CSmart<>>
struct CSmart : public A<C<DerivedCrtp>>
{
void Print() { std::cout << "C";}
};
int main()
{
C<int> c;
D<int> d;
CFinalized cf;
c.Print();
d.Print();
cf.Print();
}
Because C uses CRTP, I can't directly use it without providing the self-derived type DerivedCrtp.
In order to use it I need to "finalize" its type (see CFinalized).
It works, but every time I need to use a class that is part of that hierarchy (which in my real code is deeper and contains several more template parameters), I have to explicitly declare a new class.
Is there a smarter way to do this?

Handling unique methods of Subclasses

I have a component in a piece of software that can be described by an interface / virtual class.
Which non-virtual subclass is needed is decided by a GUI selection at runtime.
Those subclasses have unique methods, for which it makes no sense to give them a shared interface (e.g. collection of different data types and hardware access).
A minimal code example looks like this:
#include <iostream>
#include <memory>
using namespace std;
// interface base class
class Base
{
public:
virtual void shared()=0;
};
// some subclasses with shared and unique methods
class A : public Base
{
public:
void shared()
{
cout << "do A stuff\n";
}
void methodUniqueToA()
{
cout << "stuff unique to A\n";
}
};
class B : public Base
{
public:
void shared()
{
cout << "do B stuff\n";
}
void methodUniqueToB()
{
cout << "stuff unique to B\n";
}
};
// main
int main()
{
// it is not known at compile time, which subtype will be needed. Therefore: pointer has base class type:
shared_ptr<Base> basePtr;
// choose which object subtype is needed by GUI - in this case e.g. now A is required. Could also have been B!
basePtr = make_shared<A>();
// do some stuff which needs interface functionality... so far so good
basePtr->shared();
// now I want to do methodUniqueToA() only if basePtr contains type A object
// this won't compile obviously:
basePtr->methodUniqueToA(); // COMPILE ERROR
// I could check the type using dynamic_pointer_cast, however this is not very elegant!
if(dynamic_pointer_cast<A>(basePtr))
{
dynamic_pointer_cast<A>(basePtr)->methodUniqueToA();
}
else
if(dynamic_pointer_cast<B>(basePtr))
{
dynamic_pointer_cast<B>(basePtr)->methodUniqueToB();
}
else
{
// throw some exception
}
return 0;
}
Methods methodUniqueTo*() could have different argument lists and return data which is omitted here for clarity.
I suspect this problem isn't a rare case, e.g. accessing different hardware through the different subclasses while also needing the polymorphic functionality of their container.
How does one generally do this?
For the sake of completeness: the output (with compiler error fixed):
do A stuff
stuff unique to A
You can have an enum which will represent the derived class. For example this:
#include <iostream>
#include <memory>
using namespace std;
enum class DerivedType
{
NONE = 0,
AType,
BType
};
class Base
{
public:
Base()
{
mType = DerivedType::NONE;
}
virtual ~Base() = default; //You should have a virtual destructor :)
virtual void shared() = 0;
DerivedType GetType() const { return mType; };
protected:
DerivedType mType;
};
// some subclasses with shared and unique methods
class A : public Base
{
public:
A()
{
mType = DerivedType::AType;
}
void shared()
{
cout << "do A stuff\n";
}
void methodUniqueToA()
{
cout << "stuff unique to A\n";
}
};
class B : public Base
{
public:
B()
{
mType = DerivedType::BType;
}
void shared()
{
cout << "do B stuff\n";
}
void methodUniqueToB()
{
cout << "stuff unique to B\n";
}
};
// main
int main()
{
shared_ptr<Base> basePtr;
basePtr = make_shared<B>();
basePtr->shared();
// Here :)
if(basePtr->GetType() == DerivedType::AType)
static_cast<A*>(basePtr.get())->methodUniqueToA();
else if(basePtr->GetType() == DerivedType::BType)
static_cast<B*>(basePtr.get())->methodUniqueToB();
return 0;
}
You can store an enum and initialize it in the constructor, then have a getter that gives you the type. A simple static_cast after checking the type will do the job.
The goal of using polymorphism is to let the client control different objects in a single, uniform way. In other words, the client does not have to pay attention to the differences between objects, so checking the type of each object violates this basic goal.
To achieve the goal, you will have to:
1. write the concrete method (methodUniqueToX()),
2. write a wrapper around the concrete method,
3. give the wrapper an abstract, event-like name,
4. make the wrapper public and part of the interface (pure virtual in the base class).
class Base
{
public:
virtual void shared()=0;
virtual void onEvent1()=0;
virtual void onEvent2()=0;
};
// some subclasses with shared and unique methods
class A : public Base
{
private:
void methodUniqueToA()
{
cout << "stuff unique to A\n";
}
public:
void shared()
{
cout << "do A stuff\n";
}
void onEvent1()
{
this->methodUniqueToA();
}
void onEvent2()
{
}
};
class B : public Base
{
private:
void methodUniqueToB()
{
cout << "stuff unique to B\n";
}
public:
void shared()
{
cout << "do B stuff\n";
}
void onEvent1()
{
}
void onEvent2()
{
methodUniqueToB();
}
};
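A possible usage sketch (not part of the original answer, and assuming the includes and using-declaration from the question): the client drives everything through the Base interface and never checks the concrete type.

int main()
{
    shared_ptr<Base> basePtr = make_shared<A>();
    basePtr->shared();   // "do A stuff"
    basePtr->onEvent1(); // A forwards to methodUniqueToA(); B would do nothing here
    basePtr->onEvent2(); // A does nothing here; B would forward to methodUniqueToB()
    return 0;
}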

c++ class design, base class inheritance, or facade design pattern

I have a dumb c++ design question. Is there a way for one class to have the same method names (hence, the same API) of the methods found in several classes?
My current situation is that I have a situation where I have classes
struct A
{
void foo() { std::cout << "A::foo" << std::endl;}
void boo() { std::cout << "A::boo" << std::endl;}
};
struct B
{
void moo() { std::cout << "B::moo" << std::endl;}
void goo() { std::cout << "B::goo" << std::endl;}
};
.... imagine possibly more
What I really want is another class that acts as an interface for all of these functionalities. I may be (mis)interpreting this as the facade design pattern: a simple interface that hides the complexity of instantiating the classes above while still exposing their interface.
struct C
{
void foo() { ... }
void boo() { ... }
void moo() { ... }
void goo() { ... }
};
For the small number of methods shown above this is feasible, either by declaring structs A and B as members or by passing them in as parameters to struct C and calling the methods of A and B from C, but this becomes impractical if A has 40 methods and B has 30 methods. Redeclaring 70 methods with the same names in C just to call the underlying methods of A and B seems like a lot of redundancy for no reason, if I can do better.
I thought of a second solution using a base class:
struct base
{
void foo() { }
void boo() { }
void moo() { }
void goo() { }
};
struct A : public base
{
void foo() { std::cout << "A::foo" << std::endl;}
void boo() { std::cout << "A::boo" << std::endl;}
};
struct B : public base
{
void moo() { std::cout << "B::moo" << std::endl;}
void goo() { std::cout << "B::goo" << std::endl;}
};
and then try to use a shared_ptr to the base that has all the function definitions, e.g.
std::shared_ptr<base> l_var;
l_var->foo();
l_var->boo();
l_var->moo();
l_var->goo();
That still doesn't quite give me what I want because half of the methods are defined in struct A while the other half is in struct B.
I was wondering if multiple inheritance would do the trick but in school I heard it's bad practice to do multiple inheritance (debugging is hard?)
Any thoughts or recommendations? Basically it's easier to manage structs A and B (and so on), each as its own class for abstraction purposes, but I would like the flexibility of somehow calling their methods through some wrapper where this complexity is hidden from the user.
I think that
"Redeclaring 70 methods with the same name in C to call the underlying methods of A and B"
is the right path.
It is tempting to use multiple inheritance in cases like this to avoid writing pass-through code but I think that is generally a mistake. Prefer composition over inheritance.
I would question whether your user really wants to deal with one interface with 70 methods but if that's really what you want then I don't see why it is "impractical" to write the code in C:
class C {
A a;
B b;
public:
void foo() { return a.foo(); }
void boo() { return a.boo(); }
void moo() { return b.moo(); }
void goo() { return b.goo(); }
// ...
};
Live demo.
This has the advantage that you can easily change your mind in the future and replace A and B with something else without changing the interface of C.
You can hide the implementation of C further by using the PIMPL idiom or by splitting C into an abstract base class C and an implementation CImpl.
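A rough sketch of that split, reusing the A and B from the question (the makeC factory is illustrative, not from the answer): the user sees only the abstract C, and the composition of A and B is hidden in CImpl.

#include <iostream>
#include <memory>

struct A
{
    void foo() { std::cout << "A::foo" << std::endl; }
    void boo() { std::cout << "A::boo" << std::endl; }
};
struct B
{
    void moo() { std::cout << "B::moo" << std::endl; }
    void goo() { std::cout << "B::goo" << std::endl; }
};

// Abstract interface the user sees; no mention of A or B.
class C
{
public:
    virtual ~C() = default;
    virtual void foo() = 0;
    virtual void boo() = 0;
    virtual void moo() = 0;
    virtual void goo() = 0;
};

// Hidden implementation that composes A and B.
class CImpl : public C
{
    A a;
    B b;
public:
    void foo() override { a.foo(); }
    void boo() override { a.boo(); }
    void moo() override { b.moo(); }
    void goo() override { b.goo(); }
};

std::unique_ptr<C> makeC() { return std::make_unique<CImpl>(); }

int main()
{
    auto c = makeC();
    c->foo();
    c->moo();
}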
A Bridge design pattern will shine here. By decoupling the abstraction from its implementation, many derived classes can use these implementations separately.
struct base {
protected:
struct impl;
unique_ptr<impl> _impl;
};
struct base::impl {
void foo() {}
void bar() {}
};
struct A :public base {
void foo() { _impl->foo(); }
};
struct B:public base {
void foo() { _impl->foo(); }
void bar() { _impl->bar(); }
};
Edited (example implementation):
#include <memory>
#include <iostream>
using namespace std;
struct base {
base();
protected:
struct impl;
unique_ptr<impl> _impl;
};
struct base::impl {
void foo() { cout << " foo\n"; }
void bar() { cout << " bar\n"; }
void moo() { cout << " moo\n"; }
void goo() { cout << " goo\n"; }
};
base::base():_impl(new impl()) {}
struct A :public base {
A():base() { }
void foo() { _impl->foo(); }
};
struct B:public base {
B() :base() { }
void foo() { _impl->foo(); }
void bar() { _impl->bar(); }
};
struct C :public base {
C() :base() { }
void foo() { _impl->foo(); }
void bar() { _impl->bar(); }
void moo() { _impl->moo(); }
void goo() { _impl->goo(); }
};
int main()
{
B b;
b.foo();
C c1;
c1.foo();
c1.bar();
c1.moo();
c1.goo();
return 0;
}
Use virtual multiple inheritance. The reason why
"it's bad practice to do multiple inheritance"
is that it can directly lead to ambiguous calls or redundant data, so you can use virtual inheritance to avoid that.
Studying how C++ implements iostream will help a lot, I think.
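A tiny sketch of what virtual inheritance buys you here (illustrative names, not taken from the question): both A and B inherit virtually from base, so a class combining them gets a single base subobject and no ambiguity.

#include <iostream>

struct base { int shared_state = 0; };

struct A : virtual base
{
    void foo() { std::cout << "A::foo, state " << shared_state << std::endl; }
};

struct B : virtual base
{
    void moo() { std::cout << "B::moo, state " << shared_state << std::endl; }
};

// C exposes both interfaces; virtual inheritance means one base, not two.
struct C : A, B {};

int main()
{
    C c;
    c.shared_state = 42; // unambiguous: exactly one base subobject
    c.foo();
    c.moo();
}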
I second Chris Drew's answer: not only is multiple inheritance bad, any inheritance is bad compared to composition.
The purpose of the Facade pattern is to hide complexity. As in, given your classes A and B with 40 and 30 methods, a facade would expose only about 10 of them, those needed by the user. That sidesteps the "if A has 40 methods and B has 30 methods then you have a big problem" issue – n.m.
By the way, I love how you are using struct{} instead of class{public:}. This is quite controversial and the general consensus is that it constitutes bad form, but the STL does it and I do it.
Back to the question. If all 70 methods really need to be exposed (!!), I would take a more Pythonic approach:
struct Iface
{
A _a;
B _b;
};
If you manage to make the fields const, things get even less bad. And for the third time - you are probably violating SRP with those large classes.
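Usage is then explicit about which sub-object does the work (a short sketch, assuming the A and B structs from the question are in scope):

int main()
{
    Iface iface;
    iface._a.foo(); // "A::foo"
    iface._b.moo(); // "B::moo"
}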

calling child methods from parent pointer with different child classes

I have a parent class with two or more child classes deriving from it. The number of different child classes may increase in the future as more requirements are presented, but they'll all adhere to the base class scheme and will contain a few unique methods of their own. Let me present an example -
#include <iostream>
#include <string>
#include <vector>
#include <memory>
class B{
private: int a; int b;
public: B(const int _a, const int _b) : a(_a), b(_b){}
virtual ~B() = default; // needed because objects are deleted through unique_ptr<B>
virtual void tell(){ std::cout << "BASE" << std::endl; }
};
class C : public B{
std::string s;
public: C(int _a, int _b, std::string _s) : B(_a, _b), s(_s){}
void tell() override { std::cout << "CHILD C" << std::endl; }
void CFunc() {std::cout << "Can be called only from C" << std::endl;}
};
class D : public B{
double d;
public: D(int _a, int _b, double _d) : B(_a, _b), d(_d){}
void tell() override { std::cout << "CHILD D" << std::endl; }
void DFunc() {std::cout << "Can be called only from D" << std::endl;}
};
int main() {
std::vector<std::unique_ptr<B>> v;
v.push_back(std::make_unique<C>(1,2, "boom"));
v.push_back(std::make_unique<D>(1,2, 44.3));
for(auto &el: v){
el->tell();
}
return 0;
}
In the above example the tell() method works correctly since it is virtual and properly overridden in the child classes. However, for now I'm unable to call the CFunc() and DFunc() methods of their respective classes. So I have two options in mind -
either pack CFunc() and friends inside some already defined virtual method in the child class so that they execute together. But I'll lose control over the particular execution of the unique methods as their number rises.
or provide some pure virtual methods in the base class, something like void process() = 0, and let the child classes define them as they like. It would probably be left empty (void process(){}) by some and used by others. But again it doesn't feel right, as I've lost the return value and arguments along the way. Also, like the previous option, if there are more methods in some child class, this doesn't feel like the right way to solve it.
and another -
dynamic_cast<>? Would that be a nice option here - casting the parent's pointer back to the child's pointer (btw I'm using smart pointers here, so only unique/shared are allowed) and then calling the required function. But how would I differentiate between the different child classes? Another public member that returns some unique class enum value?
I'm quite inexperienced with this scenario and would like some feedback. How should I approach this problem?
I have a parent class with 2 or more child classes deriving from it... But I'll lose control over the particular execution of unique methods as their number rises.
Another option, useful when the number of methods is expected to increase, and the derived classes are expected to remain relatively stable, is to use the visitor pattern. The following uses boost::variant.
Say you start with your three classes:
#include <memory>
#include <iostream>
using namespace std;
using namespace boost;
class b{};
class c : public b{};
class d : public b{};
Instead of using a (smart) pointer to the base class b, you use a variant type:
using variant_t = variant<c, d>;
and variant variables:
variant_t v{c{}};
Now, if you want to handle c and d methods differently, you can use:
struct unique_visitor : public boost::static_visitor<void> {
void operator()(c c_) const { cout << "c" << endl; };
void operator()(d d_) const { cout << "d" << endl; };
};
which you would call with
apply_visitor(unique_visitor{}, v);
Note that you can also use the same mechanism to uniformly handle all types, by using a visitor that accepts the base class:
struct common_visitor : public boost::static_visitor<void> {
void operator()(b b_) const { cout << "b" << endl; };
};
apply_visitor(common_visitor{}, v);
Note that if the number of classes increases faster than the number of methods, this approach will cause maintenance problems.
Full code:
#include "boost/variant.hpp"
#include <iostream>
using namespace std;
using namespace boost;
class b{};
class c : public b{};
class d : public b{};
using variant_t = variant<c, d>;
struct unique_visitor : public boost::static_visitor<void> {
void operator()(c c_) const { cout << "c" << endl; };
void operator()(d d_) const { cout << "d" << endl; };
};
struct common_visitor : public boost::static_visitor<void> {
void operator()(b b_) const { cout << "b" << endl; };
};
int main() {
variant_t v{c{}};
apply_visitor(unique_visitor{}, v);
apply_visitor(common_visitor{}, v);
}
You can declare interfaces with pure methods for each device class. When you define a specific device implementation, you inherit only from the interfaces that make sense for it.
Using the interfaces that you define, you can then iterate and call methods which are specific to each device class.
In the following example I have declared a HardwareInterface which will be inherited by all devices, and an AlertInterface which will be inherited only by hardware devices that can physically alert a user. Other similar interfaces can be defined, such as SensorInterface, LEDInterface, etc.
#include <iostream>
#include <memory>
#include <vector>
class HardwareInterface {
public:
virtual void on() = 0;
virtual void off() = 0;
virtual char read() = 0;
virtual void write(char byte) = 0;
};
class AlertInterface {
public:
virtual void alert() = 0;
};
class Buzzer : public HardwareInterface, public AlertInterface {
public:
virtual void on();
virtual void off();
virtual char read();
virtual void write(char byte);
virtual void alert();
};
void Buzzer::on() {
std::cout << "Buzzer on!" << std::endl;
}
void Buzzer::off() {
/* TODO */
}
char Buzzer::read() {
return 0;
}
void Buzzer::write(char byte) {
/* TODO */
}
void Buzzer::alert() {
std::cout << "Buzz!" << std::endl;
}
class Vibrator : public HardwareInterface, public AlertInterface {
public:
virtual void on();
virtual void off();
virtual char read();
virtual void write(char byte);
virtual void alert();
};
void Vibrator::on() {
std::cout << "Vibrator on!" << std::endl;
}
void Vibrator::off() {
/* TODO */
}
char Vibrator::read() {
return 0;
}
void Vibrator::write(char byte) {
/* TODO */
}
void Vibrator::alert() {
std::cout << "Vibrate!" << std::endl;
}
int main(void) {
std::shared_ptr<Buzzer> buzzer = std::make_shared<Buzzer>();
std::shared_ptr<Vibrator> vibrator = std::make_shared<Vibrator>();
std::vector<std::shared_ptr<HardwareInterface>> hardware;
hardware.push_back(buzzer);
hardware.push_back(vibrator);
std::vector<std::shared_ptr<AlertInterface>> alerters;
alerters.push_back(buzzer);
alerters.push_back(vibrator);
for (auto device : hardware)
device->on();
for (auto alerter : alerters)
alerter->alert();
return 0;
}
Interfaces can be even more specific, as per individual sensor type: AccelerometerInterface, GyroscopeInterface, etc.
While what you ask is possible, it will either result in your code scattered with casts, or functions available on classes that make no sense. Both are undesirable.
If you need to know if it's a class C or D, then most likely either storing it as a B is wrong, or your interface B is wrong.
The whole point of polymorphism is that the things using B don't need to know exactly what sort of B it is. To me, it sounds like you're extending classes rather than having them as members, i.e. "C is a B" doesn't make sense, but "C has a B" does.
I would go back and reconsider what B, C, D and all future items do, and why they have these unique functions that you need to call, and look into whether function overloading is what you really want to do (similar to Ami Tavory's suggestion of the visitor pattern).
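A short sketch of the overloading idea (illustrative only, assuming the B, C and D classes from the question are in scope): the overload is chosen at the point where the concrete type is still known, before it is erased into a pointer to B.

void process(C& c) { c.CFunc(); } // picked at compile time for C
void process(D& d) { d.DFunc(); } // picked at compile time for D

int main()
{
    C c(1, 2, "boom");
    D d(1, 2, 44.3);
    process(c); // "Can be called only from C"
    process(d); // "Can be called only from D"
    return 0;
}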
You can use unique_ptr::get() to obtain the raw pointer held by the unique_ptr, and then use that pointer as normal, like this:
for (auto &el : v) {
el->tell();
D* pd = dynamic_cast<D*>(el.get());
if (pd != nullptr)
{
pd->DFunc();
}
C* pc = dynamic_cast<C*>(el.get());
if (pc != nullptr)
{
pc->CFunc();
}
}
and the result is this:
CHILD C
Can be called only from C
CHILD D
Can be called only from D
You should use your 1st approach if you can, to hide as many type-specific implementation details as possible.
Then, if you need public interfaces, you should use virtual functions (your 2nd approach) and avoid dynamic_cast (your 3rd approach). Many threads can tell you why (e.g. Polymorphism vs DownCasting), and you already mentioned one good reason yourself: you shouldn't really be checking for the object type ...
If you have a problem with virtual functions because your derived classes have too many unique public interfaces, then it's not an IS-A relationship and it's time to review your design. For example, for shared functionality, consider composition rather than inheritance, as sketched below ...
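A small illustration of that last point (the names are made up for the sketch): the shared functionality lives in one class that the concrete type holds as a member instead of inheriting it.

#include <iostream>

class SharedPart
{
public:
    void shared() { std::cout << "shared behaviour\n"; }
};

class AThing
{
    SharedPart common; // has-a instead of is-a
public:
    void shared() { common.shared(); }
    void methodUniqueToA() { std::cout << "A-only behaviour\n"; }
};

int main()
{
    AThing a;
    a.shared();
    a.methodUniqueToA();
}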
There have been a lot of comments (on the OP and on Ami Tavory's answer) about the visitor pattern.
I think it is an acceptable answer here (considering the OP's question); even if the visitor pattern has disadvantages, it also has advantages (see this topic: What are the actual advantages of the visitor pattern? What are the alternatives?). Basically, if you need to add a new child class later, the pattern forces you to consider every place where a specific action for the new class has to be taken (the compiler will force you to implement the new visit method in all your existing visitor child classes).
An easy implementation (without boost):
#include <iostream>
#include <string>
#include <vector>
#include <memory>
class C;
class D;
class Visitor
{
public:
virtual ~Visitor() {}
virtual void visitC( C& c ) = 0;
virtual void visitD( D& d ) = 0;
};
class B{
private: int a; int b;
public: B(const int _a, const int _b) : a(_a), b(_b){}
virtual void tell(){ std::cout << "BASE" << std::endl; }
virtual void Accept( Visitor& v ) = 0; // force child class to handle the visitor
};
class C : public B{
std::string s;
public: C(int _a, int _b, std::string _s) : B(_a, _b), s(_s){}
void tell() override { std::cout << "CHILD C" << std::endl; }
void CFunc() {std::cout << "Can be called only from C" << std::endl;}
virtual void Accept( Visitor& v ) { v.visitC( *this ); }
};
class D : public B{
double d;
public: D(int _a, int _b, double _d) : B(_a, _b), d(_d){}
void tell() override { std::cout << "CHILD D" << std::endl; }
void DFunc() {std::cout << "Can be called only from D" << std::endl;}
virtual void Accept( Visitor& v ) { v.visitD( *this ); }
};
int main() {
std::vector<std::unique_ptr<B>> v;
v.push_back(std::make_unique<C>(1,2, "boom"));
v.push_back(std::make_unique<D>(1,2, 44.3));
// declare a new visitor every time you need a child-specific operation to be done
class callFuncVisitor : public Visitor
{
public:
callFuncVisitor() {}
virtual void visitC( C& c )
{
c.CFunc();
}
virtual void visitD( D& d )
{
d.DFunc();
}
};
callFuncVisitor visitor;
for(auto &el: v){
el->Accept(visitor);
}
return 0;
}
Live demo: https://ideone.com/JshiO6
Dynamic casting is the tool of absolute last resort. It is usually used when you are trying to overcome a poorly designed library that cannot be modified safely.
The only reason to need this sort of support is when you require parent and child instances to coexist in a collection. Right? The logic of polymorphism says all specialization methods that cannot logically exist in the parent should be referenced from within methods that do logically exist in the parent.
In other words, it is perfectly fine to have child class methods that don't exist in the parent to support the implementation of a virtual method.
A task queue implementation is the quintessential example (see below)
The special methods support the primary run() method. This allows a stack of tasks to be pushed into a queue and executed, no casts, no visitors, nice clean code.
// INCOMPLETE CODE
class Task
{
public:
virtual ~Task() = default;
virtual void run() = 0;
};
class PrintTask : public Task
{
private:
void printstuff()
{
// printing magic
}
public:
void run()
{
printstuff();
}
};
class EmailTask : public Task
{
private:
void SendMail()
{
// send mail magic
}
public:
void run()
{
SendMail();
}
};
class SaveTask : public Task
{
private:
void SaveStuff()
{
// save stuff magic
}
public:
void run()
{
SaveStuff();
}
};
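A possible driver for the classes above (assumed, not part of the original answer): tasks of different concrete types sit in one queue and are executed purely through Task::run().

#include <memory>
#include <queue>

int main()
{
    std::queue<std::unique_ptr<Task>> tasks;
    tasks.push(std::make_unique<PrintTask>());
    tasks.push(std::make_unique<EmailTask>());
    tasks.push(std::make_unique<SaveTask>());

    while (!tasks.empty())
    {
        tasks.front()->run(); // no casts, no type checks
        tasks.pop();
    }
    return 0;
}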
Here's a "less bad" way of doing it, while keeping it simple.
Key points:
We avoid losing type information during the push_back()
New derived classes can be added easily.
Memory gets deallocated as you'd expect.
It's easy to read and maintain, arguably.
struct BPtr
{
    // owning pointers are declared first so bPtr is initialized after them
    std::unique_ptr<C> cPtr;
    std::unique_ptr<D> dPtr;
    B* bPtr;
    BPtr(std::unique_ptr<C> p) : cPtr(std::move(p)), bPtr(cPtr.get())
    { }
    BPtr(std::unique_ptr<D> p) : dPtr(std::move(p)), bPtr(dPtr.get())
    { }
};
int main()
{
std::vector<BPtr> v;
v.push_back(BPtr(std::make_unique<C>(1,2, "boom")));
v.push_back(BPtr(std::make_unique<D>(1,2, 44.3)));
for(auto &el: v){
el.bPtr->tell();
if(el.cPtr) {
el.cPtr->CFunc();
}
if(el.dPtr) {
el.dPtr->DFunc();
}
}
return 0;
}