Tightly coupled parallel class hierarchies in C++

For context, I'm working on a C++ artificial-life system involving agents controlled by recurrent neural networks, but the details aren't important.
I need to keep two separate object hierarchies, one for the "brain" and one for the "body" of my agents, with a variety of brain and body types that can be coupled to each other at run-time. The separation is there to avoid a combinatorial explosion: otherwise I would have to enumerate every combination of how a body works and how a brain works as its own class.
For example, there are many topologies and styles of recurrent neural network with a variety of different transfer functions and input/output conventions. These details don't depend on how the body of the agent works, however, as long as sensory inputs can be encoded into neural activity and then decoded into actions.
Here is a simple class hierarchy that illustrates the problem and one potential solution:
// Classes we are going to declare
class Image2D; // fake
class Angle2D; // fake
class Brain;
class Body;
class BodyWithEyes;
class BrainWithVisualCortex;

// Brain and Body base classes know about their parallels
class Brain
{
public:
    Body* base_body;
    Body* body() { return base_body; }
    virtual Brain* copy() { return 0; } // fake
    // ...etc
};

class Body
{
public:
    Brain* base_brain;
    Brain* brain() { return base_brain; }
    virtual Body* reproduce() { return 0; } // fake
    // ...etc
};

// Now introduce two strongly coupled derived classes, whose access methods
// hide the base versions and return the parallel derived type
class BrainWithVisualCortex : public Brain
{
public:
    BodyWithEyes* body();
    virtual void look_for_snakes();
    virtual Angle2D* where_to_look_next() { return 0; } // fake
};

class BodyWithEyes : public Body
{
public:
    BrainWithVisualCortex* brain();
    virtual void swivel_eyeballs();
    virtual Image2D* get_image() { return 0; } // fake
};

// Member functions of these derived classes
void BrainWithVisualCortex::look_for_snakes()
{
    Image2D* image = body()->get_image();
    // ... find snakes and respond
}

void BodyWithEyes::swivel_eyeballs()
{
    Angle2D* next = brain()->where_to_look_next();
    // ... move muscles to achieve the brain's desired gaze
}

// Sugar to allow derived parallel classes to refer to each other
BodyWithEyes* BrainWithVisualCortex::body()
{ return dynamic_cast<BodyWithEyes*>(base_body); }

BrainWithVisualCortex* BodyWithEyes::brain()
{ return dynamic_cast<BrainWithVisualCortex*>(base_brain); }

// pretty vacuous test
int main()
{
    BodyWithEyes* body = new BodyWithEyes;
    BrainWithVisualCortex* brain = new BrainWithVisualCortex;
    body->base_brain = brain;
    brain->base_body = body;
    brain->look_for_snakes();
    body->swivel_eyeballs();
}
The trouble with this approach is that it's clunky and not particularly type-safe. It does have the benefit that the body() and brain() member functions provide a bit of sugar for derived classes to refer to their partners.
Does anyone know of a better way of accomplishing this tight coupling between 'parallel' hierarchies of classes? Does this pattern come up often enough to have warranted a well-known general solution? A perusal of the usual sources didn't reveal any established patterns that match this problem.
Any help appreciated!

I think what you are doing is approximately correct. You would want members such as reproduce to be pure virtual, though, so the base classes cannot be instantiated. What is your issue with type-safety? You don't want the Brain subclass and the Body subclass to depend on each other's types.
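For illustration, here is a sketch of that suggestion applied to the question's base classes; this is just the question's code with copy()/reproduce() made pure virtual and virtual destructors added:

class Body; // forward declaration

class Brain
{
public:
    virtual ~Brain() {}
    Body* base_body = nullptr;
    Body* body() { return base_body; }
    virtual Brain* copy() = 0; // pure virtual: no bare Brain objects
};

class Body
{
public:
    virtual ~Body() {}
    Brain* base_brain = nullptr;
    Brain* brain() { return base_brain; }
    virtual Body* reproduce() = 0; // pure virtual: no bare Body objects
};

With this change, BrainWithVisualCortex and BodyWithEyes would each need to override copy()/reproduce() for the test in main() to still compile.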

Related

C++: implementing multiple instances of an interface or an optional interface in a class

I'm having trouble finding best practice information about what I believe should be a fairly common problem pattern.
I will start with a specific (software update related) example, because it makes the discussion more concrete, but the issue should be fairly generic.
Say that I have a software updater interface:
struct Software_updater {
virtual ~Software_updater() = default;
virtual void action1(const Input1& input1) = 0;
virtual void action2() = 0;
virtual bool action3(const Input2& input2) = 0;
virtual Data1 info1() = 0;
virtual Data2 info2() = 0;
// etc.
};
For my first implementation A, I am lucky, everything is straightforward.
class A_software_updater : public Software_updater {
// ...
};
A B_software_updater, however, is more complicated. Like in the A-case, it is connected to the target to update in a non-trivial manner and maintains a target connection state. But more importantly, it can update two images: the application image, and the boot loader image.
Liking what I have so far, I see no real reason to go for a refactoring, so I assume I can just build upon it. I come up with the following solution:
class B_software_updater {
public:
Software_updater& application_updater() { return application_updater_; }
Software_updater& boot_loader_updater() { return boot_loader_updater_; }
private:
class Application_updater : public Software_updater {
// ...
} application_updater_;
class Boot_loader_updater : public Software_updater {
// ...
} boot_loader_updater_;
};
I.e. I am returning non-const references to "interfaces to" member variables. Note that they cannot be const, since they mutate state.
Request 1: I think the solution above is a clean one, but I would be happy to get some confirmation.
In fact, I have recently faced the issue of having to optionally provide an interface in a class, based on compile-time selection of a feature, and I believe the pattern above is a solution for that problem too:
struct Optional_interface {
virtual ~Optional_interface() = default;
virtual void action1(const Input1& input1) = 0;
virtual void action2() = 0;
virtual bool action3(const Input2& input2) = 0;
virtual Data1 info1() = 0;
virtual Data2 info2() = 0;
// etc.
};
class A_implementation {
public:
#ifdef OPTIONAL_FEATURE
Optional_interface& optional_interface() { return optional_implementation_; }
#endif
// ...
private:
#ifdef OPTIONAL_FEATURE
class Optional_implementation : public Optional_interface {
// ...
} optional_implementation_;
#endif
// ...
};
Request 2: I could not find a simple (as in: not unnecessarily complicated template-based) and clean way to express a compile-time optional inheritance at the A_implementation-level. Can you?
Better solution
Based on a comment from @ALX23z about invalidation of member variable references upon move, I am now rejecting my initial solution (original post). That invalidation problem would not be an issue in my case, but I am looking for a generic pattern.
As usual, the solution is obvious once one has found it.
First a summary of my initial problem.
Say that I have a software updater interface (or any interface, this is just an example):
struct Software_updater {
virtual ~Software_updater() = default;
virtual void action1(const Input1& input1) = 0;
virtual void action2() = 0;
virtual bool action3(const Input2& input2) = 0;
virtual Data1 info1() = 0;
virtual Data2 info2() = 0;
// etc.
};
A B_software_updater can update two images: an application image, and a boot loader image. Therefore, it wants to provide two instances of the Software_updater interface.
A solution that is better than the one in my original post is to declare a B_application_updater and a B_boot_loader_updater, constructed from a B_software_updater&, outside of B_software_updater, and instantiated by client code.
class B_application_updater : public Software_updater {
public:
    B_application_updater(B_software_updater&);
    // ...
};
class B_boot_loader_updater : public Software_updater {
public:
    B_boot_loader_updater(B_software_updater&);
    // ...
};
It does have the drawback of forcing the client code to create three objects instead of only one, but I think that the cleanliness outweighs that drawback.
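Client code would then look roughly like this (a sketch; the constructors are assumed to store the reference and forward calls to it):

B_software_updater updater;                  // the one object doing the real work
B_application_updater app_updater(updater);  // Software_updater view of the application image
B_boot_loader_updater boot_updater(updater); // Software_updater view of the boot loader image
// Each adapter can now be passed anywhere a Software_updater& is expected.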
This will work for the optional interface too (see original post):
class A_optional_implementation : public Optional_interface {
A_optional_implementation(A_implementation&);
};
A_optional_implementation will be declared outside of A_implementation.
Applications that do not need that interface will simply not instantiate A_optional_implementation.
Additional thoughts
This is an application of the adapter design pattern!
Basically, what this answer comes down to:
An Interface class.
An Implementation class that does the job, but does not really care about the interface. It does not inherit Interface. The point of this is that Implementation could "do the job" corresponding to several interfaces, without the complexity and drawbacks of multiple inheritance (name conflicts, etc.). It could also do the job corresponding to several instances of the same interface (my case above).
An Interface_adapter class that takes an Implementation& parameter in its constructor. It inherits Interface, i.e. it effectively implements it, and that is its only purpose.
Taking a step back, I realize that this is simply an application of the adapter pattern (although Implementation in this case does not necessarily need to implement any externally defined interface - its interface is just its public member functions)!
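For concreteness, here is a minimal sketch of those three roles, using the names from the summary above (the single action() method and do_the_job() are placeholders of my own):

struct Interface {
    virtual ~Interface() = default;
    virtual void action() = 0;
};

// Does the actual work; knows nothing about Interface.
class Implementation {
public:
    void do_the_job() { /* ... */ }
};

// Its only purpose is to implement Interface by forwarding to an Implementation.
class Interface_adapter : public Interface {
public:
    explicit Interface_adapter(Implementation& impl) : impl_(impl) {}
    void action() override { impl_.do_the_job(); }
private:
    Implementation& impl_;
};

Several adapters (or several instances of the same adapter) can wrap a single Implementation, which is exactly the two-updaters case above.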
An intermediate solution: leave the adapter classes inside the implementation class
In the solution above, I specify that the adapter classes are declared outside of the implementation classes. While this seems logical for the traditional adapter pattern case, for my case, I could just as well declare them inside the implementation class (like I did in the original post) and make them public. The client code would still have to create the implementation and adapter objects, but the adapter classes would belong to the implementation namespace, which would look nicer.
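A corresponding sketch of this intermediate variant, reusing the placeholder Interface from the sketch above (the nested class is simply the adapter moved inside the implementation class and made public):

class Implementation_with_nested_adapter {
public:
    void do_the_job() { /* ... */ }

    // The adapter now lives in the implementation class's scope.
    class Adapter : public Interface {
    public:
        explicit Adapter(Implementation_with_nested_adapter& impl) : impl_(impl) {}
        void action() override { impl_.do_the_job(); }
    private:
        Implementation_with_nested_adapter& impl_;
    };
};

// Client code still creates both objects:
//   Implementation_with_nested_adapter impl;
//   Implementation_with_nested_adapter::Adapter adapter(impl);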

Avoiding performance concerns of Runtime Polymorphism

In a numerical code run on thousands of processors for 10s of hours, I have a base class (Mesh) whose methods are hit 100s to 1000s of millions of times. There are currently two (Mesh_A, Mesh_B) derived classes, but eventually this will expand to three or four. User code cannot know until runtime whether its pointer to Mesh is actually a Mesh_A or Mesh_B, but for the rest of the run, it never changes.
Current Implementation:
// Base class
class Mesh {
    ...
    virtual const Point& cell_centroid(int c) = 0;
};
// derived class A
class MeshA : public Mesh {
    ...
    const Point& cell_centroid(int c) override { return cell_centroids_[c]; }
};
// derived class B
class MeshB : public Mesh {
    ...
    const Point& cell_centroid(int c) override { return other_framework_->cell_centroid(c); }
};
// typical user class
class User {
    User(Mesh* mesh) : mesh_(mesh) {}
    void evalFunction() {
        for (int c = 0; c != mesh_->num_cells(); ++c) {
            double result = func(mesh_->cell_centroid(c));
            ...
        }
    }
    // Other methods which use mesh_->cell_centroid() very often, and in different ways.
};
Previously, MeshA was the only Mesh, there was no base class, and the heavily hit methods were all inlined. Profiling shows that the change to runtime polymorphism with virtual methods (likely due to the loss of inlining?) has resulted in a ~15% hit, which just isn't going to fly.
I've been poring over static polymorphism and other ideas, but I'd love to hear thoughts on how one might avoid this hit in a reasonably sustainable way.
Idea 1: Coarsen the virtual function to amortize overhead. One thought was to encapsulate all the "calling patterns" of these methods inside virtual methods, lifting the virtual call to a coarser level while keeping the fine-grained methods non-virtual. In the example above, one could pass a function pointer to a new virtual method of Mesh that implements the loop, returns an array of doubles, and calls a non-virtual, inlined cell_centroid() internally.
// Base class
class Mesh {
    ...
    virtual void evalFunction(double (*func)(Point&), std::vector<double>* result) = 0;
};
// derived class A
class MeshA : public Mesh {
    ...
    void evalFunction(double (*func)(Point&), std::vector<double>* result) override {
        for (int c = 0; c != num_cells(); ++c) (*result)[c] = (*func)(cell_centroid(c));
    }
    Point& cell_centroid(int c) { return cell_centroids_[c]; }
};
// similar for B
// typical user class
class User {
    User(Mesh* mesh) : mesh_(mesh) {}
    void evalFunction() {
        std::vector<double> result(mesh_->num_cells());
        mesh_->evalFunction(&func, &result); // func is the same free callback as above
    }
};
I'm a little nervous that this will make the Mesh interface huge -- I don't have a single access pattern (like the example) that could easily be encapsulated. My guess is that, for every virtual method in the current Mesh class (15-20), I'd have 3 or 4 different "calling patterns", and the interface for Mesh would explode. There are a variety of "User" classes and, while they sometimes use Mesh the same way, they don't always, and I don't want to limit myself to a few patterns.
Idea 2: Template all user code with Mesh_T. Write a factory that creates User<MeshA> or User<MeshB> instances depending upon runtime information. This is a little concerning because this will effectively mean that my entire code is templated code, compile times will blow up, errors will be harder to debug etc. A large code base would be touched.
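To make Idea 2 a bit more concrete, here is a rough, self-contained sketch of templated user code plus a one-shot dispatch; the trimmed-down mesh types, the summed .x standing in for func(...), and the run() helper are all placeholders of my own, not the real code:

#include <vector>

struct Point { double x = 0, y = 0, z = 0; };

struct Mesh { virtual ~Mesh() = default; };

struct MeshA : Mesh {
    std::vector<Point> cell_centroids_;
    int num_cells() const { return static_cast<int>(cell_centroids_.size()); }
    const Point& cell_centroid(int c) const { return cell_centroids_[c]; }
};

struct MeshB : Mesh {
    std::vector<Point> cells_; // stand-in for the other framework's storage
    int num_cells() const { return static_cast<int>(cells_.size()); }
    const Point& cell_centroid(int c) const { return cells_[c]; }
};

// User code is templated on the concrete mesh type, so cell_centroid() is a
// non-virtual call that the compiler can inline.
template <class MeshT>
class User {
public:
    explicit User(MeshT* mesh) : mesh_(mesh) {}
    double evalFunction() const {
        double sum = 0.0;
        for (int c = 0; c != mesh_->num_cells(); ++c)
            sum += mesh_->cell_centroid(c).x; // placeholder for func(...)
        return sum;
    }
private:
    MeshT* mesh_;
};

// The concrete type is resolved once, at the boundary, rather than per virtual call.
double run(Mesh* mesh) {
    if (auto* a = dynamic_cast<MeshA*>(mesh)) return User<MeshA>(a).evalFunction();
    if (auto* b = dynamic_cast<MeshB*>(mesh)) return User<MeshB>(b).evalFunction();
    return 0.0;
}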
Idea 3: It seems to me that one ought to be able to resolve, at the start of the run, that the Mesh pointer User gets is actually a MeshA or MeshB, and not need to do the virtual table lookups and regain the inlined A or B implementation. I don't know of an elegant way of doing this that wouldn't basically be worse than Idea 1, i.e. a bunch of duplicated code in User with case/switch. But if there were an elegant way of doing this, it would be my first choice.
Any thoughts on a good choice, a better idea, or other comments on runtime polymorphism of a high level class without virtual low-level methods would be appreciated!
This assumes I understood you correctly: mesh_ always points to either a MeshA or a MeshB for the whole run, never a mix of them.
// typical user class
class User {
public:
    User(Mesh* mesh) : mesh_(mesh) {}

    template<class dType>
    void evalFunction() {
        dType* myMesh = dynamic_cast<dType*>(mesh_);
        for (int c = 0; c != myMesh->num_cells(); ++c) {
            double result = func(myMesh->cell_centroid(c));
            ...
        }
    }

    void evalFunction() {
        if (dynamic_cast<MeshA*>(mesh_))
            evalFunction<MeshA>();
        else if (dynamic_cast<MeshB*>(mesh_))
            evalFunction<MeshB>();
    }
};
The non-template evalFunction chooses either the MeshA or the MeshB instantiation.
Alternatively
class User {
public:
    User(Mesh* mesh) : mesh_(mesh) {}

    template<class dType>
    void evalFunction(dType* myMesh) {
        for (int c = 0; c != myMesh->num_cells(); ++c) {
            double result = func(myMesh->cell_centroid(c));
            ...
        }
    }

    void evalFunction() {
        MeshA* meshA = dynamic_cast<MeshA*>(mesh_);
        if (meshA)
            evalFunction<MeshA>(meshA);
        MeshB* meshB = dynamic_cast<MeshB*>(mesh_);
        if (meshB)
            evalFunction<MeshB>(meshB);
    }
};

Using the strategy pattern if the concrete strategy depends on the concrete parameter type

I'm currently working with a System/Data hierarchy implemented like this:
class SystemData
{
public:
    virtual ~SystemData() = default; // polymorphic, so it can be dynamic_cast
};
class SystemDataA : public SystemData
{
    int x;
};
class SystemDataB : public SystemData
{
    float y;
};
class System
{
public:
    virtual SystemData* getData() = 0;
    virtual Result computeData(SystemData*) = 0;
};
class SystemA : public System
{
public:
    // really returns SystemDataA
    SystemData* getData() override;
    Result computeData(SystemData*) override;
};
class SystemB : public System
{
public:
    // really returns SystemDataB
    SystemData* getData() override;
    Result computeData(SystemData*) override;
};
In the end there is a controller class which does something similar to this:
void foo()
{
    for (auto& s : systemVec)
    {
        SystemData* data = s->getData();
        FinalResult final = s->computeData(data);
    }
}
Each specific system then dynamic_casts the data back to the concrete type it is able to process. This seems like pretty bad design and I'd like to refactor it into something more reasonable. My first idea was to just implement the computation algorithm inside the SystemData classes and then just do:
SystemData* data = s->getData();
FinalResult final = data->compute();
but does it really belong there?
It appears more intuitive to have a separate algorithm hierarchy, probably implemented with the strategy pattern. However, I then again have the problem of losing the runtime type info of the data, because all algorithms get passed the abstract data type and in the end will have to dynamic_cast and do nullptr and error checks again. So is it still better to implement the algorithm inside the data classes themselves? Can I maybe still implement the hierarchy in a separate module and just add function pointers or a similar construct to the data class? Is there a completely different solution I'm not aware of?

Dealing with functions in a class that should be broken down into functions for clarity?

How is this situation usually dealt with? For example, an object may need to do very specific things:
class Human
{
public:
    void eat(Food food);
    void drink(Liquid liquid);
    String talkTo(Human human);
};
Say that this is what this class is supposed to do, but to actually do these might result in functions that are well over 10,000 lines. So you would break them down. The problem is, many of those helper functions should not be called by anything other than the function they are serving. This makes the code confusing in a way. For example, chew(Food food); would be called by eat() but should not be called by a user of the class and probably should not be called anywhere else.
How are these cases dealt with generally. I was looking at some classes from a real video game that looked like this:
class CHeli (7 variables, 19 functions)
Variables list
CatalinaHasBeenShotDown
CatalinaHeliOn
NumScriptHelis
NumRandomHelis
TestForNewRandomHelisTimer
ScriptHeliOn
pHelis
Functions list
FindPointerToCatalinasHeli (void)
GenerateHeli (b)
CatalinaTakeOff (void)
ActivateHeli (b)
MakeCatalinaHeliFlyAway (void)
HasCatalinaBeenShotDown (void)
InitHelis (void)
UpdateHelis (void)
TestRocketCollision (P7CVector)
TestBulletCollision (P7CVectorP7CVectorP7CVector)
SpecialHeliPreRender (void)
SpawnFlyingComponent (i)
StartCatalinaFlyBy (void)
RemoveCatalinaHeli (void)
Render (void)
SetModelIndex (Ui)
PreRenderAlways (void)
ProcessControl (void)
PreRender (void)
All of these look like fairly high-level functions, which means their source code must be pretty lengthy. What is good about this is that at a glance it is very clear what this class can do, and the class looks easy to use. However, the code for these functions might be quite large.
What should a programmer do in these cases? What is proper practice for these types of situations?
For example, chew(Food food); would be called by eat() but should not be called by a user of the class and probably should not be called anywhere else.
Then either make chew a private or protected member function, or a freestanding function in an anonymous namespace inside the eat implementation module:
// eat.cc
// details of digestion
namespace {
void chew(Human &subject, Food &food)
{
while (!food.mushy())
subject.move_jaws();
}
}
void Human::eat(Food &food)
{
chew(*this, food);
swallow(*this, food);
}
The benefit of this approach compared to private member functions is that the implementation of eat can be changed without the header changing (which would require recompilation of client code). The drawback is that the function cannot be called by any function outside of its module, so it can't be shared by multiple member functions unless they share an implementation file, and it can't access private parts of the class directly.
The drawback compared to protected member functions is that derived classes can't call chew directly.
The implementation of one member function is allowed to be split in whatever way you want.
A popular option is to use private member functions:
struct Human
{
void eat();
private:
void chew(...);
void eat_spinach();
...
};
or to use the Pimpl idiom:
struct Human
{
void eat();
private:
struct impl;
std::unique_ptr<impl> p_impl;
};
struct Human::impl { ... };
However, as soon as the complexity of eat goes up, you surely don't want a collection of private methods accumulating (be it inside a Pimpl class or inside a private section).
So you want to break down the behavior. You can use classes:
struct SpinachEater
{
void eat_spinach();
private:
// Helpers for eating spinach
};
...
void Human::eat(Aliment* e)
{
if (e->isSpinach()) // Use your favorite dispatch method here
// Factories, or some sort of polymorphism
// are possible ideas.
{
SpinachEater eater;
eater.eat_spinach();
}
...
}
with the basic principles:
Keep it simple
One class one responsibility
Never duplicate code
Edit: A slightly better illustration, showing a possible split into classes:
struct Aliment;
struct Human
{
    void eat(Aliment* e);
private:
    void process(Aliment* e);
    void chew();
    void swallow();
    void throw_up();
};
// Everything below is in an implementation file (it needs <iostream> and <memory>).
// As the code grows, it can of course be split into several
// implementation files.
struct AlimentProcessor
{
    virtual ~AlimentProcessor() {}
    virtual void process() {}
};
struct VegetableProcessor : AlimentProcessor
{
private:
    virtual void process() { std::cout << "Eeek\n"; }
};
struct MeatProcessor : AlimentProcessor
{
private:
    virtual void process() { std::cout << "Hmmm\n"; }
};
// Use your favorite dispatch method here.
// There are many ways to escape the use of dynamic_cast,
// especially if the number of aliments is expected to grow.
// (Vegetable and Meat are assumed to be classes derived from Aliment.)
std::unique_ptr<AlimentProcessor> Factory(Aliment* e)
{
    typedef std::unique_ptr<AlimentProcessor> Handle;
    if (dynamic_cast<Vegetable*>(e))
        return Handle(new VegetableProcessor);
    else if (dynamic_cast<Meat*>(e))
        return Handle(new MeatProcessor);
    else
        return Handle(new AlimentProcessor);
}
void Human::eat(Aliment* e)
{
    this->process(e);
    this->chew();
    if (e->isGood()) this->swallow();
    else this->throw_up();
}
void Human::process(Aliment* e)
{
    Factory(e)->process();
}
One possibility is to (perhaps privately) compose the Human of smaller objects that each do a smaller part of the work. So, you might have a Stomach object. Human::eat(Food food) would delegate to this->stomach.digest(food), returning a DigestedFood object, which the Human::eat(Food food) function then processes further.
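Roughly, a sketch of that composition (all of the types and members here are illustrative):

struct Food {};
struct DigestedFood {};

class Stomach {
public:
    DigestedFood digest(Food food) { return DigestedFood{}; }
};

class Human {
public:
    void eat(Food food) {
        DigestedFood digested = stomach.digest(food); // delegate the detailed work
        // ... process the DigestedFood further (absorb nutrients, etc.) ...
    }
private:
    Stomach stomach; // privately composed helper object
};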
Function decomposition is something that is learnt from experience, and it usually implies type decomposition at the same time. If your functions become too large there are different things that can be done, which is best for a particular case depends on the problem at hand.
separate functionality into private functions
This makes more sense when the functions have to access quite a bit of state from the object, and if they can be used as building blocks for one or more of the public functions
decompose the class into different subclasses that have different responsibilities
In some cases a part of the work falls naturally into its own little subproblem, then the higher level functions can be implemented in terms of calls to the internal subobjects (usually members of the type).
Because the domain that you are trying to model can be interpreted in quite a number of different ways, I hesitate to propose a specific breakdown, but you could imagine that you had a mouth subobject in Human that you could use to ingest food or drink. Inside the mouth subobject you could have functions open, chew, swallow...

calling a function from a set of overloads depending on the dynamic type of an object

I feel like the answer to this question is really simple, but I really am having trouble finding it. So here goes:
Suppose you have the following classes:
class Base { /* ... */ };
class Child : public Base { /* ... */ };
class Displayer
{
public:
    Displayer(Base* element);
    Displayer(Child* element);
};
Additionally, I have a Base* object which might point to either an instance of the class Base or an instance of the class Child.
Now I want to create a Displayer based on the element pointed to by object; however, I want to pick the right version of the constructor. As I currently have it, this would accomplish just that (I am being a bit fuzzy with my C++ here, but I think this is the clearest way):
object->createDisplayer();
// createDisplayer() is declared virtual in Base and overridden in Child:
void Base::createDisplayer()
{
    new Displayer(this);
}
void Child::createDisplayer()
{
    new Displayer(this);
}
This works, however, there is a problem with this:
Base and Child are part of the application system, while Displayer is part of the GUI system. I want to build the GUI system independently of the Application system, so that it is easy to replace the GUI. This means that Base and Child should not know about Displayer. However, I do not know how I can achieve this without letting the Application classes know about the GUI.
Am I missing something very obvious or am I trying something that is not possible?
Edit: I missed a part of the problem in my original question. This is all happening quite deep in the GUI code, providing functionality that is unique to this one GUI. This means that I want the Base and Child classes not to know about the call at all - not just hide from them what the call is.
It seems a classic scenario for double dispatch. The only way to avoid the double dispatch is switching over types (if (typeid(*object) == typeid(Base)) ...), which you should avoid.
What you can do is to make the callback mechanism generic, so that the application doesn't have to know of the GUI:
class Base;
class Child;

class app_callback {
public:
    // sprinkle const where appropriate...
    virtual void call(Base&) = 0;
    virtual void call(Child&) = 0;
};
class Base {
public:
    virtual void call_me_back(app_callback& cb) { cb.call(*this); }
};
class Child : public Base {
public:
    virtual void call_me_back(app_callback& cb) { cb.call(*this); }
};
You could then use this machinery like this:
class display_callback : public app_callback {
public:
    // sprinkle const where appropriate...
    virtual void call(Base& obj) { displayer = new Displayer(&obj); }
    virtual void call(Child& obj) { displayer = new Displayer(&obj); }
    Displayer* displayer;
};
Displayer* create_displayer(Base& obj)
{
    display_callback dcb;
    obj.call_me_back(dcb);
    return dcb.displayer;
}
You will have to have one app_callback::call() function for each class in the hierarchy and you will have to add one to each callback every time you add a class to the hierarchy.
Since in your case calling with just a Base& is possible, too, the compiler won't throw an error when you forget to overload one of these functions in a callback class. It will simply call the one taking a Base&. That's bad.
If you want, you could move the identical code of call_me_back() for each class into a privately inherited class template using the CRTP. But if you just have half a dozen classes it doesn't really add all that much clarity and it requires readers to understand the CRTP.
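For reference, one possible shape of that CRTP helper; this is my own sketch of the idea, restating the classes so it stands alone, not code from the answer above:

class Base;
class Child;

class app_callback {
public:
    virtual ~app_callback() = default;
    virtual void call(Base&) = 0;
    virtual void call(Child&) = 0;
};

class Base {
public:
    virtual ~Base() = default;
    virtual void call_me_back(app_callback& cb) { cb.call(*this); }
};

// CRTP mixin: Derived is the class being defined, BaseT the class it extends.
// It writes the dispatching override once for every class in the hierarchy.
template <class Derived, class BaseT>
class dispatching : public BaseT {
public:
    void call_me_back(app_callback& cb) override {
        cb.call(static_cast<Derived&>(*this));
    }
};

class Child : public dispatching<Child, Base> {
    // no hand-written call_me_back needed here
};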
Have the application set a factory interface on the system code. Here's a hacked up way to do this. Obviously, apply this changes to your own preferences and coding standards. In some places, I'm inlining the functions in the class declaration - only for brevity.
// PLATFORM CODE
// platformcode.h - BEGIN
class Base;
class Child;
class IDisplayer;
class IDisplayFactory
{
public:
    virtual ~IDisplayFactory() = default;
    virtual IDisplayer* CreateDisplayer(Base* pBase) = 0;
    virtual IDisplayer* CreateDisplayer(Child* pChild) = 0;
};
namespace SystemDisplayerFactory
{
    static IDisplayFactory* s_pFactory = nullptr;
    inline void SetFactory(IDisplayFactory* pFactory)
    {
        s_pFactory = pFactory;
    }
    inline IDisplayFactory* GetFactory()
    {
        return s_pFactory;
    }
}
// platformcode.h - end
// Base.cpp and Child.cpp implement the "CreateDisplayer" methods as follows
void Base::CreateDisplayer()
{
    IDisplayer* pDisplayer = SystemDisplayerFactory::GetFactory()->CreateDisplayer(this);
}
void Child::CreateDisplayer()
{
    IDisplayer* pDisplayer = SystemDisplayerFactory::GetFactory()->CreateDisplayer(this);
}
// In your application code, do this:
#include "platformcode.h"
class CDisplayerFactory : public IDisplayFactory
{
public:
    IDisplayer* CreateDisplayer(Base* pBase) override
    {
        return new Displayer(pBase);
    }
    IDisplayer* CreateDisplayer(Child* pChild) override
    {
        return new Displayer(pChild);
    }
};
Then somewhere early in app initialization (main or WinMain), say the following:
CDisplayerFactory* pFactory = new CDisplayerFactory();
SystemDisplayerFactory::SetFactory(pFactory);
This will keep your platform code from having to know the messy details of what a "displayer" is, and you can implement mock versions of IDisplayer later to test Base and Child independently of the rendering system.
Also, IDisplayer (methods not shown) becomes an interface declaration exposed by the platform code. Your implementation of "Displayer" is a class (in your app code) that inherits from IDisplayer.