Sorry for the vague title; I couldn't think of a better one.
I have a class hierarchy like the following:
class Simulator
{
public:
virtual void simulate(unsigned int num_steps);
};
class SpecializedSimulator1 : public Simulator
{
Heap state1; Tree state2; // whatever
public:
double speed() const;
void simulate(unsigned int num_steps) override;
};
class SpecializedSimulator2 : public Simulator
{
Stack state1; Graph state2; // whatever
public:
double step_size() const;
void simulate(unsigned int num_steps) override;
};
class SpecializedSubSimulator2 : public SpecializedSimulator2
{
// more state...
public:
// more parameters...
void simulate(unsigned int num_steps) override;
};
class Component
{
public:
virtual void receive(int port, string data);
virtual void react(Simulator &sim);
};
So far, so good.
Now it gets more complicated.
Components can support one or more types of simulation. (For example, a component that negates its input may support Boolean circuits as well as continuous-time simulation.) Every component "knows" what kinds of simulations it supports, and given a particular kind of simulator, it queries the simulator (via dynamic_cast or double dispatch or whatever means are appropriate) to find out how it needs to react.
Here's where it gets tricky:
Some Components (imagine a SimulatorComponent class) themselves need to run sub-simulations inside of them. Part of this involves inheriting some properties of the outer simulation, but potentially changing a few of them. For example, a continuous-time sub-simulator might want to lower its step size for its internal components in order to get better accuracy, but otherwise keep everything else the same.
Ideally, SimulatorComponent would be able to inherit from a class (say, SpecializedSimulator2) and override some subset of its properties as desired. The trouble, though, is that it has no idea whether the outer simulator's most-derived type is a SpecializedSimulator2 -- it may very well be the case that SimulatorComponent is running inside a more specialized simulator than that, like a SpecializedSubSimulator2! In that case, sub-components of SimulatorComponent would need to be able to somehow get access to the properties of SpecializedSubSimulator2 that they might need to access, but SimulatorComponent itself would not (and should not) be aware of these properties.
So, we see we can't use inheritance here.
Since the only means in C++ for "discovering" sub-interfaces like this is dynamic_cast, that means the sub-components must be able to directly access the outer simulator themselves, in order to run dynamic_cast on them. But if they do this, then SimulatorComponent can't intercept any of the calls.
At this point, I'm not sure what to do. The problem isn't impossible to solve, obviously -- I can think of some solutions (e.g. a hierarchical key/value dictionary maintained at run-time) -- but the solutions involve some massive tradeoffs (e.g. less compile-time checking, performance loss, etc.) and make me wonder what I should be doing.
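For what it's worth, here is a rough sketch of the kind of run-time property dictionary alluded to above, just to make the trade-off concrete; PropertyMap and the string keys are invented for this illustration, and this is exactly where the compile-time checking goes away:

#include <map>
#include <optional>
#include <string>

// Hypothetical run-time property store: each nested simulator keeps its own
// overrides and falls back to the enclosing simulator's map on a miss.
class PropertyMap
{
    const PropertyMap* m_parent = nullptr;
    std::map<std::string, double> m_values;
public:
    explicit PropertyMap(const PropertyMap* parent = nullptr) : m_parent(parent) {}

    void set(const std::string& key, double value) { m_values[key] = value; }

    std::optional<double> get(const std::string& key) const
    {
        auto it = m_values.find(key);
        if (it != m_values.end())
            return it->second;
        if (m_parent)
            return m_parent->get(key);   // inherit from the outer simulation
        return std::nullopt;             // mistyped keys only fail at run time
    }
};

A SimulatorComponent would create a child map, override only "step_size", and pass the child map to its sub-components; every other lookup falls through to the outer simulator's values. The price is the string lookups and the loss of static typing mentioned above.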
So, basically: how should I approach this problem? Is there a flaw in my design? Should I be solving this problem differently? Is there a design pattern for this that I'm just not aware of? Any tips?
I'll try to give some partial advice. For the situation where you need a simulator that inherits properties from some parent, a cloning function could be the solution. That way you can ignore what the original simulator actually was, and still end up with a new one with the same properties.
It may only require some basic properties (like the simulation time step), which means you need to dynamic_cast to some intermediate class in your simulator hierarchy, rather than pinpoint exactly the right one.
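A minimal sketch of that suggestion, assuming a virtual clone function is added to the hierarchy from the question (set_step_size and make_sub_simulator are invented here, and the state members are omitted):

#include <memory>

class Simulator
{
public:
    virtual ~Simulator() = default;
    virtual void simulate(unsigned int num_steps) = 0;
    // Each concrete simulator copies itself, preserving all of its own state.
    virtual std::unique_ptr<Simulator> clone() const = 0;
};

class SpecializedSimulator2 : public Simulator
{
    double m_step_size = 0.1;
public:
    double step_size() const { return m_step_size; }
    void set_step_size(double s) { m_step_size = s; }
    void simulate(unsigned int) override { /* ... */ }
    std::unique_ptr<Simulator> clone() const override
    {
        return std::make_unique<SpecializedSimulator2>(*this);
    }
};

// Inside a SimulatorComponent: clone the outer simulator (whatever its most-derived
// type is, since every subclass overrides clone), then adjust only the properties
// this component knows about via a dynamic_cast to an intermediate class.
std::unique_ptr<Simulator> make_sub_simulator(const Simulator& outer)
{
    auto sub = outer.clone();
    if (auto* ct = dynamic_cast<SpecializedSimulator2*>(sub.get()))
        ct->set_step_size(ct->step_size() / 10);   // e.g. refine accuracy locally
    return sub;
}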
I have a collection of classes and functions which can interact with one another in rich and complex manners. Now I am devising an architecture for the top-level layer coordinating the interaction of these objects; if this were a word processor (it is not), I am now working on the Document class.
How do you implement the top-level layer of your system?
These are some important requirements:
Stand-alone: this is the one thing that can stand on its own
Serializable: it can be stored into a file and restored from a file
Extensible: I anticipate adding new functionality to the system
These are the options I have considered:
The GOF Mediator Pattern used to define an object that encapsulates how a set of objects interact [...] promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.
The problem I see with mediator is that I would need to subclass every object from a base class capable of communicating with the Mediator. For example:
class Mediator;
class Colleague {
public:
Colleague(Mediator*);
virtual ~Colleague() = default;
virtual void Changed() {
m_mediator->ColleagueChanged(this);
}
private:
Mediator* m_mediator;
};
This alone makes me walk away from Mediator.
The brute-force blob class, where I simply define one object together with all the methods I need on that data.
class ApplicationBlob {
public:
ApplicationBlob() { }
void SaveTo(const char*);
static ApplicationBlob ReadFrom(const char*);
void DoFoo();
void DoBar();
// other application methods
private:
ClassOne m_cone;
ClassTwo m_ctwo;
ClassThree m_cthree;
std::vector<ClassFour> m_cfours;
std::map<ClassFive, ClassSix> m_cfive_to_csix_map;
// other application variables
};
I am afraid of the Blob class because it seems that every time I need to add behaviour I will need to tag along more and more crap into it. But it may be good enough! I may be over-thinking this.
The complete separation of data and methods, where I isolate the state in a struct-like object (mostly public data) and add new functions taking a reference to such a struct-like object. For example:
struct ApplicationBlob {
ClassOne cone;
ClassTwo ctwo;
ClassThree cthree;
std::vector<ClassFour> cfours;
std::map<ClassFive, ClassSix> cfive_to_csix_map;
};
ApplicationBlob Read(const char*);
void Save(const ApplicationBlob&);
void Foo(const ApplicationBlob&);
void Bar(ApplicationBlob&);
While this approach looks exactly like the blob class defined above, it allows me to physically separate responsibilities without having to recompile the entire thing every time I add something. It is along the lines (not exactly, but in the same vein) of what Herb Sutter suggests with regards to preferring non-member non-friend functions (of course, everyone is a friend of a struct!).
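To make the recompilation claim concrete, here is a sketch of how the physical separation could look; the file names are just for illustration:

// application_blob.h -- data only; changes rarely
struct ApplicationBlob
{
    ClassOne cone;
    ClassTwo ctwo;
    // ...
};

// foo.h / foo.cpp -- one responsibility per translation unit;
// adding a new operation later means adding a new pair of files,
// not touching the struct header
void Foo(const ApplicationBlob& blob);

// bar.h / bar.cpp
void Bar(ApplicationBlob& blob);

Only callers of a changed function need to recompile; the struct header stays stable.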
I am stumped --- I don't want a monolith class, but I feel that at some point or another I need to bring everything together (the whole state of a complex system) and I cannot think of the best way to do it.
Please advise from your own experience (i.e., please tell me how you do it in your application), literature references, or open source projects from which I can take some inspiration.
I have a collection of objects which represents a model of a system. Each of these objects derives from a base class which represents the abstract "component". I would like to be able to look at the system and choose certain behaviours based on what components are present and in what order.
For the sake of argument, let's call the base class Component and the actual components InputFilter, OutputFilter and Processor. Systems that we can deal with are ones with a Processor and one or both filters. The actual system has more types and more complex interaction between them, but I think this will do for now.
I can see two "simple" ways to handle this situation with a marshalComponentSettings() function which takes one of the collections and works out how to most efficiently set up each node. This may require modifying inputs in certain ways or splitting them up differently, so it's not quite as simple as just implementing a virtual handleSettings() function per component.
The first is to report an enumerated type from each class using a pure virtual function and use those to work out what to do, dynamic_cast'ing where needed to access component-specific options.
enum CompType {
INPUT_FILTER,
OUTPUT_FILTER,
PROCESSOR
};
void marshal(Settings& stg)
{
if (comps[0]->type() == INPUT_FILTER)
setUpInputFilter(stg); //maybe modified the stg, or provides other feedback of what was done
// something similar for outputs
setUpProcessor(stg);
}
The second is to dynamic_cast to anything that might be an option in this function and use the success of that or not (as well as maybe the cast object if needed) to determine what to do.
void marshal(Settings& stg)
{
if (InputFilter* filter = dynamic_cast<InputFilter*>(comps[0]))
setUpInputFilter(stg); //maybe modified the stg, or provides other feedback of what was done
// something similar for outputs
setUpProcessor(stg);
}
It seems that the first is the most efficient way (don't need to speculatively test each object to find out what it is), but even that doesn't quite seem right (maybe due to the annoying details of how those devices affect each other leaking into the general marshaling code).
Is there a more elegant way to handle this situation than a nest of conditionals determining behaviour? Or even a name for the situation or pattern?
Your scenario seems an ideal candidate for the visitor design pattern, with the following roles (see UML schema in the link):
objectStructure: your model, aka collection of Component
element: your Component base class
concreteElementX: your actual components (InputFilter, OutputFilter, Processor, ...)
visitor: the abstract family of algorithms that has to manage your model as a consistent set of elements.
concreteVisitorA: your configuration process.
Main advantages:
Your configuration/set-up corresponds to the design pattern's intent: an operation to be performed on the elements of an object structure. Moreover, this pattern allows you to take into consideration the order and kind of elements encountered during the traversal, as visitors can be stateful.
One positive side effect is that the visitor pattern will give your design the flexibility to easily add new processes/algorithms with similar traversals but a different purpose (for example: pricing of the system, material planning, etc.).
class Visitor;
class Component {
public:
virtual void accept(class Visitor &v) = 0;
};
class InputFilter: public Component {
public:
void accept(Visitor &v) override; // calls the right visitor function
};
...
class Visitor
{
public:
virtual void visit(InputFilter *c) = 0; // one virtual function for each derived component.
virtual void visit(Processor *c) = 0;
...
};
void InputFilter::accept(Visitor &v)
{ v.visit(this); }
...
class SetUp : public Visitor {
private:
bool hasProcessor;
int impedanceFilter;
int circuitResistance;
public:
void visit(InputFilter *c) override;
void visit(Processor *c) override;
...
};
Challenge:
The main challenge you'll have with the visitor, as with the other alternatives, is that the set-up can change the configuration itself (replacing components, changing their order), so you have to take care to keep a consistent iterator on the container while making sure not to process items several times.
The best approach depends on the type of the container, and on the kind of changes that your setup is doing. But you'll certainly need some flags to see which element was already processed, or a temporary container (either elements processed or elements remaining to be processed).
In any case, as the visitor is a class, it can also encapsulate any such state data in private members.
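For illustration, the traversal could look roughly like this, assuming the model is held as a container of Component pointers and SetUp is given a constructor taking the Settings (both are assumptions, not part of the answer above); iterating over a snapshot copy is one simple way to tolerate the set-up modifying the real container:

#include <vector>

void configure(std::vector<Component*>& model, Settings& stg)
{
    SetUp setup(stg);                          // stateful visitor
    std::vector<Component*> snapshot(model);   // iterate a copy in case the set-up edits 'model'
    for (Component* c : snapshot)
        c->accept(setup);                      // double dispatch to the right visit() overload
}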
(Refer to Update #1 for a concise version of the question.)
We have an (abstract) class named Games that has subclasses, say BasketBall and Hockey (and probably many more to come later).
Another class, GameSchedule, must contain a collection GamesCollection of various Games objects. The issue is that we would, at times, like to iterate only through the BasketBall objects of GamesCollection and call functions that are specific to it (and not mentioned in the Games class).
That is, GameSchedule deals with a number of objects that broadly belong to Games class, in the sense that they do have common functions that are being accessed; at the same time, there is more granularity at which they are to be handled.
We would like to come up with a design that avoids unsafe downcasting, and is extensible in the sense that creating many subclasses under Games or any of its existing subclasses must not necessitate the addition of too much code to handle this requirement.
Examples:
A clumsy solution that I came up with, which doesn't do any downcasting at all, is to have dummy functions in the Games class for every subclass-specific function that has to be called from GameSchedule. These dummy functions will have an overriding implementation in the appropriate subclasses which actually require the implementation.
We could explicitly maintain different containers for various subclasses of Games instead of a single container. But this would require a lot of extra code in GameSchedule, when the number of subclasses grow. Especially if we need to iterate through all the Games objects.
Is there a neat way of doing this?
Note: the code is written in C++
Update #1: I realized that the question can be put in a much simpler way. Is it possible to have a container class for any object belonging to a hierarchy of classes? Moreover, this container class must have the ability to pick elements belonging to (or deriving from) a particular class from the hierarchy and return an appropriate list.
In the context of the above problem, the container class must have functions like GetCricketGames, GetTestCricketGames, GetBaseballGame, etc.
This is exactly one of the problems that the "Tell, Don't Ask" principle was created for.
You're describing an object that holds onto references to other objects, and wants to ask them what type of object they are before telling them what they need to do. From the article linked above:
The problem is that, as the caller, you should not be making decisions based on the state of the called object that result in you then changing the state of the object. The logic you are implementing is probably the called object’s responsibility, not yours. For you to make decisions outside the object violates its encapsulation.
If you break the rules of encapsulation, you not only introduce the runtime risks incurred by rampant downcasts, but also make your system significantly less maintainable by making it easier for components to become tightly coupled.
Now that that's out there, let's look at how the "Tell, Don't Ask" could be applied to your design problem.
Let's go through your stated constraints (in no particular order):
GameSchedule needs to iterate over all games, performing general operations
GameSchedule needs to iterate over a subset of all games (e.g., Basketball), to perform type-specific operations
No downcasts
Must easily accommodate new Game subclasses
The first step to following the "Tell, Don't Ask" principle is identifying the actions that will take place in the system. This lets us take a step back and evaluate what the system should be doing, without getting bogged down into the details of how it should be doing it.
You made the following comment in #MarkB's answer:
If there's a TestCricket class inheriting from Cricket, and it has many specific attributes concerning the timings of the various innings of the match, and we would like to initialize the values of all TestCricket objects' timing attributes to some preset value, I need a loop that picks all TestCricket objects and calls some function like setInningTimings(int inning_index, Time_Object t)
In this case, the action is: "Initialize the inning timings of all TestCricket games to a preset value."
This is problematic, because the code that wants to perform this initialization is unable to differentiate between TestCricket games, and other games (e.g., Basketball). But maybe it doesn't need to...
Most games have some element of time: Basketball games have time-limited periods, while Baseball games have innings with (basically) unlimited time. Each type of game could have its own completely unique configuration. This is not something we want to offload onto a single class.
Instead of asking each game what type of Game it is, and then telling it how to initialize, consider how things would work if the GameSchedule simply told each Game object to initialize. This delegates the responsibility of the initialization to the subclass of Game - the class with literally the most knowledge of what type of game it is.
This can feel really weird at first, because the GameSchedule object is relinquishing control to another object. This is an example of the Hollywood Principle. It's a completely different way of solving problems than the approach most developers initially learn.
This approach deals with the constraints in the following ways:
GameSchedule can iterate over a list of Games without any problem
GameSchedule no longer needs to know the subtypes of its Games
No downcasting is necessary, because the subclasses themselves are handling the subclass-specific logic
When a new subclass is added, no logic needs to be changed anywhere - the subclass itself implements the necessary details (e.g., an InitializeTiming() method).
Edit: Here's an example, as a proof-of-concept.
struct Game
{
std::string m_name;
Game(std::string name)
: m_name(name)
{
}
virtual void Start() = 0;
virtual void InitializeTiming() = 0;
};
// A class to demonstrate a collaborating object
struct PeriodLengthProvider
{
int GetPeriodLength();
};
struct Basketball : Game
{
int m_period_length;
PeriodLengthProvider* m_period_length_provider;
Basketball(PeriodLengthProvider* period_length_provider)
: Game("Basketball")
, m_period_length_provider(period_length_provider)
{
}
void Start() override;
void InitializeTiming() override
{
m_period_length = m_period_length_provider->GetPeriodLength();
}
};
struct Baseball : Game
{
int m_number_of_innings;
Baseball() : Game("Baseball") { }
void Start() override;
void InitializeTiming() override
{
m_number_of_innings = 9;
}
};
struct GameSchedule
{
std::vector<Game*> m_games;
GameSchedule(std::vector<Game*> games)
: m_games(games)
{
}
void StartGames()
{
for(auto& game : m_games)
{
game->InitializeTiming();
game->Start();
}
}
};
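And a short usage sketch of the proof-of-concept above, assuming the declaration-only methods (Start(), GetPeriodLength()) are given bodies; ownership is glossed over, and raw pointers are kept only to match the example:

int main()
{
    PeriodLengthProvider period_lengths;        // collaborator from the example
    Basketball basketball(&period_lengths);
    Baseball baseball;

    GameSchedule schedule({ &basketball, &baseball });
    schedule.StartGames();   // each game initializes its own timing; no downcasts anywhere
}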
You've already identified the first two options that came to my mind: Make the base class have the methods in question, or maintain separate containers for each game type.
The fact that you don't feel these are appropriate leads me to believe that the "abstract" interface you provide in the Game base class may be far too concrete. I suspect that what you need to do is step back and look at the base interface.
You haven't given any concrete example to help, so I'm going to make one up. Let's say your basketball class has a NextQuarter method and hockey has NextPeriod. Instead, add to the base class a NextGameSegment method, or something that abstracts away the game-specific details. All the game-specific implementation details should be hidden in the child class with only a game-general interface needed by the schedule class.
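A small sketch of that idea; NextGameSegment is the made-up general operation, and the sport-specific calls stay hidden inside the subclasses:

class Game
{
public:
    virtual ~Game() = default;
    virtual void NextGameSegment() = 0;   // the only operation the schedule needs
};

class Basketball : public Game
{
    void NextQuarter();                   // game-specific detail, kept private
public:
    void NextGameSegment() override { NextQuarter(); }
};

class Hockey : public Game
{
    void NextPeriod();
public:
    void NextGameSegment() override { NextPeriod(); }
};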
C# supports reflection, and by using the "is" keyword or the GetType() member function you could do this easily. If you are writing your code in unmanaged C++, I think the best way is to add a GetType() method to your base class (Games?) which returns an enum listing all the classes that derive from it (so you would have to create that enum too). That way you can safely determine the type you are dealing with through the base type alone. Below is an example:
enum class GameTypes { Game, Basketball, Football, Hockey };
class Game
{
public:
virtual GameTypes GetType() { return GameTypes::Game; }
};
class BasketBall : public Game
{
public:
GameTypes GetType() override { return GameTypes::Basketball; }
};
and you do this for the remaining games (e.g. Football, Hockey). Then you keep a container of Game objects only. As you get the Game object, you call its GetType() method and effectively determine its type.
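Usage would then look something like the following; SetupCourt() is a hypothetical subclass-specific call, and the static_cast is only safe because GetType() is trusted to stay in sync with the actual dynamic type:

void Process(Game& game)
{
    switch (game.GetType())
    {
    case GameTypes::Basketball:
        static_cast<BasketBall&>(game).SetupCourt();   // subclass-specific work
        break;
    case GameTypes::Hockey:
        // hockey-specific work
        break;
    default:
        break;   // generic Game handling
    }
}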
You're trying to have it all, and you can't do that. :) Either you need to do a downcast, or you'll need to utilize something like the visitor pattern that would then require you to do work every time you create a new implementation of Game. Or you can fundamentally redesign things to eliminate the need to pick the individual Basketballs out of a collection of Games.
And FWIW: downcasting may be ugly, but it's not unsafe as long as you use pointers and check for null:
for(Game* game : allGames)
{
Basketball* bball = dynamic_cast<Basketball*>(game);
if(bball != nullptr)
bball->SetupCourt();
}
I'd use the strategy pattern here.
Each game type has its own scheduling strategy, which derives from the common strategy used by your game schedule class; this decouples the specific game from the game schedule.
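A rough sketch of what that could look like; all of the class and member names here are invented for illustration:

#include <memory>
#include <utility>

class GameSchedule;   // forward declaration

class SchedulingStrategy
{
public:
    virtual ~SchedulingStrategy() = default;
    virtual void Schedule(GameSchedule& schedule) = 0;
};

class BasketballScheduling : public SchedulingStrategy
{
public:
    void Schedule(GameSchedule& schedule) override;   // quarter-based rules live here
};

class Game
{
    std::unique_ptr<SchedulingStrategy> m_scheduling;
public:
    explicit Game(std::unique_ptr<SchedulingStrategy> scheduling)
        : m_scheduling(std::move(scheduling)) {}

    // GameSchedule only ever talks to the abstract strategy.
    SchedulingStrategy& Scheduling() { return *m_scheduling; }
};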
I'm developing a people evacuation simulator. At start-up, it loads a tree-like building structure (floors/rooms/walls, etc.) from an XML file, loads the initial configuration of people, reads user-defined movement model parameters and starts the simulation. For example, for each person at each model step I need to find all geometry objects near them and select the shortest path to exit the building, as fast as possible.
So I need a clean, OOP way to represent the building in which people will move.
I'm stuck with this (just an example):
class Aperture
{
...
public:
virtual QRectF extent() = 0;
virtual QString description() const = 0;
// other common for all apertures methods
private:
int m_srcRoomID;
int m_dstRoomID;
// other generic aperture properties
};
class Window : public Aperture
{
...
public:
virtual QRectF extent() override;
virtual QString description() const override;
// other overrides
private:
EGlassType m_glassType;
bool m_bOpenable;
// other window-specific properties
};
// and other descendants: Door, CompositeDoor and so on
The problem: I want to store all building apertures in one collection as pointers to the abstract base Aperture, and use only the base virtual functions without any casting to the derived Apertures.
BUT: during evacuation, various Aperture properties may change, so I need to modify them in a type-dependent way. I can't make this functionality common and place it in the base class: a window has no closer, a fire door isn't afraid of fire - but other apertures are, and so on. Maybe there is a way (e.g. a design pattern) to dynamically store/add/remove these properties? Or should I just avoid subclassing altogether, since a complete hierarchy of building items gets very complicated and clunky?
You could add a virtual function to set the properties in the base class Aperture, like
virtual void update(vector) = 0; or
virtual void update(...) = 0;
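For instance, a minimal sketch; the EApertureEvent argument is made up here to stand in for whatever information the update actually needs, and the question's other Aperture members are omitted:

enum class EApertureEvent { Fire, PersonPassed, Close };

class Aperture
{
public:
    virtual ~Aperture() = default;
    // Each aperture reacts to simulation events in its own type-specific way.
    virtual void update(EApertureEvent ev) = 0;
};

class Window : public Aperture
{
    bool m_broken = false;
public:
    void update(EApertureEvent ev) override
    {
        if (ev == EApertureEvent::Fire)
            m_broken = true;   // glass breaks; a fire door would simply ignore this event
    }
};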
Well, the entity deciding to close the door needs to know that it can be closed and it should even consider changing that state. Here are some options:
Make the object decide for itself in a generic-ish virtual method (something like Aperture::update, Aperture::interactWithFire, Aperture::interactWithPerson or similar).
Use the Visitor pattern, i.e. create a virtual Aperture::visit that calls the appropriately overloaded method back on the visitor. That allows you to define interaction methods only for the combinations of objects for which they make sense.
Use "interfaces", say IClosable on anything that can be closed, like Door, and in the appropriate place do IClosable* closable = dynamic_cast<IClosable*>(aperture); if (closable) { /* consider whether it should be closed */ } (a brief sketch follows below). Actually you'd probably use such interfaces with the visitor pattern too, to reduce the number of necessary overloads.
If any of these, or any combination of them, works for you, the object-oriented approach is viable.
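A brief sketch of the interface option; IClosable comes from the list above, the rest of the names are invented, and only the question's pure virtual Aperture members are shown:

class IClosable
{
public:
    virtual ~IClosable() = default;
    virtual bool isOpen() const = 0;
    virtual void close() = 0;
};

// A door is both an Aperture and something that can be closed.
class Door : public Aperture, public IClosable
{
    bool m_open = true;
public:
    QRectF extent() override;
    QString description() const override;
    bool isOpen() const override { return m_open; }
    void close() override { m_open = false; }
};

void considerClosing(Aperture* aperture)
{
    if (auto* closable = dynamic_cast<IClosable*>(aperture))
        if (closable->isOpen())
            closable->close();   // only apertures that implement IClosable get here
}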
It's my flight simulation application again. I am leaving the prototyping phase now and starting to flesh out the software design. At least I'm trying to.
Each aircraft in the simulation has a flight plan associated with it, the exact nature of which is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. The aircraft model most of the time only needs read access to the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate that a way point has been reached. This will affect the iterator returned by the function ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference, which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this because I would like to enforce the usage rule described above at compile time.
Note that class WayPointIter is equivalent to a STL const iterator, that is the way point can not be mutated by the iterator.
class FlightPlan
{
public:
void AppendWayPoint(const WayPointIter& at, WayPoint new_wp);
void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp);
void RemoveWayPoint(WayPointIter at);
(...)
WayPointIter First() const;
WayPointIter Last() const;
WayPointIter Active() const;
void AdvanceActiveWayPoint() const;
(...)
};
My idea to overcome the issue is this: define an abstract interface class for each usage role and inherit FlightPlan from both. Each user then only gets passed a reference of the appropriate usage role.
class IFlightPlanActiveWayPoint
{
public:
virtual WayPointIter Active() const = 0;
virtual void AdvanceActiveWayPoint() const = 0;
};
class IFlightPlanEditable
{
public:
virtual void AppendWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
virtual void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
virtual void RemoveWayPoint(WayPointIter at) = 0;
(...)
};
Thus the declaration of FlightPlan would only need to be changed to:
class FlightPlan : public IFlightPlanActiveWayPoint, public IFlightPlanEditable
{
(...)
};
What do you think? Are there any caveats I might be missing? Is this design clear, or should I come up with something different for the sake of clarity?
Alternatively, I could also define a special ActiveWayPoint class which would contain the function AdvanceActiveWayPoint(), but I feel that this might be unnecessary.
Thanks in advance!
From a strict design point of view, your idea is quite good indeed. It is equivalent to having a single object and several different 'views' onto this object.
However, there is a scaling issue here (relevant to the implementation). What if you then have another object Foo that needs access to the flight plan; would you add an IFlightPlanFoo interface?
There is a risk that you will soon face an imbroglio in the inheritance.
The traditional approach is to create another object, a Proxy, and use this object to adapt/restrict/control the usage. It's a design pattern: Proxy
Here you would create:
class FlightPlanActiveWayPoint
{
public:
FlightPlanActiveWayPoint(FlightPlan& fp);
// forwarding
void foo() { mFp.foo(); }
private:
FlightPlan& mFp;
};
Give it the interface you planned for IFlightPlanActiveWayPoint, build it with a reference to an actual FlightPlan object, and forward the calls.
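Usage could then look like this, assuming the proxy forwards Active() and AdvanceActiveWayPoint() as described; Aircraft and its members are invented for the sketch:

class Aircraft
{
    FlightPlanActiveWayPoint m_waypoints;   // the aircraft only ever sees this narrow view
public:
    explicit Aircraft(FlightPlan& plan) : m_waypoints(plan) {}

    void OnWayPointReached()
    {
        m_waypoints.AdvanceActiveWayPoint();   // allowed by this view
        // m_waypoints.AppendWayPoint(...);    // not part of the proxy's interface, won't compile
    }
};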
There are several advantages to this approach:
Dependency: it's unnecessary to edit flightPlan.h each time you have a new requirement, thus unnecessary to rebuild the whole application
It's faster because there is no virtual call any longer, and the functions can be inlined (thus amounting to almost nothing). Though I would recommend not to inline them to begin with (so you can modify them without recompiling everything).
It's easy to add checks / logging etc without modifying the base class (in case you have a problem in a particular scenario)
My 2 cents.
I'm not sure about all the caveats ;-) but isn't the crew of the aircraft sometimes modifying the flight plan themselves in real life? E.g. if there is a bad storm ahead, or the destination airport is unavailable due to thick fog. In crisis situations, it is the right of the captain of the aircraft to make the final decision. Of course, you may decide not to include this in your model, but I thought it was worth mentioning.
An alternative to multiple inheritance could be composition, using a variation of the Pimpl idiom, in which the wrapper class would not expose the full interface of the internal class. As #Matthieu points out, this is also known as a variation of the Proxy design pattern.