Architecture of a director / executive / master / top-level application layer - C++

I have a collection of classes and functions which can interact with one another in rich and complex manners. Now I am devising an architecture for the top-level layer coordinating the interaction of these objects; if this were a word processor (it is not), I am now working on the Document class.
How do you implement the top-level layer of your system?
These are some important requirements:
Stand-alone: this is the one thing that can stand on its own
Serializable: it can be stored into a file and restored from a file
Extensible: I anticipate adding new functionality to the system
These are the options I have considered:
The GoF Mediator Pattern, used to "define an object that encapsulates how a set of objects interact [...] promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently."
The problem I see with mediator is that I would need to subclass every object from a base class capable of communicating with the Mediator. For example:
class Colleague;  // forward declaration

class Mediator {
public:
    virtual ~Mediator() = default;
    virtual void ColleagueChanged(Colleague*) = 0;  // hub through which all interaction flows
};

class Colleague {
public:
    explicit Colleague(Mediator* mediator) : m_mediator(mediator) {}
    virtual ~Colleague() = default;

    virtual void Changed() {
        m_mediator->ColleagueChanged(this);  // every state change is reported to the mediator
    }

private:
    Mediator* m_mediator;
};
This alone makes me walk away from Mediator.
The brute-force blob class, where I simply define one object with all the methods I need.
#include <map>
#include <vector>

class ApplicationBlob {
public:
    ApplicationBlob() {}

    void SaveTo(const char* filename) const;                 // serialize the whole state
    static ApplicationBlob ReadFrom(const char* filename);   // restore it

    void DoFoo();
    void DoBar();
    // other application methods

private:
    ClassOne m_cone;
    ClassTwo m_ctwo;
    ClassThree m_cthree;
    std::vector<ClassFour> m_cfours;
    std::map<ClassFive, ClassSix> m_cfive_to_csix_map;
    // other application variables
};
I am afraid of the Blob class because it seems that every time I need to add behaviour I will have to tack more and more crap onto it. But it may be good enough! I may be over-thinking this.
The complete separation of data and methods, where I isolate the state in a struct-like object (mostly public data) and add new functions taking a reference to such a struct-like object. For example:
#include <map>
#include <vector>

struct ApplicationBlob {
    ClassOne cone;
    ClassTwo ctwo;
    ClassThree cthree;
    std::vector<ClassFour> cfours;
    std::map<ClassFive, ClassSix> cfive_to_csix_map;
};

ApplicationBlob Read(const char* filename);
void Save(const ApplicationBlob&, const char* filename);  // target path added for symmetry with Read
void Foo(const ApplicationBlob&);
void Bar(ApplicationBlob&);
While this approach looks exactly like the blob class defined above, it allows me to physically separate responsibilities without having to recompile the entire thing every time I add something. It is along the lines (not exactly, but in the same vein) of what Herb Sutter suggests with regard to preferring non-member non-friend functions (of course, everyone is a friend of a struct!).
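To make the physical separation concrete, here is one possible file layout (the file names, and the later Baz() addition, are hypothetical):

// application_blob.h -- the state, and nothing else
struct ApplicationBlob { /* members as above */ };

// serialization.h/.cpp -- one responsibility per header/source pair
ApplicationBlob Read(const char* filename);
void Save(const ApplicationBlob&, const char* filename);

// foo.h/.cpp
void Foo(const ApplicationBlob&);

// bar.h/.cpp -- adding a new operation Baz() later means adding
// baz.h/baz.cpp; nothing that includes foo.h or bar.h is recompiled
void Bar(ApplicationBlob&);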
I am stumped --- I don't want a monolith class, but I feel that at some point or another I need to bring everything together (the whole state of a complex system) and I cannot think of the best way to do it.
Please advise from your own experience (i.e., please tell me how you do it in your application), literature references, or open source projects from which I can take some inspiration.

Related

OOP design issue: inheritance vs. interface discovery

Sorry for the vague title; I couldn't think of a better one.
I have a class hierarchy like the following:
#include <string>

class Simulator
{
public:
    virtual ~Simulator() = default;
    virtual void simulate(unsigned int num_steps);
};

class SpecializedSimulator1 : public Simulator
{
    Heap state1; Tree state2; // whatever
public:
    double speed() const;
    void simulate(unsigned int num_steps) override;
};

class SpecializedSimulator2 : public Simulator
{
    Stack state1; Graph state2; // whatever
public:
    double step_size() const;
    void simulate(unsigned int num_steps) override;
};

class SpecializedSubSimulator2 : public SpecializedSimulator2
{
    // more state...
public:
    // more parameters...
    void simulate(unsigned int num_steps) override;
};

class Component
{
public:
    virtual ~Component() = default;
    virtual void receive(int port, std::string data);
    virtual void react(Simulator& sim);
};
So far, so good.
Now it gets more complicated.
Components can support one or more types of simulation. (For example, a component that negates its input may support Boolean circuits as well as continuous-time simulation.) Every component "knows" what kinds of simulations it supports, and given a particular kind of simulator, it queries the simulator (via dynamic_cast or double dispatch or whatever means are appropriate) to find out how it needs to react.
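For illustration, here is roughly what that query step might look like with dynamic_cast (NegatorComponent and the reaction details are made up):

// A component supporting two kinds of simulation inspects the simulator
// it was handed and reacts accordingly.
class NegatorComponent : public Component
{
public:
    void react(Simulator& sim) override
    {
        if (auto* s1 = dynamic_cast<SpecializedSimulator1*>(&sim)) {
            // Boolean-circuit behaviour; may consult s1->speed()
        } else if (auto* s2 = dynamic_cast<SpecializedSimulator2*>(&sim)) {
            // continuous-time behaviour; may consult s2->step_size()
        }
    }
};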
Here's where it gets tricky:
Some Components (say, imagine a SimulatorComponent class) themselves need to run sub-simulations inside of them. Part of this involves inheriting some properties of outer simulations, but potentially changing a few of them. For example, a continuous-time sub-simulator might want to lower its step size for its internal components in order to get better accuracy, but otherwise keep everything else the same.
Ideally, SimulatorComponent would be able to inherit from a class (say, SpecializedSimulator2) and override some subset of its properties as desired. The trouble, though, is that it has no idea whether the outer simulator's most-derived type is a SpecializedSimulator2 -- it may very well be the case that SimulatorComponent is running inside a more specialized simulator than that, like a SpecializedSubSimulator2! In that case, sub-components of SimulatorComponent would need to be able to somehow get access to the properties of SpecializedSubSimulator2 that they might need to access, but SimulatorComponent itself would not (and should not) be aware of these properties.
So, we see we can't use inheritance here.
Since the only means in C++ for "discovering" sub-interfaces like this is dynamic_cast, the sub-components must be able to directly access the outer simulator themselves in order to run dynamic_cast on it. But if they do this, then SimulatorComponent can't intercept any of the calls.
At this point, I'm not sure what to do. The problem isn't impossible to solve, obviously -- I can think of some solutions (e.g. a hierarchical key/value dictionary maintained at run-time) -- but the solutions involve some massive tradeoffs (e.g. less compile-time checking, performance loss) and make me wonder what I should be doing.
So, basically: how should I approach this problem? Is there a flaw in my design? Should I be solving this problem differently? Is there a design pattern for this that I'm just not aware of? Any tips?
I'll try to give some partial advice. For the situation in which a sub-simulator needs to inherit properties from some parent, a cloning function could be the solution. That way you can ignore what the original simulator actually was, but still end up with a new one with the same properties.
The sub-simulator may only require some basic properties (like the simulation time step), which means you only need to dynamic_cast to some intermediate class in your simulator hierarchy, not pinpoint exactly the right one.
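A sketch of that advice (assumptions: Simulator gains a virtual clone() returning std::unique_ptr<Simulator>, and SpecializedSimulator2 gains a set_step_size() setter):

#include <memory>

// clone() preserves the most-derived simulator type, so the sub-simulator
// copies the outer simulator without knowing its exact class.
std::unique_ptr<Simulator> make_sub_simulator(const Simulator& outer)
{
    std::unique_ptr<Simulator> sub = outer.clone();
    // Adjust only the properties we know about, via an intermediate class;
    // there is no need to spot the exact dynamic type.
    if (auto* ct = dynamic_cast<SpecializedSimulator2*>(sub.get()))
        ct->set_step_size(ct->step_size() / 10.0);  // e.g. refine accuracy
    return sub;
}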

Pattern for choosing behaviour based on the types present in a collection of derived objects

I have a collection of objects which represents a model of a system. Each of these objects derives from a base class which represents the abstract "component". I would like to be able to look at the system and choose certain behaviours based on what components are present and in what order.
For the sake of argument, let's call the base class Component and the actual components InputFilter, OutputFilter and Processor. Systems that we can deal with are ones with a Processor and one or both filters. The actual system has more types and more complex interaction between them, but I think this will do for now.
I can see two "simple" ways to handle this situation with a marshalComponentSettings() function which takes one of the collections and works out how to most efficiently set up each node. This may require modifying inputs in certain ways or splitting them up differently, so it's not quite as simple as just implementing a virtual handleSettings() function per component.
The first is to report an enumerated type from each class via a pure virtual function, and use those values to work out what to do, dynamic_cast'ing where needed to access component-specific options.
enum CompType {
    INPUT_FILTER,
    OUTPUT_FILTER,
    PROCESSOR
};

void marshal(Settings& stg)
{
    if (comps[0]->type() == INPUT_FILTER)
        setUpInputFilter(stg); // may modify stg, or provide other feedback of what was done
    // something similar for outputs
    setUpProcessor(stg);
}
The second is to dynamic_cast to anything that might be an option and use the success or failure of the cast (as well as the cast object itself, if needed) to determine what to do.
void marshal(Settings& stg)
{
    if (InputFilter* filter = dynamic_cast<InputFilter*>(comps[0]))
        setUpInputFilter(stg); // may modify stg, or provide other feedback of what was done
    // something similar for outputs
    setUpProcessor(stg);
}
It seems that the first is the most efficient way (don't need to speculatively test each object to find out what it is), but even that doesn't quite seem right (maybe due to the annoying details of how those devices affect each other leaking into the general marshaling code).
Is there a more elegant way to handle this situation than a nest of conditionals determining behaviour? Or even a name for the situation or pattern?
Your scenario seems an ideal candidate for the visitor design pattern, with the following roles (see UML schema in the link):
objectStructure: your model, aka collection of Component
element: your Component base class
concreteElementX: your actual components (InputFilter, OutputFilter, Processor, ...)
visitor: the abstract family of algorithms that has to manage your model as a consistent set of elements.
concreteVisitorA: your configuration process.
Main advantages:
Your configuration/set-up corresponds to the design pattern's intent: an operation to be performed on the elements of an object structure. Moreover, this pattern allows you to take into consideration the order and kind of elements encountered during the traversal, since visitors can be stateful.
One positive side effect is that the visitor pattern will give your design the flexibility to easily add new processes/algorithms with similar traversals but different purposes (for example: pricing of the system, material planning, etc.).
class Visitor;

class Component {
public:
    virtual ~Component() = default;
    virtual void accept(Visitor& v) = 0;
};

class InputFilter : public Component {
public:
    void accept(Visitor& v) override; // calls the right visit() overload
};
...

class Visitor
{
public:
    virtual ~Visitor() = default;
    virtual void visit(InputFilter* c) = 0; // one pure virtual function for each concrete component
    virtual void visit(Processor* c) = 0;
    ...
};
void InputFilter::accept(Visitor& v)
{ v.visit(this); }
...

class SetUp : public Visitor {
private:
    bool hasProcessor;
    int impedanceFilter;
    int circuitResistance;
public:
    void visit(InputFilter* c) override;
    void visit(Processor* c) override;
    ...
};
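A minimal sketch of driving the visitor over the model (the container type is an assumption):

#include <memory>
#include <vector>

// One traversal; double dispatch through accept() selects the right
// visit() overload for each concrete component, in container order.
void marshalComponentSettings(std::vector<std::unique_ptr<Component>>& model)
{
    SetUp setup;  // stateful: can remember what it has already encountered
    for (auto& comp : model)
        comp->accept(setup);
}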
Challenge:
The main challenge you'll have with the visitor, but with the other alternatives as well, is that the set-up can change the configuration itself (replacing a component, changing the order), so you have to take care to keep a consistent iterator on the container while making sure not to process items several times.
The best approach depends on the type of the container and on the kind of changes your set-up makes. But you'll certainly need some flags to track which elements were already processed, or a temporary container (holding either the elements processed or the elements remaining to be processed).
In any case, as the visitor is a class, it can also encapsulate any such state data in private members.

Is it okay when a base class has only one derived class?

I am creating a password module using OOD and design patterns. The module will keep a log of recordable events and read/write to a file. I created the interface in the base class and the implementation in a derived class. Now I am wondering whether it is a bad smell if a base class has only one derived class. Is this kind of class hierarchy unnecessary? To eliminate the hierarchy I could of course just do everything in one class and not derive at all; here is my code.
class CLogFile
{
public:
    CLogFile(void);
    virtual ~CLogFile(void);
    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};
The derived class is:
class CLogFileImpl : public CLogFile
{
public:
    CLogFileImpl(CString strLogFileName, CString& strLog);
    virtual ~CLogFileImpl(void);
    virtual void Read(CString strLog);
    virtual void Write(CString strNewMsg);

protected:
    CString& m_strLog;        // the log file data
    CString m_strLogFileName; // file name
};
Now in the code
CLogFile * m_LogFile = new CLogFileImpl( m_strLogPath, m_strLog );
m_LogFile->Write("Log file created");
My question is that on one hand I am following the OOD principle of creating an interface first and an implementation in a derived class. On the other hand, is it overkill, and does it complicate things? My code is simple enough not to need any design patterns, but it does take cues from them in terms of general data encapsulation through a derived class.
Ultimately is the above class hierarchy good or should it be done in one class instead?
No, in fact I believe your design is good. You may later need to add a mock or test implementation for your class and your design makes this easier.
The answer depends on how likely it is that you'll have more than one behavior for that interface.
Read and write operations for a file system might make perfect sense now. What if you decide to write to something remote, like a database? In that case, a new implementation still works perfectly without affecting clients.
I'd say this is a fine example of how to do an interface.
Shouldn't you make the destructor pure virtual? If I recall correctly, that's the recommended idiom for creating a C++ interface according to Scott Meyers.
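For reference, a sketch of that idiom using the CString-based interface from the question (note that a pure virtual destructor still needs a definition, because derived destructors call it):

class CLogFile
{
public:
    virtual ~CLogFile() = 0;  // pure virtual: the class stays abstract
    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};

inline CLogFile::~CLogFile() {}  // required out-of-class definition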
Yes, this is acceptable, even with only one implementation of your interface, but it may be (slightly) slower at run time than a single class (virtual dispatch has roughly the cost of following one or two function pointers).
This can be used as a way of preventing clients from depending on the implementation details. As an example, clients of your interface do not need to be recompiled just because your implementation gains a new data field under the above pattern.
You can also look at the pImpl pattern, which is a way to hide implementation details without using inheritance.
Your model also works well with the factory pattern, where you work with a lot of shared pointers and call some factory method to "get you" a shared pointer to an abstract interface.
The downside of using pImpl is managing the pointer itself. With C++11, however, pImpl works well with movable types, so it becomes more workable. At present, though, if you want to return an instance of your class from a "factory" function, it has copy-semantics issues with its internal pointer.
This leads implementers to return a shared pointer to the outer class, which is made non-copyable. That means you have a shared pointer to one class holding a pointer to an inner class, so function calls go through that extra level of indirection and you get two "new"s per construction. If you have only a small number of these objects, that isn't a major concern, but it can be a bit clumsy.
C++11 has the advantage of unique_ptr, which supports both forward declaration of its underlying type and move semantics. Thus pImpl becomes more feasible where you really do know you are going to have just one implementation.
Incidentally I would get rid of those CStrings and replace them with std::string, and not put C as a prefix to every class. I would also make the data members of the implementation private, not protected.
An alternative model, following Composition over Inheritance and the Single Responsibility Principle (both referenced by Stephane Rolland), is the following.
First, you need three different classes:
class CLog {
    CLogReader* m_Reader = nullptr;  // reader and writer classes are defined below
    CLogWriter* m_Writer = nullptr;
public:
    void Read(CString& strLog) {
        m_Reader->Read(strLog);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
    void setReader(CLogReader* reader) {
        m_Reader = reader;
    }
    void setWriter(CLogWriter* writer) {
        m_Writer = writer;
    }
};
CLogReader handles the Single Responsibility of reading logs:
class CLogReader {
public:
    virtual ~CLogReader() = default;  // deleted through base pointers, so virtual
    virtual void Read(CString& strLog) {
        // read into the string
    }
};
CLogWriter handles the Single Responsibility of writing logs:
class CLogWriter {
public:
    virtual ~CLogWriter() = default;  // deleted through base pointers, so virtual
    virtual void Write(const CString& strNewMsg) {
        // write the string
    }
};
Then, if you wanted your CLog to, say, write to a socket, you would derive CLogWriter:
class CLogSocketWriter : public CLogWriter {
public:
    void Write(const CString& strNewMsg) override {
        // write to a socket
    }
};
And then set your CLog instance's Writer to an instance of CLogSocketWriter:
CLog* log = new CLog();
log->setWriter(new CLogSocketWriter());
log->Write("Write something to a socket");
Pros
The pros of this method are that you follow the Single Responsibility Principle: every class has a single purpose. It gives you the ability to extend a single purpose without having to drag along code which you would not modify anyway. It also allows you to swap out components as you see fit, without having to create an entire new CLog class for that purpose. For instance, you could have a writer that writes to a socket but a reader that reads a local file.
Cons
Memory management becomes a huge concern here. You have to keep track of when to delete your pointers. In this case, you'd need to delete them on destruction of CLog, as well as when setting a different writer or reader. Doing this while references are stored elsewhere could lead to dangling pointers. This would be a great opportunity to learn about strong and weak references, which are reference-counting containers that automatically delete their pointer when all references to it are lost.
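As a sketch of how owning smart pointers would remove that bookkeeping (C++11 std::unique_ptr assumed, reusing the classes above):

#include <memory>
#include <utility>

// The old reader/writer is deleted automatically when replaced or when
// CLog is destroyed; no manual delete calls, no dangling owners.
class CLog {
    std::unique_ptr<CLogReader> m_Reader;
    std::unique_ptr<CLogWriter> m_Writer;
public:
    void setReader(std::unique_ptr<CLogReader> reader) {
        m_Reader = std::move(reader);
    }
    void setWriter(std::unique_ptr<CLogWriter> writer) {
        m_Writer = std::move(writer);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
};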
No. If there's no polymorphism in action there's no reason for inheritance and you should use the refactoring rule to put the two classes into one. "Prefer composition over inheritance".
Edit: as #crush commented, "prefer composition over inheritance" may not be the adequate quotation here. So let's say: if you think you need to use inheritance, think twice. And if ever you are really sure you need to use it, think about it once again.

Which design is better: provide direct access to an owned object or give owning object forwarding methods to the owned object?

Which is better:
class Owner
{
public:
    void SomeMethodA()
    {
        _ownee.SomeMethodA();
    }
    int SomeMethodB()
    {
        return _ownee.SomeMethodB();
    }
private:
    Ownee _ownee;
};
or this:
class Ownee
{
public:
    void SomeMethodA();
    int SomeMethodB();
};

class Owner
{
public:
    Ownee& GetOwnee()  // cannot be const: it hands out a mutable reference
    {
        return _ownee;
    }
private:
    Ownee _ownee;
};
I seem to recall reading long ago that the first option is better than the second option, but I can't remember why. I want to say because there is less coupling between users of Owner and Ownee. Clients would only need to know about Owner's interface and not Ownee's whereas with the second option clients would need to be aware of both Owner's and Ownee's interfaces.
Neither is superior in every regard and in every case. However, wrapping the interface and encapsulating the members (the first option) is generally preferable.
A class and its members have a special relationship. A class often uses its interface to restrict functionality to the domain it is intended to operate in, and the interface may be used for additional internal state validation and to improve program correctness.
When well encapsulated, the Owner has far more freedom to change its implementation or members as needed.
Exposing members can often lead to higher coupling, and ultimately pushes a lot of long-term maintenance onto the client (resulting in more bugs and redundant implementations on the client side). A rather obvious example of this: assume Owner has two members and thread safety must be guaranteed -- it makes little sense to push that responsibility onto the clients and to expect them to implement and maintain thread safety across several changes over several years. The Owner, however, can abstract all that, alter its implementation appropriately when it changes, and prevent changes to members (Ownee) from affecting clients in this regard.
Changes to the classes (either Owner or Ownee) will generally affect the client far less when its members are not exposed, and the class (Owner) provides a strict, simple interface to its functionalities.
'Performance' is often cited as a reason to favor the second option ("just provide clients accessors"). I find that an unfounded over-simplification. Performance may be better, or it may be worse! It depends on many, many things in C++ when developing nontrivial programs. Again, using the locking example: a bad interface in the first option and/or favoring public accessors can require far more locking. The optimizer also has locality of information about the class and its members. There may be a good amount of internal implementation and methods provided by Owner -- if that is largely private and uses static dispatch, it can result in a very small set of exported (or inline, if you prefer) methods. An accessor in itself may not be so bad, but operations and dependencies on the accessed object may push a lot of instructions and dependencies out to your clients, whereas the internal implementation of Owner may represent all that in a single, compact, optimized form. In C++, abstraction layers can be very cheap (often free when done well) when optimized. Performance swings both ways here in the real world, and depends on many variables.
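To make the locking example concrete, a minimal sketch (the mutex placement is illustrative only, reusing the Owner/Ownee names from the question):

#include <mutex>

// Because Owner wraps Ownee, the synchronization lives in one place;
// clients need not know or maintain the locking strategy.
class Owner
{
public:
    void SomeMethodA()
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _ownee.SomeMethodA();
    }
private:
    std::mutex _mutex;
    Ownee _ownee;
};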
Most often, I use the first form. When I use the second form, it's often with:
Private/Inner classes.
or as components of a larger system (e.g. meta-programming).
Reads:
http://en.wikipedia.org/wiki/Law_Of_Demeter
http://en.wikipedia.org/wiki/Cohesion_(computer_science)
http://en.wikipedia.org/wiki/Coupling_(computer_science)
http://en.wikipedia.org/wiki/Encapsulation_(object-oriented_programming)
http://en.wikipedia.org/wiki/Information_hiding
You are thinking of the Pimpl (pointer to implementation) idiom.
One nice way to do this is to declare the implementation class in the header file and define it in the .cpp file; this ensures that no one can see the details via .h inclusion.
.h file
#ifndef object_INCLUDED
#define object_INCLUDED

#include <boost/scoped_ptr.hpp>

class Object
{
public:
    Object(int val);
    ~Object();  // defined in the .cpp, where ObjectImpl is complete
    //...
private:
    class ObjectImpl;
    boost::scoped_ptr<ObjectImpl> impl_;
};

#endif
.cpp file
#include "object.h"
//==================================================
// Implementation of ObjectImpl
//==================================================
class Object::ObjectImpl
{
public:
ObjectImpl(int);
int value() const;
private:
int val_;
};
Object::ObjectImpl::ObjectImpl(int val)
: val_(val)
{}
int
Object::ObjectImpl::value() const
{
return val_;
}
//==================================================
// Implementation of Object
//==================================================
Object::Object(int val)
: impl_(new ObjectImpl(val))
{
}

Use multiple inheritance to discriminate usage roles?

It's my flight simulation application again. I am leaving the mere prototyping phase and starting to flesh out the software design. At least I'm trying to...
Each aircraft in the simulation has a flight plan associated with it, the exact nature of which is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. The aircraft model most of the time only needs read access to the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate that a way point has been reached. This will affect the iterator returned by the function ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference, which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this, because I want to enforce the usage rule described above at compile time.
Note that class WayPointIter is equivalent to an STL const iterator: the way point cannot be mutated through the iterator.
class FlightPlan
{
public:
    void AppendWayPoint(const WayPointIter& at, WayPoint new_wp);
    void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp);
    void RemoveWayPoint(WayPointIter at);
    (...)
    WayPointIter First() const;
    WayPointIter Last() const;
    WayPointIter Active() const;
    void AdvanceActiveWayPoint();  // mutates the active way point, hence non-const
    (...)
};
My idea to overcome the issue is this: define an abstract interface class for each usage role and inherit FlightPlan from both. Each user then only gets passed a reference to the appropriate usage role.
class IFlightPlanActiveWayPoint
{
public:
    virtual ~IFlightPlanActiveWayPoint() = default;
    virtual WayPointIter Active() const = 0;
    virtual void AdvanceActiveWayPoint() = 0;
};
class IFlightPlanEditable
{
public:
    virtual ~IFlightPlanEditable() = default;
    virtual void AppendWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
    virtual void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
    virtual void RemoveWayPoint(WayPointIter at) = 0;
    (...)
};
Thus the declaration of FlightPlan would only need to be changed to:
class FlightPlan : public IFlightPlanActiveWayPoint, public IFlightPlanEditable
{
    (...)
};
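To illustrate the enforcement I'm after, here is a sketch of an aircraft-side call site (the free function is made up):

// The aircraft receives only the navigation role.
void FlyTowardsActiveWayPoint(IFlightPlanActiveWayPoint& plan)
{
    WayPointIter active = plan.Active();
    // ... steer toward the active way point ...
    plan.AdvanceActiveWayPoint();  // allowed: part of this role
    // plan.AppendWayPoint(...);   // would not compile: editing role not visible
}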
What do you think? Are there any caveats I might be missing? Is this design clear, or should I come up with something different for the sake of clarity?
Alternatively, I could define a special ActiveWayPoint class containing the function AdvanceActiveWayPoint(), but I feel that might be unnecessary.
Thanks in advance!
From a strict design point of view, your idea is quite good indeed. It is equivalent to having a single object with several different 'views' of it.
However, there is a scaling issue here (relevant to the implementation). What if you then have another object Foo that needs access to the flight plan -- would you add an IFlightPlanFoo interface?
There is a risk that you will soon face an imbroglio of inheritance.
The traditional approach is to create another object, a Proxy, and use this object to adapt/restrict/control the usage. It's a design pattern: Proxy
Here you would create:
class FlightPlanActiveWayPoint
{
public:
    explicit FlightPlanActiveWayPoint(FlightPlan& fp) : mFp(fp) {}

    // forwarding
    WayPointIter Active() const { return mFp.Active(); }
    void AdvanceActiveWayPoint() { mFp.AdvanceActiveWayPoint(); }

private:
    FlightPlan& mFp;
};
Give it the interface you planned for IFlightPlanActiveWayPoint, build it with a reference to an actual FlightPlan object, and forward the calls.
There are several advantages to this approach:
Dependency: it's unnecessary to edit flightPlan.h each time you have a new requirement, thus unnecessary to rebuild the whole application
It's faster because there are no virtual calls any longer, and the functions can be inlined (thus amounting to almost nothing). Though I would recommend not inlining them to begin with (so you can modify them without recompiling everything).
It's easy to add checks / logging etc without modifying the base class (in case you have a problem in a particular scenario)
My 2 cents.
Not sure about "cavecats" ;-) but isn't the crew of the aircraft sometimes modifying the flight plan themselves in real life? E.g. if there is a bad storm ahead, or the destination airport is unavailable due to thick fog. In crisis situations, it is the right of the captain of the aircraft to make the final decision. Of course, you may decide not to include this in your model, but I thought it is worth mentioning.
An alternative to multiple inheritance could be composition, using a variation of the Pimpl idiom, in which the wrapper class would not expose the full interface of the internal class. As #Matthieu points out, this is also known as a variation of the Proxy design pattern.