How to create loose coupling between parts of a project? - C++

Introduction:
I come from a mechanical engineering background, but took a class in embedded software programming (on a lovely little robot) with the intention of improving some programming skills I already had. However, the class was largely unsatisfactory for what I hoped to achieve (basically, it taught the basics of C++ with some very superficial composition patterns).
Question: We were told to make our code somewhat object oriented by defining classes for various parts of the code. Since all the parts were very dependent on each other, the general structure looked as follows (basically, a Drive, Sensors and WorldModel class with some dependencies, and a Director class trying to make our robot solve the task at hand):
class Drive{
void update();
Drive(Sensors & sensors);
private:
Sensors & sensors;
};
class Sensors{
void update();
};
class WorldModel {
void update();
WorldModel(Sensors & sensors, Drive & drive);
private:
Sensors & sensors;
Drive & drive;
};
class Director {
void update();
Director(Sensors & sensors, Drive & drive, WorldModel & worldmodel);
private:
Sensors & sensors;
Drive & drive;
WorldModel & worldmodel;
};
This is actually an extremely condensed version. It seems to me, however, that this is not really object oriented code as much as Clumsily Split-Up Code™. In particular, it seemed almost impossible to make e.g. the Sensors class get data from the Drive class without some fudging around in the Director class (i.e., first perform a function in the Drive class to get the velocity setpoint, and then provide that to the update() method in the Sensors class to do some Kalman filtering).
How does one create a project in C++ with various parts being very dependent on each other, without this becoming a problem? I read an SO answer on interfaces but I'm not sure how to apply that to this problem - is that even the way to go here? Is there a design pattern (not necessarily an object oriented one) that is suitable for projects such as this one?

No, there's not a design pattern for projects "like this".
Design patterns are not the goal.
So, let me put a few guesses straight:
you want lightweight code (because otherwise you'd be using Java, right)
you want maintainable code (because otherwise, spaghetti would be fine)
you want idiomatic code
Here's what I'd do:
declare classes in separate headers
use forward declarations to reduce header coupling
move implementations into the corresponding source files
keep unwanted implementation dependencies out of the header file. Optionally use the Pimpl idiom here.
e.g. if you use library X to implement Y::frobnicate, don't include libX.h in your Y.h. Instead, include it in Y.cpp only.
If you find that you need a class member declaration that would require libX.h in the header, use the Pimpl idiom (a minimal sketch follows).
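For illustration, here is a minimal Pimpl sketch using the Y/libX names from the bullet above (the Impl members are hypothetical placeholders):
// Y.h - does not include libX.h
#include <memory>

class Y {
public:
    Y();
    ~Y();                       // defined in Y.cpp, where Impl is a complete type
    void frobnicate();
private:
    struct Impl;                // only forward-declared in the header
    std::unique_ptr<Impl> pimpl_;
};

// Y.cpp - the only file that includes libX.h
// #include "libX.h"
struct Y::Impl {
    // libX-specific members live here, invisible to users of Y.h
};
Y::Y() : pimpl_(new Impl) {}
Y::~Y() = default;
void Y::frobnicate() { /* use pimpl_-> ... */ }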
I don't know what else you could want here :)
Maybe, if you need "interfaces", consider using template composition: policy, strategy, state patterns. E.g. instead of
#include <set>

struct ISensors {
    virtual ~ISensors() = default;
    virtual int get(int id) const = 0;
    virtual int set(int id, int newval) = 0;
    virtual std::set<int> sensors() const = 0;
};

class Drive {
public:
    void update();
    Drive(ISensors &sensors);
private:
    ISensors &sensors;
};
You could consider
template <typename Sensors>
class Drive {
public:
    void update();
    Drive(Sensors &sensors);
private:
    Sensors &sensors;
};
This leaves you free to implement Sensors in any way that statically compiles. The "limitation" is that the injection of dependencies needs to be statically defined/typed. The benefit is ultimate flexibility and zero overhead: e.g. you couldn't have virtual member function templates, but you can use this as a Sensors policy:
#include <algorithm>  // std::copy
#include <iterator>   // std::begin, std::end

struct TestSensors {
    int get(int) { return 9; }
    int set(int, int) { return -9; }

    template <typename OutputIterator>
    OutputIterator sensors(OutputIterator out) const {
        int available[] = { 7, 8, 13, 21 };
        return std::copy(std::begin(available), std::end(available), out);
    }
};
using TestDrive = Drive<TestSensors>;
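A hedged usage sketch of the policy wiring above (assuming Drive::update() is implemented in the corresponding source file):
int main() {
    TestSensors sensors;
    TestDrive drive(sensors);   // dependency bound at compile time, no virtual dispatch
    drive.update();             // calls into the TestSensors policy
    return 0;
}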

Related

Use case of dynamic_cast

In many places you can read that dynamic_cast means "bad design". But I cannot find any article with appropriate usage (showing good design, not just "how to use").
I'm writing a board game with a board and many different types of cards described with many attributes (some cards can be put on the board). So I decided to break it down to the following classes/interfaces:
class Card {};
class BoardCard : public Card {};
class ActionCard : public Card {};
// Other types of cards - but two are enough
class Deck {
Card* draw_card();
};
class Player {
void add_card(Card* card);
Card const* get_card();
};
class Board {
void put_card(BoardCard const*);
};
Some guys suggested that I should use only one class describing a card. But that would mean many mutually exclusive attributes. And in the case of the Board class's put_card(BoardCard const*), it is part of the interface that I cannot put just any card on the board. If I had only one type of card I would have to check it inside the method.
I see the flow like the following:
a generic card is in the deck (it's not important what its type is)
a generic card is drawn from the deck and given to a player (the same as above)
if a player has chosen a BoardCard then it can be put on the board
So I use dynamic_cast before putting a card on the board. I think that using some virtual method is out of the question in this case (additionally, it wouldn't make any sense to add some board-related action to every card).
So my question is: What have I designed badly? How could I avoid dynamic_cast? Would using some type attribute and ifs be a better solution?
P.S.
Any source discussing dynamic_cast usage in the context of design is more than appreciated.
Yes, dynamic_cast is a code smell, but so is adding functions that try to make it look like you have a good polymorphic interface but are actually equal to a dynamic_cast i.e. stuff like can_put_on_board. I'd go as far as to say that can_put_on_board is worse - you're duplicating code otherwise implemented by dynamic_cast and cluttering the interface.
As with all code smells, they should make you wary and they don't necessarily mean that your code is bad. This all depends on what you're trying to achieve.
If you're implementing a board game that will have 5k lines of code and two categories of cards, then anything that works is fine. If you're designing something larger, extensible, and possibly allowing for cards being created by non-programmers (whether it's an actual need or you're doing it for research), then this probably won't do.
Assuming the latter, let's look at some alternatives.
You could put the onus of applying the card properly on the card itself, instead of some external code. E.g. add a play(Context& c) function to the card (the Context being a means to access the board and whatever else may be necessary). A board card would know that it may only be applied to a board, and a cast would not be necessary.
I would entirely give up using inheritance however. One of its many issues is how it introduces a categorisation of all cards. Let me give you an example:
you introduce BoardCard and ActionCard putting all cards in these two buckets;
you then decide that you want to have a card that can be used in two ways, either as an Action or a Board card;
let's say you solved the issue (through multiple-inheritance, a BoardActionCard type, or any different way);
you then decide you want to have card colours (as in MtG) - how do you do this? Do you create RedBoardCard, BlueBoardCard, RedActionCard etc?
For other examples of why inheritance should be avoided and how to achieve runtime polymorphism otherwise, you may want to watch Sean Parent's excellent "Inheritance Is the Base Class of Evil" talk. A promising-looking library that implements this sort of polymorphism is dyno, though I have not tried it out yet.
A possible solution might be:
#include <memory>
#include <utility>

class Context; // a means to access the board and whatever else may be necessary

class Card final {
public:
    template <class T>
    Card(T model) :
        model_(std::make_shared<Model<T>>(std::move(model)))
    {}

    void play(Context& c) const {
        model_->play(c);
    }
    // ... any other functions that can be performed on a card

private:
    class Concept {
    public:
        virtual ~Concept() = default;
        virtual void play(Context& c) const = 0;
    };

    template <class T>
    class Model : public Concept {
    public:
        explicit Model(T model) : model_(std::move(model)) {}

        void play(Context& c) const override {
            model_.play(c);
            // or call a free function instead, e.g. ::play(model_, c),
            // depending on what contract you want to have with implementers
        }

    private:
        T model_;
    };

    std::shared_ptr<const Concept> model_;
};
Then you can either create classes per card type:
class Goblin final {
public:
    void play(Context& c) const {
        // apply effects of card, e.g. take c.board() and put the card there
    }
};
Or implement behaviours for different categories, e.g. have a
template <class T>
void play(const T& card, Context& c);
template and then use enable_if to handle it for different categories:
template <class T, class = std::enable_if_t<IsBoardCard_v<T>>>
void play(const T& card, Context& c) {
c.board().add(Card(card));
}
where:
template <class T>
struct IsBoardCard {
static constexpr auto value = T::IS_BOARD_CARD;
};
template <class T>
constexpr bool IsBoardCard_v = IsBoardCard<T>::value;
then defining your Goblin as:
class Goblin final {
public:
static constexpr auto IS_BOARD_CARD = true;
static constexpr auto COLOR = Color::RED;
static constexpr auto SUPERMAGIC = true;
};
which would allow you to categorise your cards along many dimensions, while also leaving the possibility to entirely specialise the behaviour by implementing a different play function.
The example code uses std::shared_ptr to store the model, but you can definitely do something smarter here. I like to use a static-sized storage and only allow Ts of a certain maximum size and alignment to be used. Alternatively you could use a std::unique_ptr (which would disable copying though) or a variant leveraging small-size optimisation.
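For reference, a usage sketch under the assumptions above (Goblin as defined earlier; Context remains whatever type gives access to the board):
#include <vector>

void example(Context& ctx) {
    std::vector<Card> hand;
    hand.emplace_back(Goblin{});   // Card's template constructor type-erases any T with play(Context&) const
    for (const Card& c : hand) {
        c.play(ctx);               // forwarded to the Goblin implementation through Model<Goblin>
    }
}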
Why not use dynamic_cast
dynamic_cast is generally disliked because it can be easily abused to completely break the abstractions used, and it is not wise to depend on specific implementations. Of course it may be needed, but really rarely, so nearly everyone adopts a rule of thumb: you probably should not use it. It's a code smell that may imply you should rethink your abstractions, because they may not be the ones your domain needs. Maybe in your game the Board should not have a put_card method - maybe instead the card should have a method play(const PlaySpace*), where Board implements PlaySpace or something like that. Even the CppCoreGuidelines discourage using dynamic_cast in most cases.
When use
Generally, few people ever have problems like this, but I have come across it multiple times already. The problem is called Double (or Multiple) Dispatch. Here is a pretty old, but quite relevant, article about double dispatch (mind the prehistoric auto_ptr):
http://www.drdobbs.com/double-dispatch-revisited/184405527
Also, Scott Meyers in one of his books wrote something about building a double dispatch matrix with dynamic_cast. But, all in all, these dynamic_casts are 'hidden' inside this matrix - users don't know what kind of magic happens inside.
Noteworthy: multiple dispatch is also considered a code smell :-). A sketch of such a dispatch matrix follows.
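Loosely following that idea, here is a sketch of such a dispatch matrix (hypothetical standalone types, not the asker's exact classes; the point is that the dynamic_casts stay hidden inside the table entries):
#include <functional>
#include <map>
#include <typeindex>
#include <utility>

struct Card   { virtual ~Card() = default; };
struct Target { virtual ~Target() = default; };
struct BoardCard  : Card {};
struct ActionCard : Card {};
struct Board  : Target {};
struct Player : Target {};

using Key     = std::pair<std::type_index, std::type_index>;
using Handler = std::function<void(Card&, Target&)>;

// One entry per (card type, target type) combination.
static const std::map<Key, Handler> play_matrix = {
    { { typeid(BoardCard), typeid(Board) },
      [](Card& c, Target& t) {
          auto& card  = dynamic_cast<BoardCard&>(c);
          auto& board = dynamic_cast<Board&>(t);
          // ... put `card` onto `board`
      } },
    { { typeid(ActionCard), typeid(Player) },
      [](Card& c, Target& t) {
          // ... apply the action card to the player
      } },
};

void play(Card& c, Target& t) {
    auto it = play_matrix.find({ typeid(c), typeid(t) });
    if (it != play_matrix.end()) it->second(c, t);   // unknown combinations are simply ignored here
}
Users of play() never see a cast, and supporting a new combination means adding one entry to the matrix.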
Reasonable alternative
Check out the visitor pattern. It can be used as a replacement for dynamic_cast, but it is also something of a code smell.
I generally recommend using dynamic_cast and visitors as last-resort tools for design problems, as they break abstraction, which increases complexity.
You could apply the principles behind Microsoft's COM and provide a series of interfaces, with each interface describing a set of related behaviors. In COM you determine if a specific interface is available by calling QueryInterface, but in modern C++ dynamic_cast works similarly and is more efficient.
class Board; // forward declaration, used by IBoardCard

class Card {
public:
    virtual ~Card() {} // must have at least one virtual method for dynamic_cast
};

struct IBoardCard {
    virtual ~IBoardCard() {}
    virtual void put_card(Board* board) const = 0;
};

class BoardCard : public Card, public IBoardCard {
public:
    void put_card(Board* board) const override { /* place this card on the board */ }
};
class ActionCard : public Card {};
// Other types of cards - but two are enough

class Deck {
    Card* draw_card();
};
class Player {
    void add_card(Card* card);
    Card const* get_card();
};

class Board {
public:
    void put_card(Card const* card) {
        const IBoardCard* p = dynamic_cast<const IBoardCard*>(card);
        if (p != nullptr) p->put_card(this);
    }
};
That may be a bad example, but I hope you get the idea.
It seems to me that the two types of cards are quite different. The things a board card and an action card can do are mutually exclusive, and the common thing is just that they can be drawn from the deck. Moreover, that's not a thing a card does, it's a player / deck action.
If this is true, a question one should ask is whether they should really descend from a common type, Card. An alternative design would be that of a tagged union: let Card instead be a std::variant<BoardCard, ActionCard, ...> containing an instance of the appropriate type. When deciding what to do with the card, you switch on the index() (or use std::visit) and then std::get<> only the appropriate type. This way you don't need any *_cast operator, and you get complete freedom over what methods each type of card supports (none of which would have to make sense for the other types).
If it's only almost true but not for all types, you can vary this slightly: only group together those types of cards that can sensibly be superclassed, and put the set of those common types into the variant.
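A minimal sketch of that tagged-union approach (hypothetical card types; std::visit is used instead of a manual index()/get<> switch):
#include <iostream>
#include <type_traits>
#include <variant>
#include <vector>

struct BoardCard  { /* board-only attributes */ };
struct ActionCard { /* action-only attributes */ };
using Card = std::variant<BoardCard, ActionCard>;

struct Board {
    std::vector<BoardCard> placed;
    void put_card(const BoardCard& c) { placed.push_back(c); }   // only board cards can ever get here
};

void play(const Card& card, Board& board) {
    std::visit([&](const auto& c) {
        using T = std::decay_t<decltype(c)>;
        if constexpr (std::is_same_v<T, BoardCard>) {
            board.put_card(c);                       // no cast needed, the type is known statically
        } else {
            std::cout << "resolve action card\n";    // actions never touch the board
        }
    }, card);
}
With this approach the set of card types is closed and known at compile time, which is exactly the trade-off described above.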
I have always found the usage of a cast to be a code smell, and in my experience, 90% of the time the cast was due to bad design.
I have seen dynamic_cast used in some time-critical applications where it provided more of a performance improvement than inheriting from multiple interfaces or retrieving some kind of enumeration from the object (like a type tag). So the code smelt, but the usage of the dynamic cast was worth it in that case.
That said, I would avoid dynamic_cast in your case, as well as multiple inheritance from different interfaces.
Before getting to my solution: your description sounds like it omits a lot of details about the behavior of the cards or the consequences they have on the board
and the game itself. I used that as a further constraint, trying to keep things boxed and maintainable.
I would go for composition instead of inheritance. It also gives you the chance of using the card as a 'factory':
it can spawn multiple game modifiers - something to be applied to the board, and something else to a specific enemy
the card can be reused - the card could stay in the hands of the player while the effect on the game is detached from it (there is no 1-1 binding between cards and effects)
the card itself can sit back in the deck, while the effects of what it did are still alive on the board.
a card can have a representation (drawing methods) and react to touch in one way, while the BoardElement can even be a 3D miniature with animation
See https://en.wikipedia.org/wiki/Composition_over_inheritance for further details. I'd like to quote:
Composition also provides a more stable business domain in the long term as it is less prone to the quirks of the family members. In other words, it is better to compose what an object can do (HAS-A) than extend what it is (IS-A).
A BoardCard/Element can be something like this:
//the card placed on the board.
class BoardElement {
public:
BoardElement() {}
virtual ~BoardElement() {};
//up to you if you want to add a read() method to read data from the card description (XML / JSON / binary data),
// but that should not be part of the interface. For a potential "Wizard", it's probably more related to the WizardCard - WizardElement relation/implementation
//some helpful methods:
// to be called by the board when placed
virtual void OnBoard() {}
virtual void Frame(const float time) { /*do something time based*/ }
virtual void Draw() {}
// to be called by the board when removed
virtual void RemovedFromBoard() {}
};
The Card could represent something to be used in a deck or in the user's hands, so I'll add an interface of that kind:
class Card {
public:
Card() {}
virtual ~Card() {}
//that will be invoked by the user in order to provide something to the Board, or nullptr if nothing should be added.
virtual std::shared_ptr<BoardElement> getBoardElement() { return nullptr; }
virtual void Frame(const float time) { /*do something time based*/ }
virtual void Draw() {}
//useful to handle resources or internal states
virtual void OnUserHands() {}
virtual void Dropped() {}
};
I'd like to add that this pattern allows many tricks inside the getBoardElement() method, from acting as a factory (so something is spawned with its own lifetime),
to returning a Card data member such as a std::shared_ptr<BoardElement> wizard3D; (as an example), to creating a binding between the Card and the BoardElement as in:
class WizardBoardElement : public BoardElement {
public:
WizardBoardElement(const Card* owner);
// other members omitted ...
};
The binding can be useful in order to read some configuration data or whatever...
So inheritance from Card and from BoardElement will be used to implement the features exposed by the base classes and not for providing other methods that can be reached only through a dynamic_cast.
For completeness:
class Player {
void add(Card* card) {
//..
card->OnUserHands();
//..
}
void useCard(Card* card) {
//..
//someway he's got to retrieve the board...
getBoard()->add(card->getBoardElement());
//..
}
Card const* get_card();
};
class Board {
void add(const std::shared_ptr<BoardElement>& el) {
//..
el->OnBoard();
//..
}
};
In that way we have no dynamic_cast; Player and Board do simple things without knowing the inner details of the cards they are handling, which provides good separation between the different objects and increases maintainability.
Talking about the ActionCard, and about "effects" that may be applied to other players or your avatar, we can think about having a method like:
enum EffectTarget {
MySelf, //a player on itself, an enemy on itself
MainPlayer,
Opponents,
StrongOpponents
//....
};
// Base for anything an effect can be applied to (players, the board, ...)
class Target {
public:
virtual ~Target() {}
};
class Effect {
public:
//...
virtual void Do(Target* target) = 0;
//...
};
class Card {
public:
//...
struct Modifiers {
EffectTarget eTarget;
std::shared_ptr<Effect> effect;
};
virtual std::vector<Modifiers> getModifiers() { /*...*/ }
//...
};
class Player : public Target {
public:
void useCard(Card* card) {
//..
//someway he's got to retrieve the board...
getBoard()->add(card->getBoardElement());
auto modifiers = card->getModifiers();
for (auto& modifier : modifiers)
{
//this method is supposed to look at the board, at the player and retrieve the instance of the target
Target* target = getTarget(modifier.eTarget);
modifier.effect->Do(target);
}
//..
}
};
That's another example of the same pattern, used to apply the effects from the card: it avoids the cards having to know details about the board and its status, or who is playing the card, and keeps the code in Player pretty simple.
Hope this may help,
Have a nice day,
Stefano.
What have I designed badly?
The problem is that you always need to extend that code whenever a new type of Card is introduced.
How could I avoid dynamic_cast?
The usual way to avoid that is to use interfaces (i.e. pure abstract classes):
struct ICard {
virtual bool can_put_on_board() = 0;
virtual ~ICard() {}
};
class BoardCard : public ICard {
public:
bool can_put_on_board() { return true; };
};
class ActionCard : public ICard {
public:
bool can_put_on_board() { return false; };
};
This way you can simply use a reference or pointer to ICard and check whether the actual type it holds can be put on the Board. A short usage sketch follows.
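A usage sketch of that interface (the Board::put_card overload taking an ICard& is hypothetical):
void try_to_play(ICard& card, Board& board) {
    if (card.can_put_on_board()) {
        board.put_card(card);   // only BoardCard instances report true
    } else {
        // e.g. resolve the action card's effect instead
    }
}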
But I cannot find any article with appropriate usage (showing good design, not just "how to use").
In general I'd say there aren't any good, real-life use cases for dynamic_cast.
Sometimes I have used it in debug code for CRTP realizations like
template<typename Derived>
class Base {
public:
#ifdef _DEBUG
    // Base needs to be polymorphic for the dynamic_cast below to compile
    virtual ~Base() = default;
#endif
    void foo() {
#ifndef _DEBUG
        static_cast<Derived&>(*this).doBar();
#else
        // may throw in debug mode if something is wrong with Derived
        // not properly implementing the CRTP
        dynamic_cast<Derived&>(*this).doBar();
#endif
    }
};
I think that I would end up with something like this (compiled with clang 5.0 with -std=c++17). I'm curious about your comments. Whenever I want to handle different types of Cards, I need to instantiate a dispatcher and supply methods with the proper signatures.
#include <iostream>
#include <typeinfo>
#include <type_traits>
#include <vector>
template <class T, class... Args>
struct any_abstract {
static bool constexpr value = std::is_abstract<T>::value || any_abstract<Args...>::value;
};
template <class T>
struct any_abstract<T> {
static bool constexpr value = std::is_abstract<T>::value;
};
template <class T, class... Args>
struct StaticDispatcherImpl {
template <class P, class U>
static void dispatch(P* ptr, U* object) {
if (typeid(*object) == typeid(T)) {
ptr->do_dispatch(*static_cast<T*>(object));
return;
}
if constexpr (sizeof...(Args)) {
StaticDispatcherImpl<Args...>::dispatch(ptr, object);
}
}
};
template <class Derived, class... Args>
struct StaticDispatcher {
static_assert(not any_abstract<Args...>::value);
template <class U>
void dispatch(U* object) {
if (object) {
StaticDispatcherImpl<Args...>::dispatch(static_cast<Derived *>(this), object);
}
}
};
struct Card {
virtual ~Card() {}
};
struct BoardCard : Card {};
struct ActionCard : Card {};
struct Board {
void put_card(BoardCard const& card, int const row, int const column) {
std::cout << "Putting card on " << row << " " << column << std::endl;
}
};
struct UI : StaticDispatcher<UI, BoardCard, ActionCard> {
void do_dispatch(BoardCard const& card) {
std::cout << "Get row to put: ";
int row;
std::cin >> row;
std::cout << "Get row to put:";
int column;
std::cin >> column;
board.put_card(card, row, column);
}
void do_dispatch(ActionCard& card) {
std::cout << "Handling action card" << std::endl;
}
private:
Board board;
};
struct Game {};
int main(int, char**) {
Card* card;
ActionCard ac;
BoardCard bc;
UI ui;
card = &ac;
ui.dispatch(card);
card = &bc;
ui.dispatch(card);
return 0;
}
As I can't see why you wouldn't use virtual methods, I'm just going to present how I would do it. First I have the ICard interface for all cards. Then I would distinguish between the card types (i.e. BoardCard and ActionCard and whatever other cards you have), and all the concrete cards inherit from one of these card types.
class Board;        // forward declaration, used by the card interface
class CardVisitor;  // forward declaration, see the visitor section below

class ICard {
public:
    virtual ~ICard() {}
    virtual void put_card(Board* board) = 0;
    virtual void accept(CardVisitor& visitor) = 0; // see later, visitor pattern
};

class ActionCard : public ICard {
public:
    void put_card(Board* board) final {
        // std::cout << "You can't put Action Cards on the board" << std::endl;
        // Or just do nothing, if the decision of putting the card on the board
        // is not up to the user
    }
};

class BoardCard : public ICard {
public:
    void put_card(Board* board) final; // defined below, once Board is complete
};

class SomeBoardCard : public BoardCard {
public:
    void accept(CardVisitor& visitor) final; // visitor pattern, defined after CardVisitor
    void print_information(); // see BaseCardVisitor in the next code section
};

class SomeActionCard : public ActionCard {
public:
    void accept(CardVisitor& visitor) final; // visitor pattern, defined after CardVisitor
    void print_information(); // see BaseCardVisitor
};

class Board {
public:
    void put_card(ICard* const card) {
        card->put_card(this);
    }
    void place_card_on_board(BoardCard* card) {
        // place it on the board
    }
};

// Whatever implementation puts the card on the board, maybe something like:
inline void BoardCard::put_card(Board* board) {
    board->place_card_on_board(this);
}
I guess the user has to know somehow what card he has drawn, so for that I would implement the visitor pattern. You could also place the accept method, which I placed in the most derived classes/cards, in the card types (BoardCard, ActionCard) instead, depending on where you want to draw the line on what information shall be given to the user.
template <class T>
class BaseCardVisitor {
public:
    void visit(T* card) {
        card->print_information();
    }
};

class CardVisitor : public BaseCardVisitor<SomeBoardCard>,
                    public BaseCardVisitor<SomeActionCard> {
public:
    using BaseCardVisitor<SomeBoardCard>::visit;
    using BaseCardVisitor<SomeActionCard>::visit;
};

// The accept overrides can be defined now that CardVisitor is complete.
inline void SomeBoardCard::accept(CardVisitor& visitor) { visitor.visit(this); }
inline void SomeActionCard::accept(CardVisitor& visitor) { visitor.visit(this); }

class Player {
public:
    void add_card(ICard* card);
    ICard const* get_card();
    void what_is_this_card(ICard* card) {
        card->accept(visitor);
    }
private:
    CardVisitor visitor;
};
Hardly a complete answer, but I just wanted to pitch in with something similar to Mark Ransom's answer. Very generally speaking, I've found downcasting to be useful in cases where duck typing is really useful. There can be certain architectures where it is very useful to do things like this:
for each object in scene:
{
if object can fly:
make object fly
}
Or:
for each object in scene that can fly:
make object fly
COM allows this type of thing somewhat like so:
for each object in scene:
{
// Request to retrieve a flyable interface from
// the object.
IFlyable* flyable = object.query_interface<IFlyable>();
// If the object provides such an interface, make
// it fly.
if (flyable)
flyable->fly();
}
Or:
for each flyable in scene.query<IFlyable>:
flyable->fly();
This implies a cast of some form somewhere in the centralized code to query and obtain interfaces (ex: from IUnknown to IFlyable). In such cases, a dynamic cast checking run-time type information is the safest type of cast available. First there might be a general check to see if an object provides the interface that doesn't involve casting. If it doesn't, this query_interface function might return a null pointer or some type of null handle/reference. If it does, then using a dynamic_cast against RTTI is the safest thing to do to fetch the actual pointer to the generic interface (ex: IInterface*) and return IFlyable* to the client.
Another example is entity-component systems. In that case instead of querying abstract interfaces, we retrieve concrete components (data):
Flight System:
for each object in scene:
{
if object.has<Wings>():
make object fly using object.get<Wings>()
}
Or:
for each wings in scene.query<Wings>()
make wings fly
... something to this effect, and that also implies casting somewhere.
For my domain (VFX, which is somewhat similar to gaming in terms of application and scene state), I've found this type of ECS architecture to be the easiest to maintain. I can only speak from personal experience, but I've been around for a long time and have faced many different architectures. COM is now the most popular style of architecture in VFX, and I used to work on a commercial VFX application (used widely in films, games, archviz and so forth) which used a COM architecture, but I've found the ECS style popular in game engines even easier to maintain than COM for my particular case.
One of the reasons I find ECS so much easier is that the bulk of the systems in this domain, like PhysicsSystem, RenderingSystem, AnimationSystem, etc., boil down to just data transformers, and the ECS model fits beautifully for that purpose without abstractions getting in the way. With COM in this domain, the number of subtypes implementing something like a motion interface, IMotion, might be in the hundreds (ex: a PointLight which implements IMotion along with 5 other interfaces), requiring hundreds of classes implementing different combinations of COM interfaces to maintain individually. The ECS uses composition over inheritance and reduces those hundreds of classes down to just a couple dozen simple component structs which can be combined in endless ways by the entities that compose them, and only a handful of systems have to provide behavior: everything else is just data which the systems loop through as input to then provide some output.
Between legacy codebases that used a bunch of global variables and brute force coding (ex: sprinkling conditionals all over the place instead of using polymorphism), deep inheritance hierarchies, COM, and ECS, in terms of maintainability for my particular domain, I'd say ECS > COM, while deep inheritance hierarchies and brute force coding with global variables all over the place were both incredibly hard to maintain (OOP using deep inheritance with protected data fields is almost as hard to reason about in terms of maintaining invariants as a boatload of global variables IMO, but further can invite the most nightmarish cascading changes spilling across entire hierarchies if designs need to change -- at least the brute force legacy codebase didn't have the cascading problem since it was barely reusing any code to begin with).
COM and ECS are somewhat similar except with COM, the dependencies flow towards central abstractions (COM interfaces provided by COM objects, like IFlyable). With an ECS, the dependencies flow towards central data (components provided by ECS entities, like Wings). At the heart of both is often the idea that we have a bunch of non-homogeneous objects (or "entities") of interest whose provided interfaces or components are not known in advance, since we're accessing them through a non-homogeneous collection (ex: a "Scene"). As a result we need to discover their capabilities at runtime when iterating through this non-homogeneous collection by either querying the collection or the objects individually to see what they provide.
Either way, both involve some type of centralized casting to retrieve either an interface or a component from an entity, and if we have to downcast, then a dynamic_cast is at least the safest way to do that which involves runtime type checking to make sure the cast is valid. And with both ECS and COM, you generally only need one line of code in the entire system which performs this cast.
That said, the runtime checking does have a small cost. Typically if dynamic_cast is used in COM and ECS architectures, it's done in a way so that a std::bad_cast should never be thrown and/or that dynamic_cast itself never returns nullptr (the dynamic_cast is just a sanity check to make sure there are no internal programmer errors, not as a way to determine if an object inherits a type). Another type of runtime check is made to avoid that (ex: just once for an entire query in an ECS when fetching all PosAndVelocity components to determine which component list to use which is actually homogeneous and only stores PosAndVelocity components). If that small runtime cost is non-negligible because you're looping over a boatload of components every frame and doing trivial work to each, then I found this snippet useful from Herb Sutter in C++ Coding Standards:
template<class To, class From> To checked_cast(From* from) {
assert( dynamic_cast<To>(from) == static_cast<To>(from) && "checked_cast failed" );
return static_cast<To>(from);
}
template<class To, class From> To checked_cast(From& from) {
assert( dynamic_cast<To>(from) == static_cast<To>(from) && "checked_cast failed" );
return static_cast<To>(from);
}
It basically uses dynamic_cast as a sanity check for debug builds with an assert, and static_cast for release builds.
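A usage sketch (hypothetical Base/Derived types; note that To is the complete pointer type):
struct Base    { virtual ~Base() = default; };
struct Derived : Base { void frob() {} };

void example(Base* b) {
    // Release builds: a plain static_cast. Debug builds: the assert verifies
    // via dynamic_cast that b really points at a Derived.
    Derived* d = checked_cast<Derived*>(b);
    d->frob();
}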

A program has many systems (classes). Enable a system to call others by class name?

Suppose that I have a game engine.
Let's say it contains Graphic, GamePlay, and Physics system classes.
(The real case has 20+ systems.)
All 3 of them are derived from System.
This is a draft of the simple initialization.
int main() {
Game_Engine* engine = new Game_Engine();
Graphic* sys1= new Graphic(engine); //set to System::engine_pointer
GamePlay* sys2= new GamePlay(engine);
Physics* sys3= new Physics(engine);
engine->addSystem(sys1); //add to Game_Engine's hash map
engine->addSystem(sys2);
engine->addSystem(sys3);
}
Then, I want to make all systems able to call each other.
E.g. Graphic can call GamePlay.
So I designed addSystem() as:
#include <typeindex>
#include <unordered_map>

class Game_Engine {
    std::unordered_map<std::type_index, System*> hashTable;
public:
    void addSystem(System* system) {
        hashTable.emplace(std::type_index(typeid(*system)), system);
    }
    template <class SysXXX>
    SysXXX* getSystem() {
        return static_cast<SysXXX*>(hashTable.at(std::type_index(typeid(SysXXX))));
    }
};
The result is that each System can call any other by using only its class name:
class Graphic : public System {
    void call_me_every_time_step() {
        engine_pointer->getSystem<GamePlay>(); // ->... do something
    }
};
Now, it works as I wished, but
I heard that typeid is bad for performance.
Game_Engine.h now has to #include all Graphic.h, GamePlay.h and Physics.h, so compilation time increases.
(I tried not to include them -> the typeid of the 3 derived Systems returned the wrong results.)
Is it possible to avoid those drawbacks? How?
Are there any other disadvantages?
Is this a bad design in the first place? If so, what is a good design?
(because I have very limited experience in C++.)
Edit 1: The section below responds to gudok's answer
Adding a certain get/set function for each system is what I did.
However, I realized that it becomes harder to manage when there are more systems, at least for me.
I ran away from it and used the template code instead, as above.
For gudok's solution, a single system increases the programmer's work as follows:
add the field declaration in the "GameEngine"
add another function to return a certain system
when renaming a class, e.g. "Graphics" to "Render", using an automatic refactoring tool, I have to rename getGraphics() to getRender() too (to keep the code readable)
Compared with the code in the question, a single system costs only 1 line:
engine->addSystem(new Graphics(engine));
It is not so trivial, especially when most systems are changing names and the number of systems is increasing constantly.
Edit 2 : Response to gudok's enhanced answer
Making the GameEngine derive from SystemHolder<T> can reduce the work per System to 2 places:
: public SystemHolder<Graphics>
and
engine.addSystem<Graphics>(new Graphics());
It is still 2 places, though.
The code in the question uses only 1 place.
Therefore, it is not good enough, but thanks for trying!
What is the reason to use a hash map and typeids instead of storing each of the systems separately in GameEngine? Semantically, all these systems do different things. I'd rather do the following:
class GameEngine {
std::vector<System*> systems;
Graphics* graphics;
Gameplay* gameplay;
Physics* physics;
public:
void setGraphics(Graphics* graphics) {
this->graphics = graphics;
this->systems.push_back(graphics);
}
Graphics* getGraphics() {
return this->graphics;
}
...
};
The idea behind this solution is that:
Each of the systems is different from a semantic point of view. When you access graphics from somewhere, most likely you will use functions specific to Graphics and not functions universal to all Systems. Storing each of the systems separately removes the need for typeids and unnecessary type conversions.
When you need to handle all systems in some uniform way (for example, advancing game time), you use the systems field:
for (auto it = systems.begin(); it != systems.end(); ++it) {
    (*it)->tick();
}
EDIT Here is an enhanced solution. You add a new system by additionally inheriting GameEngine from SystemHolder. Getting and setting instances of a particular System is uniform, using the getSystem<T> and setSystem<T> methods -- as you wanted.
#include <vector>
class System {
public:
virtual ~System() {}
};
class Graphics : public System {};
class Physics: public System {};
template<typename T>
class SystemHolder {
public:
T* getSystem() { return system; }
void setSystem(T* system) { this->system = system; }
private:
T* system;
};
class GameEngine: public SystemHolder<Physics>, public SystemHolder<Graphics> {
public:
template<typename T>
inline void addSystem(T* system) {
systems.push_back(system);
SystemHolder<T>::setSystem(system);
}
template<typename T>
inline T* getSystem() {
return SystemHolder<T>::getSystem();
}
private:
std::vector<System*> systems;
};
int main(int argc, char* argv[]) {
GameEngine engine;
engine.addSystem<Physics>(new Physics());
engine.addSystem<Graphics>(new Graphics());
engine.getSystem<Physics>();
engine.getSystem<Graphics>();
}

Efficient configuration of class hierarchy at compile-time

This question is specifically about C++ architecture on embedded, hard real-time systems. This implies that large parts of the data-structures as well as the exact program-flow are given at compile-time, performance is important and a lot of code can be inlined. Solutions preferably use C++03 only, but C++11 inputs are also welcome.
I am looking for established design-patterns and solutions to the architectural problem where the same code-base should be re-used for several, closely related products, while some parts (e.g. the hardware-abstraction) will necessarily be different.
I will likely end up with a hierarchical structure of modules encapsulated in classes that might then look somehow like this, assuming 4 layers:
Product A           Product B
Toplevel_A          Toplevel_B        (different for A and B, but with common parts)
Middle_generic      Middle_generic    (same for A and B)
Sub_generic         Sub_generic       (same for A and B)
Hardware_A          Hardware_B        (different for A and B)
Here, some classes inherit from a common base class (e.g. Toplevel_A from Toplevel_base) while others do not need to be specialized at all (e.g. Middle_generic).
Currently I can think of the following approaches:
(A): If this was a regular desktop-application, I would use virtual inheritance and create the instances at run-time, using e.g. an Abstract Factory.
Drawback: However, the *_B classes will never be used in product A, and hence all the virtual function calls and members that are only resolved to an address at run-time will lead to quite some overhead.
(B) Using template specialization as inheritance mechanism (e.g. CRTP)
template<class Derived>
class Toplevel { /* generic stuff ... */ };
class Toplevel_A : public Toplevel<Toplevel_A> { /* specific stuff ... */ };
Drawback: Hard to understand.
(C): Use different sets of matching files and let the build-scripts include the right one
// common/toplevel_base.h
class Toplevel_base { /* ... */ };
// product_A/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// product_B/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// build_script.A
compiler -Icommon -Iproduct_A
Drawback: Confusing, tricky to maintain and test.
(D): One big typedef (or #define) file
//typedef_A.h
typedef Toplevel_A Toplevel_to_be_used;
typedef Hardware_A Hardware_to_be_used;
// etc.
// sub_generic.h
class sub_generic {
Hardware_to_be_used the_hardware;
// etc.
};
Drawback: One file to be included everywhere, and still the need for another mechanism to actually switch between different configurations.
(E): A similar, "Policy based" configuration, e.g.
template <class Policy>
class Toplevel {
Middle_generic<Policy> the_middle;
// ...
};
// ...
template <class Policy>
class Sub_generic {
typename Policy::Hardware_to_be_used the_hardware;
// ...
};
// used as
class Policy_A {
typedef Hardware_A Hardware_to_be_used;
};
Toplevel<Policy_A> the_toplevel;
Drawback: Everything is a template now; a lot of code needs to be re-compiled every time.
(F): Compiler switch and preprocessor
// sub_generic.h
class Sub_generic {
#if PRODUCT_IS_A
Hardware_A _hardware;
#endif
#if PRODUCT_IS_B
Hardware_B _hardware;
#endif
};
Drawback: Brrr..., only if all else fails.
Is there any (other) established design-pattern or a better solution to this problem, such that the compiler can statically allocate as many objects as possible and inline large parts of the code, knowing which product is being built and which classes are going to be used?
I'd go for A. Until it's PROVEN that this is not good enough, go for the same decisions as for desktop (well, of course, it may be "obvious" that allocating several kilobytes on the stack, or using global variables that are many megabytes large, is not going to work). Yes, there is SOME overhead in calling virtual functions, but I would go for the most obvious and natural C++ solution FIRST, then redesign if it's not "good enough" (obviously, try to determine performance and such early on, and use tools like a sampling profiler to determine where you are spending time, rather than "guessing" - humans are proven to be pretty poor guessers).
I'd then move to option B if A is proven not to work. This is indeed not entirely obvious, but it is, roughly, how LLVM/Clang solves this problem for combinations of hardware and OS; see:
https://github.com/llvm-mirror/clang/blob/master/lib/Basic/Targets.cpp
First I would like to point out that you basically answered your own question in the question :-)
Next I would like to point out that in C++
the exact program-flow are given at compile-time, performance is
important and a lot of code can be inlined
is called templates. The other approaches that leverage language features as opposed to build system features will serve only as a logical way of structuring the code in your project to the benefit of developers.
Further, as noted in other answers, C is more common for hard real-time systems than C++, and in C it is customary to rely on macros to make this kind of optimization at compile time.
Finally, you noted under your B solution above that template specialization is hard to understand. I would argue that this depends on how you do it, and also on how much experience your team has with C++/templates. I find many "template-ridden" projects to be extremely hard to read, and the error messages they produce to be unholy at best, but I still manage to make effective use of templates in my own projects because I respect the KISS principle while doing it.
So my answer to you is: go with B, or ditch C++ for C.
I understand that you have two important requirements:
Data types are known at compile time
Program-flow is known at compile time
The CRTP wouldn't really address the problem you are trying to solve, as it would allow the HardwareLayer to call methods on the Sub_generic, Middle_generic or TopLevel, and I don't believe that is what you are looking for.
Both of your requirements can be met using the Trait pattern (another reference). Here is an example proving both requirements are met. First, we define empty shells representing two Hardwares you might want to support.
class Hardware_A {};
class Hardware_B {};
Then let's consider a class that describes a general case which corresponds to Hardware_A.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long int64_t;
static int64_t getCPUSerialNumber() {return 0;}
};
Now let's see a specialization for Hardware_B :
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int int64_t;
static int64_t getCPUSerialNumber() {return 1;}
};
Now, here is a usage example within the Sub_generic layer :
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::int64_t int64_t;
int64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And finally, a short main that executes both code paths and use both data types :
int main(int argc, const char * argv[]) {
std::cout << "Hardware_A : " << Sub_generic<Hardware_A>().doSomething() << std::endl;
std::cout << "Hardware_B : " << Sub_generic<Hardware_B>().doSomething() << std::endl;
}
Now if your HardwareLayer needs to maintain state, here is another way to implement the HardwareLayer and Sub_generic layer classes.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 0;
};
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 1;
};
template <typename Hardware>
class Sub_generic : public HardwareLayer<Hardware>
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And here is a last variant where only the Sub_generic implementation changes :
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return hw.getCPUSerialNumber();}
private:
HwLayer hw;
};
On a similar train of thought to F, you could just have a directory layout like this:
Hardware/
common/inc/hardware.h
hardware1/src/hardware.cpp
hardware2/src/hardware.cpp
Simplify the interface to only assume a single hardware exists:
// sub_generic.h
class Sub_generic {
Hardware _hardware;
};
And then only compile the folder that contains the .cpp files for the hardware for that platform.
The benefits to this approach are:
It's simple to understand what's happening and to add a hardware3
hardware.h still serves as your API
It takes away the abstraction from the compiler (for your speed concerns)
Compiler 1 doesn't need to compile hardware2.cpp or hardware3.cpp, which may contain things Compiler 1 can't handle (like inline assembly, or some other Compiler 2 specific thing)
hardware3 might be much more complicated for some reason you haven't considered yet, so giving it a whole directory structure encapsulates it.
Since this is for a hard real-time embedded system, usually you would go for a C type of solution, not C++.
With modern compilers I'd say that the overhead of C++ is not that great, so it's not entirely a matter of performance, but embedded systems tend to prefer C instead of C++.
What you are trying to build would resemble a classic device driver library (like the one for FTDI chips).
The approach there would be (since it's written in C) something similar to your F, but with no compile-time options - you would specialize the code, at runtime, based on something like PID, VID, SN, etc...
Now if you want to use C++ for this, templates should probably be your last option (code readability usually ranks higher than any advantage templates bring to the table). So you would probably go for something similar to A: a basic class inheritance scheme, but no particularly fancy design pattern is required.
Hope this helps...
I am going to assume that these classes only need to be created a single time, and that their instances persist throughout the entire program run time.
In this case I would recommend using the Object Factory pattern since the factory will only get run one time to create the class. From that point on the specialized classes are all a known type.

C++ Help on refactoring a monster class

I have a C background and am a newbie at C++. I have a basic design question. I have a class (I'll call it "chef" b/c the problem I have seems very analogous to this, both in terms of complexity and issues) that basically works like this:
class chef
{
public:
void prep();
void cook();
void plate();
private:
char name;
char dish_responsible_for;
int shift_working;
etc...
};
in pseudo code, this gets implemented along the lines of:
int main() {
chef my_chef;
kitchen_class kitchen;
for (int day = 0; day < 365; day++)
{
kitchen.opens();
....
my_chef.prep();
my_chef.cook();
my_chef.plate();
....
kitchen.closes();
}
}
The chef class here seems to be a monster class, or at least has the potential of becoming one. chef also seems to violate the single responsibility principle, so instead we should have something like:
class employee
{
protected:
char name;
int shift_working;
};
class kitchen_worker : public employee
{
protected:
char dish_responsible_for;
};
class cook_food : public kitchen_worker
{
public:
void cook();
etc...
};
class prep_food : public kitchen_worker
{
public:
void prep();
etc...
};
and
class plater : public kitchen_worker
{
public:
void plate();
};
etc...
I'm admittedly still struggling with how to implement it at run time so that, if for example plater (or "chef in his capacity as plater") decides to go home midway through dinner service, then the chef has to work a new shift.
This seems to be related to a broader question I have that if the same person invariably does the prepping, cooking and plating in this example, what is the real practical advantage of having this hierarchy of classes to model what a single chef does? I guess that runs into the "fear of adding classes" thing, but at the same time, right now or in the foreseeable future I don't think maintaining the chef class in its entirety is terribly cumbersome. I also think that it's in a very real sense easier for a naive reader of the code to see the three different methods in the chef object and move on.
I understand it might threaten to become unwieldy when/if we add methods like "cut_onions()", "cut_carrots()", etc., perhaps each with their own data, but it seems those can be dealt with by making the prep() function, say, more modular. Moreover, it seems that the SRP taken to its logical conclusion would create classes "onion_cutter", "carrot_cutter", etc., and I still have a hard time seeing the value of that, given that somehow the program has to make sure that the same employee cuts the onions and the carrots, which helps with keeping the state variables consistent across methods (e.g., if the employee cuts his finger cutting onions he is no longer eligible to cut carrots), whereas in the monster chef class it seems that all of that gets taken care of.
Of course, I understand that this then becomes less about having a meaningful "object oriented design", but it seems to me that if we have to have separate objects for each of the chef's tasks (which seems unnatural, given that the same person is doing all three functions) then that prioritizes software design over the conceptual model. I feel an object oriented design is helpful here if we want to have, say, "meat_chef", "sous_chef", "three_star_chef" that are likely different people. Moreover, related to the runtime problem, there is an overhead in complexity, it seems, under the strict application of the single responsibility principle: something has to make sure that the underlying data making up the base class employee get changed and that this change is reflected in subsequent time steps.
I'm therefore rather tempted to leave it more or less as is. If somebody could clarify why this would be a bad idea (and if you have suggestions on how best to proceed) I'd be most obliged.
To avoid abusing class hierarchies now and in the future, you should really only use them when an is-a relationship is present. Ask yourself, "is cook_food a kitchen_worker?" It obviously doesn't make sense in real life, and doesn't in code either. "cook_food" is an action, so it might make sense to create an action class, and subclass that instead.
Having a new class just to add new methods like cook() and prep() isn't really an improvement on the original problem anyway - since all you've done is wrap the method inside a class. What you really want is an abstraction for performing any of these actions - so back to the action class.
class action {
public:
    virtual ~action() {}
    virtual void perform_action() = 0;
};

class cook_food : public action {
public:
    void perform_action() override {
        // do cooking;
    }
};
A chef can then be given a list of actions to perform in the order you specify - for example, a std::vector of actions used as a work queue (a std::queue has no iterators, and an abstract action cannot be stored by value, so pointers are used here).
class chef {
    // ...
public:
    void perform_actions(std::vector<std::unique_ptr<action>>& actions) { // needs <vector> and <memory>
        for (auto& a : actions) {
            a->perform_action();
        }
    }
    // ...
};
This is more commonly known as the Strategy Pattern. It promotes the open/closed principle, by allowing you to add new actions without modifying your existing classes.
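A hedged usage sketch of that wiring, assuming the action and chef classes sketched above:
#include <memory>
#include <vector>

int main() {
    std::vector<std::unique_ptr<action>> todays_work;
    todays_work.push_back(std::make_unique<cook_food>());
    // ... push prep, plate, etc. in whatever order the day requires

    chef my_chef;
    my_chef.perform_actions(todays_work);   // the chef just executes whatever strategy it is handed
    return 0;
}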
An alternative approach you could use is a Template Method, where you specify a sequence of abstract steps, and use subclasses to implement the specific behaviour for each one.
class dish_maker {
protected:
virtual void prep() = 0;
virtual void cook() = 0;
virtual void plate() = 0;
public:
void make_dish() {
prep();
cook();
plate();
}
};
class onion_soup_dish_maker : public dish_maker {
protected:
virtual void prep() { ... }
virtual void cook() { ... }
virtual void plate() { ... }
};
Another closely related pattern which might be suitable for this is the Builder Pattern
These patterns can also reduce the Sequential Coupling anti-pattern, as it's all too easy to forget to call some methods, or to call them in the wrong order, particularly if you're doing it multiple times. You could also consider putting your kitchen.opens() and closes() into a similar template method, so that you don't need to worry about closes() being called; a sketch of that follows.
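A sketch of that last suggestion, assuming the kitchen_class from the question with its opens() and closes() methods (the kitchen_day name is hypothetical):
class kitchen_day {
protected:
    virtual void work() = 0;              // the part that varies from day to day
public:
    virtual ~kitchen_day() {}
    void run(kitchen_class& kitchen) {
        kitchen.opens();                  // always called, in the right order
        work();
        kitchen.closes();                 // cannot be forgotten by callers
    }
};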
On creating individual classes for onion_cutter and carrot_cutter, this isn't really the logical conclusion of the SRP, but in fact a violation of it - because you're making classes which are responsible for cutting, and holding some information about what they're cutting. Both cutting onions and carrots can be abstracted into a single cutting action - and you can specify which object to cut, and add a redirection to each individual class if you need specific code for each object.
One step would be to create an abstraction to say something is cuttable. The is-a relationship for subclassing is a candidate here, since a carrot is cuttable.
class cuttable {
public:
    virtual ~cuttable() {}
    virtual void cut() = 0;
};

class carrot : public cuttable {
public:
    void cut() override {
        // specific code for cutting a carrot;
    }
};
The cutting action can take a cuttable object and perform any common cutting action that's applicable to all cuttables, and can also apply the specific cut behaviour of each object.
class cutting_action : public action {
private:
    cuttable* object;
public:
    cutting_action(cuttable* obj) : object(obj) { }
    void perform_action() override {
        // common cutting code
        object->cut(); // specific cutting code
    }
};
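Putting the pieces together, a usage sketch based on the classes above:
#include <memory>
#include <vector>

int main() {
    carrot c;                                         // a concrete cuttable
    std::vector<std::unique_ptr<action>> prep_work;
    prep_work.push_back(std::make_unique<cutting_action>(&c));

    chef my_chef;
    my_chef.perform_actions(prep_work);               // common cutting code runs, then carrot::cut()
    return 0;
}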

C++ Object Design issue: efficiently and safely construct objects and save/load with database

My English is not good enough to explain my problem, but I will try my best.
I used to be a Java programmer but have been using C++ for more than a year. The one thing that always bothers me is the strategy for creating business objects from the network (like through SNMP, web services or other data sources...), saving them to a database, and loading them when the application starts up. Usually my design is like the following:
class Object {
    /* this is just a demonstration; in real code there are all kinds of Objects with relationships */
    friend class DBConnection;
    friend class SNMPConn;
private:
    std::string m_strName;
    //... all kinds of properties
};

class DBConnection {
    int load(Object& obj);
    int save(Object& obj);
    int modify(Object& obj);
    int loadAll(std::vector<Object>& objects);
};

class SNMPConn {
    int load(Object& obj);
    // ...
};
The thing I am not comfortable with is the line "friend class ...". It breaks encapsulation. I found some frameworks, like litesql (sourceforge.net/apps/trac/litesql) and other commercial ones, but these frameworks are difficult to integrate with my existing code. I am trying to do it manually and trying to find a common strategy for this kind of work.
I was a Java developer; design in C++ is the thing I'm not good at. I don't know what the best practice is for this kind of design work.
As I understand this problem (breaking encapsulation when reading from and writing to the DB or the SNMP connection), first you need a proper design to eliminate these "friend"s. Please define an abstract class for connections (i.e. IDBConnection) and also one for persistent objects (i.e. IPersistent). You may use the "Abstract Factory" pattern to create them. Furthermore, isolate the load and save methods into another class and use the "visitor pattern" to initialize or save your objects from/to your DB.
Another point: if you need an embedded DB for your application, use SQLite; there are tons of good C++ wrappers for it. Hope it helps.
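A rough sketch of that direction (all names here are hypothetical; the point is that the object exposes a visit hook instead of befriending every connection type):
#include <string>

// Visitor interface: anything that can read or write an object's persistent state.
class IPersistenceVisitor {
public:
    virtual ~IPersistenceVisitor() {}
    virtual void field(const std::string& name, std::string& value) = 0;
};

class Object {
public:
    // The object decides which fields are exposed; no friend declarations required.
    void accept(IPersistenceVisitor& v) {
        v.field("name", m_strName);
        // ... other properties
    }
private:
    std::string m_strName;
};

// Abstract connection: DB, SNMP, web service, ... all look the same to callers.
class IConnection {
public:
    virtual ~IConnection() {}
    virtual int load(Object& obj) = 0;   // implemented with a concrete reading visitor
    virtual int save(Object& obj) = 0;   // implemented with a concrete writing visitor
};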
Here's how I might do it in pseudo-code:
class Result {
public:
int getField(name);
string getField(name);
}
class Connection {
public:
void save(list<pair<string, string>> properties);
Result query();
}
class DBConnection {
private:
class DBResult : public Result {
}
public:
Result query() {
return ( DBResult );
}
void save
}
class Object {
public:
void load(Result);
void save(Connection) {
// make properties list
connection.save(properties);
}
}
Without Java-style reflection, that's probably how I'd do it without getting into "friend"-ship relationships. Then you're not tightly coupling knowledge of the object's internals into the connection classes.
...
You could also build template functions to do it, but you'd still need a friend relationship.
class Object {
public:
    template<class Conn, class Obj> friend void load(Conn& c, Obj& o);
    template<class Conn, class Obj> friend void save(Conn& c, Obj& o);
};

template<class Conn, class Obj>
void load(Conn& c, Obj& o) {
    // access o's private members and fill them from c
}
I'm not sure which way I'd go. In one respect, you encapsulate load/save logic in your Object classes, which is great for locality, but it might tightly couple your persistence and business logic all in one location.