Pattern for storing multiple types of struct in a C++ std::vector container

I have a data structure which represents a train, which can be made up of many types of car, for example the train engines, a grain car, a passenger car, and so on:
struct TrainCar {
    // ...
    Color color;
    std::string registration_number;
    unsigned long destination_id;
};
struct PowerCar : TrainCar {
    // ...
    const RealPowerCar &engine;
};
struct CargoCar : TrainCar {
    // ...
    const RealCargoCar &cargo;
    bool full;
};
std::vector<TrainCar*> cars;
cars.push_back(new TrainCar(...));
cars.push_back(new TrainCar(...));
cars.push_back(new CargoCar(...));
cars.push_back(new CargoCar(...));
cars.push_back(new CargoCar(...));
An algorithm will iterate through the cars in the train, and decide how to route/shunt each car (whether to keep it in the train, move it to another point in the train, remove it from the train). This code looks like:
std::vector<TrainCar*>::iterator it = cars.begin();
for (; it != cars.end(); ++it) {
    PowerCar *pc = dynamic_cast<PowerCar*>(*it);
    CargoCar *cc = dynamic_cast<CargoCar*>(*it);
    if (pc) {
        // Apply some PowerCar routing specific logic here
        if (start_of_train) {
            // Add to some other data structure
        }
        else if (end_of_train && previous_car_is_also_a_powercar) {
            // Add to some other data structure, remove from another one, check if something else...
        }
        else {
            // ...
        }
    }
    else if (cc) {
        // Apply some CargoCar routing specific logic here
        // Many business logic cases here
    }
}
I am unsure whether this pattern (with the dynamic_casts, and chain of if statements) is the best way to process the list of simple structs of varying types. The use of dynamic_cast seems incorrect.
One option would be to move the routing logic to the structs (so like (*it)->route(is_start_of_car, &some_other_data_structure...)), however I'd like to keep the routing logic together if possible.
Is there a better way of iterating through different types of simple struct (with no methods), or should I keep the dynamic_cast approach?

The standard solution to this is called double-dispatch. Basically, you first wrap your algorithms in separate functions that are overloaded for each type of car:
void routeCar(PowerCar *);
void routeCar(CargoCar *);
Then, you add a route method to car that is pure virtual in the base-class, and implemented in each of the subclasses:
struct TrainCar {
    // ...
    Color color;
    std::string registration_number;
    unsigned long destination_id;

    virtual void route() = 0;
};
struct PowerCar : TrainCar {
    // ...
    const RealPowerCar &engine;

    virtual void route() {
        routeCar(this);
    }
};
struct CargoCar : TrainCar {
    // ...
    const RealCargoCar &cargo;
    bool full;

    virtual void route() {
        routeCar(this);
    }
};
Your loop then looks like this:
std::vector<TrainCar*>::iterator it = cars.begin();
for (; it != cars.end(); ++it) {
    (*it)->route();
}
If you want to choose between different routing-algorithms at run-time, you can wrap the routeCar-functions in an abstract base class and provide different implementations for that. You would then pass the appropriate instance of that class to TrainCar::route.
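A minimal sketch of that last idea (the CarRouter and ShuntingRouter names are invented for illustration, not part of the question):

struct PowerCar;
struct CargoCar;

// Hypothetical routing interface: one overload per concrete car type.
struct CarRouter {
    virtual ~CarRouter() {}
    virtual void routeCar(PowerCar*) = 0;
    virtual void routeCar(CargoCar*) = 0;
};

// A concrete strategy, chosen at run-time.
struct ShuntingRouter : CarRouter {
    virtual void routeCar(PowerCar* pc) { /* PowerCar-specific routing logic */ }
    virtual void routeCar(CargoCar* cc) { /* CargoCar-specific routing logic */ }
};

// TrainCar::route would then take the router as a parameter:
//   virtual void route(CarRouter& r) = 0;                    // in TrainCar
//   virtual void route(CarRouter& r) { r.routeCar(this); }   // in PowerCar / CargoCar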

If the number of classes is manageable, you can give boost::variant a try.
Using "sum types" in C++ is sometimes a mess, so it is either this or double dispatching.

The classical OO solution would be to make all of the relevant functions
virtual in the base class TrainCar, and put the concrete logic in each
class. You say, however, that you'd like to keep the routing logic
together if possible. There are cases where this is justified, and the
classical solution in such cases is a variant union (boost::variant,
for example). It's up to you to decide which is better in your case.
Compromises are possible as well. For example, one can easily imagine a
case where the routing logic is somewhat independent of the car type
(and you don't want to duplicate it in each car type), but it does
depend on a certain number of characteristics of the car type. In this
case, the virtual function in TrainCar could simply return an object
with the necessary dependent information, to be used by the routing
algorithm. This solution has the advantage of reducing the coupling
between the routing and TrainCar to the minimum necessary.
Depending on the nature of this information, and how it is
used, the returned object could be polymorphic, with its inheritance
hierarchy reflecting that of TrainCar; in this case, it must be
allocated dynamically, and managed: std::auto_ptr was designed with
exactly this idiom in mind.
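A minimal sketch of the non-polymorphic version of that compromise (RoutingInfo and its fields are invented here purely for illustration):

// Invented for illustration: just the data the routing algorithm needs.
struct RoutingInfo {
    bool is_power_car;
    bool is_full;                 // only meaningful for cargo cars
    unsigned long destination_id;
};

struct TrainCar {
    virtual ~TrainCar() {}
    virtual RoutingInfo routingInfo() const = 0;  // each car type fills this in
};

// The routing algorithm then works only on RoutingInfo values,
// without ever naming the concrete TrainCar types.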

Related

C++ particle system inheritance

I'm creating a particle system and I want to be able to choose what kind of object will be shown on the screen (simple pixels, or circle shapes, for example). I have one class in which all parameters are stored (ParticleSettings), but it doesn't hold the entities that store points, or circle shapes, etc. I thought that I could create a pure virtual class (ParticlesInterface) as a base class, with derived classes like ParticlesVertex or ParticlesCircles storing those drawable objects. It is something like this:
class ParticlesInterface
{
protected:
std::vector<ParticleSettings> m_particleAttributes;
public:
ParticlesInterface(long int amount = 100, sf::Vector2f position = { 0.0,0.0 });
const std::vector<ParticleSettings>& getParticleAttributes() { return m_particleAttributes; }
...
};
and :
class ParticlesVertex : public ParticlesInterface
{
private:
std::vector<sf::Vertex> m_particleVertex;
public:
ParticlesVertex(long int amount = 100, sf::Vector2f position = { 0.0,0.0 });
std::vector<sf::Vertex>& getParticleVertex() { return m_particleVertex; }
...
};
So... I know that I do not have access to the getParticleVertex() method through polymorphism, and I really want to have that access. I want to ask if there is any better solution for that. I'm having a really hard time deciding how to connect all of this together. I was also thinking about using template classes, but I need dynamic binding, not static. I thought this polymorphism idea would be okay, but I really need access to that method with this approach. Can you please tell me how it should be done? I want to know the best approach here, and whether there is a good answer to this problem if I decide to structure it the way shown above.
From the sounds of it, the ParticlesInterface abstract class can't simply have a virtual getParticleVertex, because that doesn't make sense for the interface in general, only for the specific type ParticlesVertex (or maybe a group of related types).
The recommended approach here is: Any time you need code that does different things depending on the actual concrete type, make those "different things" a virtual function in the interface.
So starting from:
void GraphicsDriver::drawUpdate(ParticlesInterface &particles) {
if (auto* vparticles = dynamic_cast<ParticlesVertex*>(&particles)) {
for (sf::Vertex v : vparticles->getParticleVertex()) {
draw_one_vertex(v, getCanvas());
}
} else if (auto* cparticles = dynamic_cast<ParticlesCircle*>(&particles)) {
for (CircleWidget& c : cparticles->getParticleCircles()) {
draw_one_circle(c, getCanvas());
}
}
// else ... ?
}
(CircleWidget is made up. I'm not familiar with sf, but that's not the point here.)
Since getParticleVertex doesn't make sense for every kind of ParticleInterface, any code that would use it from the interface will necessarily have some sort of if-like check, and a dynamic_cast to get the actual data. The drawUpdate above also isn't extensible if more types are ever needed. Even if there's a generic else which "should" handle everything else, the fact one type needed something custom hints that some other future type or a change to an existing type might want its own custom behavior at that point too. Instead, change from a thing code does with the interface to a thing the interface can be asked to do:
class ParticlesInterface {
// ...
public:
virtual void drawUpdate(CanvasWidget& canvas) = 0;
// ...
};
class ParticlesVertex : public ParticlesInterface {
// ...
void drawUpdate(CanvasWidget& canvas) override;
// ...
};
class ParticlesCircle : public ParticlesInterface {
// ...
void drawUpdate(CanvasWidget& canvas) override;
// ...
};
Now the particles classes are more "alive" - they actively do things, rather than just being acted on.
For another example, say you find ParticlesCircle, but not ParticlesVertex, needs to make some member data updates whenever the coordinates are changed. You could add a virtual void coordChangeCB() {} to ParticlesInterface and call it after each motion model tick or whenever. With the {} empty definition in the interface class, any class like ParticlesVertex that doesn't care about that callback doesn't need to override it.
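In code, that callback idea might look like this (a sketch; coordChangeCB is the invented name used above, and CanvasWidget is made up as before):

class CanvasWidget;  // made up, as above

class ParticlesInterface {
public:
    virtual ~ParticlesInterface() = default;
    virtual void drawUpdate(CanvasWidget& canvas) = 0;
    // Empty default: subclasses that don't care simply don't override it.
    virtual void coordChangeCB() {}
};

class ParticlesCircle : public ParticlesInterface {
public:
    void drawUpdate(CanvasWidget& canvas) override;
    void coordChangeCB() override { /* refresh circle-specific cached data */ }
};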
Do try to keep the interface's virtual functions simple in intent, following the Single Responsibility Principle. If you can't write in a sentence or two what the purpose or expected behavior of the function is in general, it might be too complicated, and maybe it could more easily be thought of in smaller steps. Or if you find the virtual overrides in multiple classes have similar patterns, maybe some smaller pieces within those implementations could be meaningful virtual functions; and the larger function might or might not stay virtual, depending on whether what remains can be considered really universal for the interface.
(Programming best practices are advice, backed by good reasons, but not absolute laws: I'm not going to say "NEVER use dynamic_cast". Sometimes for various reasons it can make sense to break the rules.)

Use case of dynamic_cast

In many places you can read that dynamic_cast means "bad design". But I cannot find any article with appropriate usage (showing good design, not just "how to use").
I'm writing a board game with a board and many different types of cards described with many attributes (some cards can be put on the board). So I decided to break it down to the following classes/interfaces:
class Card {};
class BoardCard : public Card {};
class ActionCard : public Card {};
// Other types of cards - but two are enough
class Deck {
Card* draw_card();
};
class Player {
void add_card(Card* card);
Card const* get_card();
};
class Board {
void put_card(BoardCard const*);
};
Some people suggested that I should use only one class describing a card. But that would mean many mutually exclusive attributes. And in the case of the Board class's put_card(BoardCard const*) - it is part of the interface that I cannot put just any card on the board. If I had only one type of card, I would have to check it inside the method.
I see the flow like the following:
a generic card is in the deck (it's not important what its type is)
a generic card is drawn from the deck and given to a player (the same as above)
if a player has chosen a BoardCard, then it can be put on the board
So I use dynamic_cast before putting a card on the board. I think that using some virtual method is out of the question in this case (additionally, it wouldn't make any sense to add some board-related action to every card).
So my question is: What have I designed badly? How could I avoid dynamic_cast? Would using some type attribute and ifs be a better solution?
P.S.
Any source discussing dynamic_cast usage in the context of design is more than appreciated.
Yes, dynamic_cast is a code smell, but so is adding functions that try to make it look like you have a good polymorphic interface but are actually equal to a dynamic_cast i.e. stuff like can_put_on_board. I'd go as far as to say that can_put_on_board is worse - you're duplicating code otherwise implemented by dynamic_cast and cluttering the interface.
As with all code smells, they should make you wary and they don't necessarily mean that your code is bad. This all depends on what you're trying to achieve.
If you're implementing a board game that will have 5k lines of code, two categories of cards, then anything that works is fine. If you're designing something larger, extensible and possibly allowing for cards being created by non-programmers (whether it's an actual need or you're doing it for research) then this probably won't do.
Assuming the latter, let's look at some alternatives.
You could put the onus of applying the card properly to the card, instead of some external code. E.g. add a play(Context& c) function to the card (the Context being a means to access the board and whatever may be necessary). A board card would know that it may only be applied to a board and a cast would not be necessary.
I would entirely give up using inheritance however. One of its many issues is how it introduces a categorisation of all cards. Let me give you an example:
you introduce BoardCard and ActionCard putting all cards in these two buckets;
you then decide that you want to have a card that can be used in two ways, either as an Action or a Board card;
let's say you solved the issue (through multiple-inheritance, a BoardActionCard type, or any different way);
you then decide you want to have card colours (as in MtG) - how do you do this? Do you create RedBoardCard, BlueBoardCard, RedActionCard etc?
For other examples of why inheritance should be avoided and how to achieve runtime polymorphism differently, you may want to watch Sean Parent's excellent "Inheritance Is the Base Class of Evil" talk. A promising-looking library that implements this sort of polymorphism is dyno, though I have not tried it out yet.
A possible solution might be:
class Card final {
public:
    template <class T>
    Card(T model) :
        model_(std::make_shared<Model<T>>(std::move(model)))
    {}

    // Context here is the game context mentioned above (board access etc.).
    void play(Context& c) const {
        model_->play(c);
    }

    // ... any other functions that can be performed on a card

private:
    class Concept {
    public:
        virtual ~Concept() = default;
        virtual void play(Context& c) const = 0;
    };

    template <class T>
    class Model : public Concept {
    public:
        Model(T model) : model_(std::move(model)) {}

        void play(Context& c) const override {
            model_.play(c);
            // or, if you prefer free functions: play(model_, c);
            // depending on what contract you want to have with implementers
        }

    private:
        T model_;
    };

    std::shared_ptr<const Concept> model_;
};
Then you can either create classes per card type:
class Goblin final {
public:
    void play(Context& c) const {
        // apply effects of card, e.g. take c.board() and put the card there
    }
};
Or implement behaviours for different categories, e.g. have a
template <class T>
void play(const T& card, Context& c);
template and then use enable_if to handle it for different categories:
template <class T, class = std::enable_if_t<IsBoardCard_v<T>>>
void play(const T& card, Context& c) {
    c.board().add(Card(card));
}
where:
template <class T>
struct IsBoardCard {
static constexpr auto value = T::IS_BOARD_CARD;
};
template <class T>
constexpr auto IsBoardCard_v = IsBoardCard<T>::value;
then defining your Goblin as:
class Goblin final {
public:
static constexpr auto IS_BOARD_CARD = true;
static constexpr auto COLOR = Color::RED;
static constexpr auto SUPERMAGIC = true;
};
which would allow you to categorise your cards in many dimensions also leaving the possibility to entirely specialise the behaviour by implementing a different play function.
The example code uses std::shared_ptr to store the model, but you can definitely do something smarter here. I like to use a static-sized storage and only allow Ts of a certain maximum size and alignment to be used. Alternatively you could use a std::unique_ptr (which would disable copying though) or a variant leveraging small-size optimisation.
Why not use dynamic_cast
dynamic_cast is generally disliked because it can be easily abused to completely break the abstractions used, and it is not wise to depend on specific implementations. Of course it may be needed, but really rarely, so nearly everyone follows a rule of thumb - you should probably not use it. It's a code smell that may imply that you should rethink your abstractions, because they may not be the ones your domain needs. Maybe in your game the Board should not have a put_card method - maybe instead the card should have a method play(const PlaySpace*), where Board implements PlaySpace, or something like that. Even the CppCoreGuidelines discourage using dynamic_cast in most cases.
When to use it
Generally, few people ever have problems like this, but I have come across it multiple times already. The problem is called Double (or Multiple) Dispatch. Here is a pretty old, but quite relevant, article about double dispatch (mind the prehistoric auto_ptr):
http://www.drdobbs.com/double-dispatch-revisited/184405527
Also, Scott Meyers in one of his books wrote about building a double-dispatch matrix with dynamic_cast. But, all in all, those dynamic_casts are 'hidden' inside the matrix - users don't know what kind of magic happens inside.
Noteworthy - multiple dispatch is also considered a code smell :-).
Reasonable alternative
Check out the visitor pattern. It can be used as a replacement for dynamic_cast, but it is also something of a code smell.
I generally recommend using dynamic_cast and visitor as a last resort tools for design problems as they break abstraction which increases complexity.
You could apply the principles behind Microsoft's COM and provide a series of interfaces, with each interface describing a set of related behaviors. In COM you determine if a specific interface is available by calling QueryInterface, but in modern C++ dynamic_cast works similarly and is more efficient.
class Board; // forward declaration, defined below

class Card {
public:
    virtual ~Card() {} // must have at least one virtual method for dynamic_cast
};
struct IBoardCard {
    virtual ~IBoardCard() {}
    virtual void put_card(Board* board) = 0;
};
class BoardCard : public Card, public IBoardCard { /* implements put_card */ };
class ActionCard : public Card {};
// Other types of cards - but two are enough
class Deck {
    Card* draw_card();
};
class Player {
    void add_card(Card* card);
    Card const* get_card();
};
class Board {
public:
    void put_card(Card* card) {
        IBoardCard* p = dynamic_cast<IBoardCard*>(card);
        if (p != nullptr) p->put_card(this);
    }
};
That may be a bad example, but I hope you get the idea.
It seems to me that the two types of cards are quite different. The things a board card and an action card can do are mutually exclusive, and the common thing is just that they can be drawn from the deck. Moreover, that's not a thing a card does, it's a player / deck action.
If this is true, a question one should ask is whether they should really descend from a common type, Card. An alternative design would be that of a tagged union: let Card instead be a std::variant<BoardCard, ActionCard...>, and contain an instance of the appropriate type. When deciding what to do with the card, you use a switch on the index() and then std::get<> only the appropriate type. This way you don't need any *_cast operator, and get a complete freedom of what methods (neither of which would make sense for the other types) each type of card supports.
If it's only almost true but not for all types, you can variate slightly: only group together those types of cards that can sensibly be superclassed, and put the set of those common types into the variant.
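A minimal sketch of that tagged-union approach, assuming C++17 (std::get_if is used here instead of switching on index(), but the idea is the same; the Board stand-in is reduced to a stub):

#include <variant>

struct BoardCard { /* board-only attributes */ };
struct ActionCard { /* action-only attributes */ };

// Stand-in for the Board from the question.
struct Board { void put_card(BoardCard const&) { /* ... */ } };

using Card = std::variant<BoardCard, ActionCard>;

void play(Card& card, Board& board) {
    if (auto* bc = std::get_if<BoardCard>(&card)) {
        board.put_card(*bc);   // only a BoardCard can ever reach the Board
    } else if (auto* ac = std::get_if<ActionCard>(&card)) {
        // resolve the action; ActionCard-only members are available through ac
    }
}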
I have always found the use of a cast to be a code smell, and in my experience, 90% of the time the cast was due to bad design.
I have seen dynamic_cast used in a time-critical application where it gave a bigger performance improvement than inheriting from multiple interfaces or retrieving some kind of enumeration (like a type tag) from the object. So the code smelled, but the use of the dynamic cast was worth it in that case.
That said, I would avoid dynamic_cast in your case, as well as multiple inheritance from different interfaces.
Before getting to my solution: your description sounds like it omits a lot of details about the behavior of the cards and the consequences they have on the board and the game itself. I used that as a further constraint, trying to keep things boxed and maintainable.
I would go for composition instead of inheritance. It even gives you the chance to use the card as a 'factory':
it can spawn several game modifiers - one to be applied to the board, another to a specific enemy
the card can be reused - the card could stay in the hands of the player while the effect on the game is detached from it (there is no 1-1 binding between cards and effects)
the card itself can sit back in the deck, while the effects of what it did are still alive on the board
a card can have a representation (drawing methods) and react to touch in one way, while the BoardElement could even be a 3D miniature with animation
See https://en.wikipedia.org/wiki/Composition_over_inheritance for further details. I'd like to quote:
Composition also provides a more stable business domain in the long term as it is less prone to the quirks of the family members. In other words, it is better to compose what an object can do (HAS-A) than extend what it is (IS-A).[1]
A BoardCard/Element can be something like this:
//the card placed on the board.
class BoardElement {
public:
BoardElement() {}
virtual ~BoardElement() {};
//up to you if you want to add a read() methods to read data from the card description (XML / JSON / binary data)
// but that should not be part of the interface. Talking about a potential "Wizard", it's probably more related to the WizardCard - WizardElement relation/implementation
//some helpful methods:
// to be called by the board when placed
virtual void OnBoard() {}
virtual void Frame(const float time) { /*do something time based*/ }
virtual void Draw() {}
// to be called by the board when removed
virtual void RemovedFromBoard() {}
};
The Card could represent something to be used in a deck or in the user's hands; I'll add an interface of that kind:
class Card {
public:
Card() {}
virtual ~Card() {}
//that will be invoked by the user in order to provide something to the Board, or NULL if nothing should be added.
virtual std::shared_ptr<BoardElement> getBoardElement() { return nullptr; }
virtual void Frame(const float time) { /*do something time based*/ }
virtual void Draw() {}
//usefull to handle resources or internal states
virtual void OnUserHands() {}
virtual void Dropped() {}
};
I'd like to add that this pattern allows many tricks inside the getBoardElement() method: acting as a factory (so something is spawned with its own lifetime), returning a Card data member such as a std::shared_ptr<BoardElement> wizard3D; (as an example), or creating a binding between the Card and the BoardElement, as in:
class WizardBoardElement : public BoardElement {
public:
WizardBoardElement(const Card* owner);
// other members omitted ...
};
The binding can be useful in order to read some configuration data or whatever...
So inheritance from Card and from BoardElement will be used to implement the features exposed by the base classes and not for providing other methods that can be reached only through a dynamic_cast.
For completeness:
class Player {
void add(Card* card) {
//..
card->OnUserHands();
//..
}
void useCard(Card* card) {
//..
//someway he's got to retrieve the board...
getBoard()->add(card->getBoardElement());
//..
}
Card const* get_card();
};
class Board {
void add(const std::shared_ptr<BoardElement>& el) {
//..
if (el) el->OnBoard();
//..
}
};
In that way, we have no dynamic_cast, Player and board do simple things without knowing about the inner details of the card they are handled, providing good separations between the different objects and increasing maintainability.
Talking about the ActionCard, and about "effects" that may be applied to other players or your avatar, we can think about having a method like:
enum EffectTarget {
MySelf, //a player on itself, an enemy on itself
MainPlayer,
Opponents,
StrongOpponents
//....
};
// Target: anything an Effect can be applied to (a player, an opponent, the avatar, ...).
class Target {
public:
virtual ~Target() {}
};
class Effect {
public:
//...
virtual void Do(Target* target) = 0;
//...
};
class Card {
public:
//...
struct Modifiers {
EffectTarget eTarget;
std::shared_ptr<Effect> effect;
};
virtual std::vector<Modifiers> getModifiers() { /*...*/ }
//...
};
class Player : public Target {
public:
void useCard(Card* card) {
//..
//someway he's got to retrieve the board...
getBoard()->add(card->getBoardElement());
auto modifiers = card->getModifiers();
for (auto& modifier : modifiers)
{
//this method is supposed to look at the board, at the player and retrieve the instance of the target
Target* target = getTarget(modifier.eTarget);
modifier.effect->Do(target);
}
//..
}
};
That's another example of the same pattern applied to the effects from the card: it avoids the cards having to know details about the board and its status, or about who is playing the card, and keeps the code in Player pretty simple.
Hope this may help,
Have a nice day,
Stefano.
What have I designed badly?
The problem is that you always need to extend that code whenever a new type of Card is introduced.
How could I avoid dynamic_cast?
The usual way to avoid that is to use interfaces (i.e. pure abstract classes):
struct ICard {
virtual bool can_put_on_board() = 0;
virtual ~ICard() {}
};
class BoardCard : public ICard {
public:
bool can_put_on_board() { return true; };
};
class ActionCard : public ICard {
public:
bool can_put_on_board() { return false; };
};
This way you can simply use a reference or pointer to ICard and check if the actual type it holds can be put on the Board.
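Client code can then stay in terms of the interface, along these lines (a sketch; how the Board actually consumes the card is a separate design decision):

#include <memory>
#include <vector>

void place_playable_cards(std::vector<std::unique_ptr<ICard>>& hand) {
    for (auto& card : hand) {
        if (card->can_put_on_board()) {
            // hand the card over to the Board here
        }
    }
}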
But I cannot find any article with appropriate usage (showing good design, not just "how to use").
In general I'd say there aren't any good, real life use cases for dynamic cast.
Sometimes I have used it in debug code for CRTP realizations like
template<typename Derived>
class Base {
public:
void foo() {
#ifndef _DEBUG
static_cast<Derived&>(*this).doBar();
#else
// may throw in debug mode if something is wrong with Derived
// not properly implementing the CRTP
dynamic_cast<Derived&>(*this).doBar();
#endif
}
};
I think that I would end up with something like this (compiled with clang 5.0 with -std=c++17). I'm curious about your comments. Whenever I want to handle different types of Cards, I need to instantiate a dispatcher and supply methods with the proper signatures.
#include <iostream>
#include <typeinfo>
#include <type_traits>
#include <vector>
template <class T, class... Args>
struct any_abstract {
static bool constexpr value = std::is_abstract<T>::value || any_abstract<Args...>::value;
};
template <class T>
struct any_abstract<T> {
static bool constexpr value = std::is_abstract<T>::value;
};
template <class T, class... Args>
struct StaticDispatcherImpl {
template <class P, class U>
static void dispatch(P* ptr, U* object) {
if (typeid(*object) == typeid(T)) {
ptr->do_dispatch(*static_cast<T*>(object));
return;
}
if constexpr (sizeof...(Args)) {
StaticDispatcherImpl<Args...>::dispatch(ptr, object);
}
}
};
template <class Derived, class... Args>
struct StaticDispatcher {
static_assert(not any_abstract<Args...>::value);
template <class U>
void dispatch(U* object) {
if (object) {
StaticDispatcherImpl<Args...>::dispatch(static_cast<Derived *>(this), object);
}
}
};
struct Card {
virtual ~Card() {}
};
struct BoardCard : Card {};
struct ActionCard : Card {};
struct Board {
void put_card(BoardCard const& card, int const row, int const column) {
std::cout << "Putting card on " << row << " " << column << std::endl;
}
};
struct UI : StaticDispatcher<UI, BoardCard, ActionCard> {
void do_dispatch(BoardCard const& card) {
std::cout << "Get row to put: ";
int row;
std::cin >> row;
std::cout << "Get row to put:";
int column;
std::cin >> column;
board.put_card(card, row, column);
}
void do_dispatch(ActionCard& card) {
std::cout << "Handling action card" << std::endl;
}
private:
Board board;
};
struct Game {};
int main(int, char**) {
Card* card;
ActionCard ac;
BoardCard bc;
UI ui;
card = &ac;
ui.dispatch(card);
card = &bc;
ui.dispatch(card);
return 0;
}
As I can't see why you wouldn't use virtual methods, I'm just going to present how I would do it. First I have the ICard interface for all cards. Then I distinguish between the card types (i.e. BoardCard and ActionCard and whatever cards you have). And all the cards inherit from one of the card types.
// Forward declarations: Board and CardVisitor are defined further down;
// in a single translation unit the method bodies that use them would be
// defined after those classes are complete.
class Board;
class CardVisitor;

class ICard {
public:
    virtual ~ICard() {}
    virtual void put_card(Board* board) = 0;
    virtual void accept(CardVisitor& visitor) = 0; // See later, visitor pattern
};

class ActionCard : public ICard {
public:
    void put_card(Board* board) final {
        // std::cout << "You can't put Action Cards on the board" << std::endl;
        // Or just do nothing, if the decision of putting the card on the board
        // is not up to the user
    }
};

class BoardCard : public ICard {
public:
    void put_card(Board* board) final {
        // Whatever implementation puts the card on the board, mb something like:
        board->place_card_on_board(this);
    }
};

class SomeBoardCard : public BoardCard {
public:
    void accept(CardVisitor& visitor) final { // visitor pattern
        visitor.visit(this);
    }
    void print_information(); // see BaseCardVisitor in the next code section
};

class SomeActionCard : public ActionCard {
public:
    void accept(CardVisitor& visitor) final { // visitor pattern
        visitor.visit(this);
    }
    void print_information(); // see BaseCardVisitor
};

class Board {
public:
    void put_card(ICard* const card) {
        card->put_card(this);
    }
    void place_card_on_board(BoardCard* card) {
        // place it on the board
    }
};
I guess the user has to know somehow what card he has drawn, so for that I would implement the visitor pattern. You could also place the accept method, which I placed in the most derived classes/cards, in the card types (BoardCard, ActionCard), depending on where you want to draw the line on what information shall be given to the user.
template <class T>
class BaseCardVisitor {
public:
    void visit(T* card) {
        card->print_information();
    }
};

class CardVisitor : public BaseCardVisitor<SomeBoardCard>,
                    public BaseCardVisitor<SomeActionCard> {
public:
    using BaseCardVisitor<SomeBoardCard>::visit;
    using BaseCardVisitor<SomeActionCard>::visit;
};
class Player {
public:
void add_card(ICard* card);
ICard const* get_card();
void what_is_this_card(ICard* card) {
card->accept(visitor);
}
private:
CardVisitor visitor;
};
Hardly a complete answer, but I just wanted to pitch in with something similar to Mark Ransom's answer. Generally speaking, I've found downcasting to be useful in cases where duck typing is really useful. There can be certain architectures where it is very useful to do things like this:
for each object in scene:
{
if object can fly:
make object fly
}
Or:
for each object in scene that can fly:
make object fly
COM allows this type of thing somewhat like so:
for each object in scene:
{
// Request to retrieve a flyable interface from
// the object.
IFlyable* flyable = object.query_interface<IFlyable>();
// If the object provides such an interface, make
// it fly.
if (flyable)
flyable->fly();
}
Or:
for each flyable in scene.query<IFlyable>:
flyable->fly();
This implies a cast of some form somewhere in the centralized code to query and obtain interfaces (ex: from IUnknown to IFlyable). In such cases, a dynamic cast checking run-time type information is the safest type of cast available. First there might be a general check to see if an object provides the interface that doesn't involve casting. If it doesn't, this query_interface function might return a null pointer or some type of null handle/reference. If it does, then using a dynamic_cast against RTTI is the safest thing to do to fetch the actual pointer to the generic interface (ex: IInterface*) and return IFlyable* to the client.
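One way such a query_interface could be built on top of dynamic_cast (a sketch; SceneObject, IFlyable and Bird are invented names following the example above, not a real COM API):

struct IFlyable {
    virtual ~IFlyable() {}
    virtual void fly() = 0;
};

// Base for all scene objects; the virtual destructor makes the type
// polymorphic, so RTTI is available to query against.
struct SceneObject {
    virtual ~SceneObject() {}

    template <class Interface>
    Interface* query_interface() {
        // Returns null if this object does not implement the interface.
        return dynamic_cast<Interface*>(this);
    }
};

struct Bird : SceneObject, IFlyable {
    void fly() override { /* flap */ }
};

// Usage:
//   if (IFlyable* f = obj->query_interface<IFlyable>()) f->fly();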
Another example is entity-component systems. In that case instead of querying abstract interfaces, we retrieve concrete components (data):
Flight System:
for each object in scene:
{
if object.has<Wings>():
make object fly using object.get<Wings>()
}
Or:
for each wings in scene.query<Wings>()
make wings fly
... something to this effect, and that also implies casting somewhere.
For my domain (VFX, which is somewhat similar to gaming in terms of application and scene state), I've found this type of ECS architecture to be the easiest to maintain. I can only speak from personal experience, but I've been around for a long time and have faced many different architectures. COM is now the most popular style of architecture in VFX and I used to work on a commercial VFX application used widely in films and games and archviz and so forth which used a COM architecture, but I've found ECS as popular in game engines even easier to maintain than COM for my particular case*.
One of the reasons I find ECS so much easier is because the bulk of the systems in this domain like PhysicsSystem, RenderingSystem, AnimationSystem, etc. boil down to just data transformers and the ECS model just fits beautifully for that purpose without abstractions getting in the way. With COM in this domain, the number of subtypes implementing an interface like a motion interface like IMotion might be in the hundreds (ex: a PointLight which implements IMotion along with 5 other interfaces), requiring hundreds of classes implementing different combinations of COM interfaces to maintain individually. With the ECS, it uses a composition model over inheritance, and reduces those hundreds of classes down to just a couple dozen simple component structs which can be combined in endless ways by the entities that compose them, and only a handful of systems have to provide behavior: everything else is just data which the systems loop through as input to then provide some output.
Between legacy codebases that used a bunch of global variables and brute force coding (ex: sprinkling conditionals all over the place instead of using polymorphism), deep inheritance hierarchies, COM, and ECS, in terms of maintainability for my particular domain, I'd say ECS > COM, while deep inheritance hierarchies and brute force coding with global variables all over the place were both incredibly hard to maintain (OOP using deep inheritance with protected data fields is almost as hard to reason about in terms of maintaining invariants as a boatload of global variables IMO, but further can invite the most nightmarish cascading changes spilling across entire hierarchies if designs need to change -- at least the brute force legacy codebase didn't have the cascading problem since it was barely reusing any code to begin with).
COM and ECS are somewhat similar except with COM, the dependencies flow towards central abstractions (COM interfaces provided by COM objects, like IFlyable). With an ECS, the dependencies flow towards central data (components provided by ECS entities, like Wings). At the heart of both is often the idea that we have a bunch of non-homogeneous objects (or "entities") of interest whose provided interfaces or components are not known in advance, since we're accessing them through a non-homogeneous collection (ex: a "Scene"). As a result we need to discover their capabilities at runtime when iterating through this non-homogeneous collection by either querying the collection or the objects individually to see what they provide.
Either way, both involve some type of centralized casting to retrieve either an interface or a component from an entity, and if we have to downcast, then a dynamic_cast is at least the safest way to do that which involves runtime type checking to make sure the cast is valid. And with both ECS and COM, you generally only need one line of code in the entire system which performs this cast.
That said, the runtime checking does have a small cost. Typically if dynamic_cast is used in COM and ECS architectures, it's done in a way so that a std::bad_cast should never be thrown and/or that dynamic_cast itself never returns nullptr (the dynamic_cast is just a sanity check to make sure there are no internal programmer errors, not as a way to determine if an object inherits a type). Another type of runtime check is made to avoid that (ex: just once for an entire query in an ECS when fetching all PosAndVelocity components to determine which component list to use which is actually homogeneous and only stores PosAndVelocity components). If that small runtime cost is non-negligible because you're looping over a boatload of components every frame and doing trivial work to each, then I found this snippet useful from Herb Sutter in C++ Coding Standards:
#include <cassert>

template<class To, class From> To checked_cast(From* from) {
    assert( dynamic_cast<To>(from) == static_cast<To>(from) && "checked_cast failed" );
    return static_cast<To>(from);
}

template<class To, class From> To checked_cast(From& from) {
    assert( &dynamic_cast<To>(from) == &static_cast<To>(from) && "checked_cast failed" );
    return static_cast<To>(from);
}
It basically uses dynamic_cast as a sanity check for debug builds with an assert, and static_cast for release builds.
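Usage looks just like a static_cast (a sketch; Base and Derived are example types):

struct Base { virtual ~Base() {} };
struct Derived : Base { void frob() {} };

void use(Base* b) {
    // We "know" b points at a Derived here; debug builds verify that
    // assumption via the assert inside checked_cast.
    Derived* d = checked_cast<Derived*>(b);
    d->frob();
}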

A better design pattern than factory?

In the code I am now creating, I have an object that can belong to two discrete types, differentiated by serial number. Something like this:
class Chips {
public:
    Chips(int shelf) {m_nShelf = shelf;}
    Chips(string sSerial) {m_sSerial = sSerial;}
    virtual string GetFlavour() = 0;
    virtual int GetShelf() {return m_nShelf;}
protected:
    string m_sSerial;
    int m_nShelf;
};
class Lays : public Chips {
public:
    Lays(string sSerial) : Chips(sSerial) {}
    string GetFlavour()
    {
        if (m_sSerial[0] == '0') return "Cool ranch";
        else return "";
    }
};
class Pringles : public Chips {
public:
    Pringles(string sSerial) : Chips(sSerial) {}
    string GetFlavour()
    {
        if (m_sSerial.find("cool") != string::npos) return "Cool ranch";
        else return "";
    }
};
Now, the obvious choice to implement this would be using a factory design pattern. Checking manually which serial belongs to which class type wouldn't be too difficult.
However, this requires having a class that knows all the other classes and refers to them by name, which is hardly truly generic, especially if I end up having to add a whole bunch of subclasses.
To complicate things further, I may have to keep around an object for a while before I know its actual serial number, which means I may have to write the base class full of dummy functions rather than keeping it abstract and somehow replace it with an instance of one of the child classes when I do get the serial. This is also less than ideal.
Is factory design pattern truly the best way to deal with this, or does anyone have a better idea?
You can create a factory which knows only the Base class, like this:
Add a pure virtual method to the base class: virtual Chips* clone() const = 0; and implement it in every derived class, much like a copy constructor but returning a pointer to a newly allocated copy of the derived object. (If you have a destructor, it should be virtual too.)
now you can define a factory class:
class ChipsFactory {
    std::map<std::string, Chips*> m_chipsTypes;
public:
    ~ChipsFactory() {
        // delete all pointers... I'm assuming all are dynamically allocated.
        for (std::map<std::string, Chips*>::iterator it = m_chipsTypes.begin();
             it != m_chipsTypes.end(); it++) {
            delete it->second;
        }
    }
    // use this method to init every type you have
    void AddChipsType(const std::string& serial, Chips* c) {
        m_chipsTypes[serial] = c;
    }
    // use this to generate an object
    Chips* CreateObject(const std::string& serial) {
        std::map<std::string, Chips*>::iterator it = m_chipsTypes.find(serial);
        if (it == m_chipsTypes.end()) {
            return NULL;
        } else {
            return it->second->clone();
        }
    }
};
Initialize the factory with all the types, and you can get pointers to newly cloned objects of those types from it.
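Usage might then look like this (a sketch; it assumes clone() has been implemented in Lays and Pringles as described above, and the serial strings are made up):

ChipsFactory factory;
// Register one prototype object per serial number the factory should recognise.
factory.AddChipsType("0-12345", new Lays("0-12345"));
factory.AddChipsType("cool-99", new Pringles("cool-99"));

// Later, when a serial number arrives:
Chips* chips = factory.CreateObject("cool-99"); // a clone(), or NULL if unknown
if (chips) {
    // use chips->GetFlavour(), etc.; the caller owns the returned clone
    delete chips;
}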
From the comments, I think you're after something like this:
class ISerialNumber
{
public:
static ISerialNumber* Create( const string& number )
{
// instantiate and return a concrete class that
// derives from ISerialNumber, or NULL
}
virtual void DoSerialNumberTypeStuff() = 0;
};
class SerialNumberedObject
{
public:
bool Initialise( const string& serialNum )
{
m_pNumber = ISerialNumber::Create( serialNum );
return m_pNumber != NULL;
}
void DoThings()
{
m_pNumber->DoSerialNumberTypeStuff();
}
private:
ISerialNumber* m_pNumber;
};
(As this was a question on more advanced concepts, protecting from null/invalid pointer issues is left as an exercise for the reader.)
Why bother with inheritance here? As far as I can see the behaviour is the same for all Chips instances. That behaviour is that the flavour is defined by the serial number.
If the serial number only changes a couple of things, then you can inject or look up the behaviours (std::function) at runtime based on the serial number using a simple map (why complicate things!). This way common behaviours are shared among different chips via their serial number mappings.
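A sketch of that lookup idea (the serial keys and the flavour strings here are invented):

#include <functional>
#include <map>
#include <string>

// Map serial numbers to flavour behaviour; common behaviour is shared
// simply by mapping several serials to the same function.
const std::map<std::string, std::function<std::string()>> flavourBySerial = {
    { "0-12345", []{ return std::string("Cool ranch"); } },
    { "cool-99", []{ return std::string("Cool ranch"); } },
};

std::string flavourFor(const std::string& serial) {
    auto it = flavourBySerial.find(serial);
    return it != flavourBySerial.end() ? it->second() : std::string();
}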
If the serial number changes a LOT of things, then I think you have the design a bit backwards. In that case what you really have is the serial number defining a configuration of the Chips, and your design should reflect that. Like this:
class SerialNumber {
public:
    // Maybe use a builder along with default values
    SerialNumber( .... );
    // All getters, no setters.
    string getFlavour() const;
private:
    string flavour;
    // others (package colour, price, promotion, target country etc...)
};
class Chips {
public:
    // Do not own the serial number... 'tis shared.
    Chips(std::shared_ptr<SerialNumber> poSerial) : m_poSerial{poSerial} {}
    Chips(int shelf, std::shared_ptr<SerialNumber> poSerial) : m_poSerial{poSerial}, m_nShelf{shelf} {}
    string GetFlavour() { return m_poSerial->getFlavour(); }
    int GetShelf() { return m_nShelf; }
protected:
    std::shared_ptr<SerialNumber> m_poSerial;
    int m_nShelf;
};
// stores std::shared_ptr but you could also use one of the shared containers from boost.
Chips pringles{ chipMap.at("standard pringles - sour cream") };
This way once you have a set of SerialNumbers for your products then the product behaviour does not change. The only change is the "configuration" which is encapsulated in the SerialNumber. Means that the Chips class doesn't need to change.
Anyway, somewhere someone needs to know how to build the class. Of course you could use template-based injection as well, but your code would need to inject the correct type.
One last idea. If SerialNumber ctor took a string (XML or JSON for example) then you could have your program read the configurations at runtime, after they have been defined by a manager type person. This would decouple the business needs from your code, and that would be a robust way to future-proof.
Oh... and I would recommend NOT using Hungarian notation. If you change the type of an object or parameter you also have to change the name. Worse, you could forget to change it and others will make incorrect assumptions. Unless you are programming in vim/notepad, the IDE will give you that information in a clearer manner.
@user1158692 - The party instantiating Chips only needs to know about SerialNumber in one of my proposed designs, and that proposed design stipulates that the SerialNumber class acts to configure the Chips class. In that case the person using Chips SHOULD know about SerialNumber because of their intimate relationship. The intimate relationship between the classes is exactly the reason why it should be injected via the constructor. Of course it is very, very simple to change this to use a setter instead if necessary, but this is something I would discourage, due to the relationship it represents.
I really doubt that it is absolutely necessary to create the instances of chips without knowing the serial number. I would imagine that this is an application issue rather than one that is required by the design of the class. Also, the class is not very usable without SerialNumber and if you did allow construction of the class without SerialNumber you would either need to use a default version (requiring Chips to know how to construct one of these or using a global reference!) or you would end up polluting the class with a lot of checking.
As for your complaint regarding the shared_ptr... how on earth do you propose that the ownership semantics and responsibilities are clarified? Perhaps raw pointers would be your solution, but that is dangerous and unclear. The shared_ptr clearly lets designers know that they do not own the pointer and are not responsible for it.

How can I switch element and container types for purpose of benchmarking, without producing a large number of test client applications?

I have a ClientInterface class, that uses the Strategy pattern to organize two complex algorithms conforming to interfaces Abase and Bbase, respectively. The ClientInterface agglomerates (via composition) the data on which the algorithms operate, which needs to conform to the Data interface.
What I tried to do is to have a single ClientInterface class, which is able to choose different Strategies and Data implementations at run-time. The algorithms and data implementations are chosen using the Factory Method which reads the strings from an input file and selects the algorithm and data implementation in the ClientInterface constructor. The run-time choice of data and algorithms is not provided in the code model below.
The Data implementation can be based on a map, a list, an unordered_map , etc. to test how does the efficiency of two complex algorithms (Abase and Bbase implemented Strategies) change with different containers used for the Data.
Additionally, the Data agglomerates different Elements (ElementBase implementations). Different element implementations will also have a significant impact on the efficiency of the ClientInterface, but the Elements are really disjoint types with implementations coming from different libraries. I know this for a fact, since profiling the existing application shows the Element operation to be one of the bottlenecks.
I know that if I use polymorphism with containers, there is "boost/ptr_container" out there, but the Data will store hundreds of thousands, if not millions of Elements. Using polymorphism for Elements in this case will have a significant overhead on the ClientInterface, but if I choose to make data a class Template for the Element type, I will end up statically defining the ClientInterface class, which means producing a client application per each Element type at least.
Can I assume that for the same number of Elements and the ClientInterface configuration obtained at run-time, the overhead induced by the use of polymorphism for the Element type will have the same impact on all configurations of the Data and Algorithm implementations? In this case, I can run the automated tests, decide on the configuration of the Data implementation and the Element implementation, and define a statically configured EfficientClientInterface to be used in the productive code?
Goal: I have a test harness prepared, and what I am trying to do is automate the testing over the family of test cases, since changing the Algorithms and Elements at run-time allows me to use a single application in a loop, configured at run-time and whose output is measured for efficiency. In the real implementation, I am dealing with at least 6 algorithm interfaces, 3-4 Data implementations, and I estimate at least 3 Element implementations.
So, my questions are:
1) How can an Element support different operations when overloading does not work for return types? If I make the operation a template, it needs to be defined at compile-time, which messes with my automated testing procedure.
2) How can I design this code better to achieve the goal?
3) Is there a better overall approach to this problem?
Here is the code model:
#include <iostream>
#include <memory>
class ElementOpResultFirst
{};
class ElementOpResultSecond
{};
class ElementBase
{
public:
// Overloading does not allow different representation of the solution for the element operation.
virtual ElementOpResultFirst elementOperation() = 0;
//virtual ElementOpResultSecond elementOperation() = 0;
};
class InterestingElement
:
public ElementBase
{
public:
ElementOpResultFirst elementOperation()
{
// Implementation dependant operation code.
return ElementOpResultFirst();
}
//ElementOpResultSecond elementOperation()
//{
//// Implementation dependant operation code.
//return ElementOpResultSecond();
//}
};
class EfficientElement
:
public ElementBase
{
public:
ElementOpResultFirst elementOperation()
{
// Implementation dependant operation code.
return ElementOpResultFirst();
}
//ElementOpResultSecond elementOperation()
//{
//// Implementation dependant operation code.
//return ElementOpResultSecond();
//}
};
class Data
{
public:
virtual void insertElement(const ElementBase&) = 0;
virtual const ElementBase& getElement(int key) = 0;
};
class DataConcreteMap
:
public Data
{
// Map implementation
public:
void insertElement(const ElementBase&)
{
// Insert element into the Map implementation.
}
const ElementBase& getElement(int key)
{
// Get element from the Map implementation.
}
};
class DataConcreteVector
:
public Data
{
// Vector implementation
public:
void insertElement(const ElementBase&)
{
// Insert element into the vector implementation.
}
const ElementBase& getElement(int key)
{
// Get element from the Vector implementation
}
};
class Abase
{
public:
virtual void aFunction() = 0;
};
class Aconcrete
:
public Abase
{
public:
virtual void aFunction()
{
std::cout << "Aconcrete::function() " << std::endl;
}
};
class Bbase
{
public:
virtual void bFunction(Data& data) = 0;
};
class Bconcrete
:
public Bbase
{
public:
virtual void bFunction(Data& data)
{
data.getElement(0);
std::cout << "Bconcrete::function() " << std::endl;
}
};
// Add a static abstract factory for algorithm and data generation.
class ClientInterface
{
std::unique_ptr<Data> data_;
std::unique_ptr<Abase> algorithmA_;
std::unique_ptr<Bbase> algorithmB_;
public:
ClientInterface()
:
// A Factory Method is defined for Data, Abase and Bbase that
// produces the concrete type based on an entry in a text-file.
data_ (std::unique_ptr<Data> (new DataConcreteMap())),
algorithmA_(std::unique_ptr<Abase> (new Aconcrete())),
algorithmB_(std::unique_ptr<Bbase> (new Bconcrete()))
{}
void aFunction()
{
return algorithmA_->aFunction();
}
void bFunction()
{
return algorithmB_->bFunction(*data_);
}
};
// Single client code: both for testing and final version.
int main()
{
ClientInterface cli;
cli.aFunction();
cli.bFunction();
return 0;
};
What I tried to do is to have a single ClientInterface class, which is
able to choose different Strategies and Data implementations at
run-time. The algorithms and data implementations are chosen using the
Factory Method which reads the strings from an input file and selects
the algorithm and data implementation in the ClientInterface
constructor. The run-time choice of data and algorithms is not
provided in the code model below.
Sounds like you have the basis for some of it here: either just produce a set of test files that supply the right different sets of inputs, or refactor the Factory function so that reading the file and interpreting the strings are separate, so you can call your factory function internals with a string from the code.
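For instance, something along these lines (a sketch, using the Data classes from the code model above; the makeData name is an assumption):

#include <memory>
#include <stdexcept>
#include <string>

// Factory internals take the already-parsed string, so tests can call this
// directly without going through an input file.
std::unique_ptr<Data> makeData(const std::string& name) {
    if (name == "map")    return std::unique_ptr<Data>(new DataConcreteMap());
    if (name == "vector") return std::unique_ptr<Data>(new DataConcreteVector());
    throw std::invalid_argument("unknown Data implementation: " + name);
}

// The file-reading wrapper then just parses the file and forwards the strings.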
Can I assume that for the same number of Elements and the
ClientInterface configuration obtained at run-time, the overhead
induced by the use of polymorphism for the Element type will have the
same impact on all configurations of the Data and Algorithm
implementations? In this case, I can run the automated tests, decide
on the configuration of the Data implementation and the Element
implementation, and define a statically configured
EfficientClientInterface to be used in the productive code?
I don't think you can make that assumption. Different implementations may well have different effects on the algorithms - copying a 100 byte string is significantly harder than copying a 4 byte integer, for example. So what the data is, and how it's organized will have some effect on the work you do. Of course, since you haven't described in much detail what your Elements actually contain, it's all guesswork.
1) How can an Element support different operations when overloading is
not working for return types? If I make the operation a template, it
needs to be defined at compile-time, which messes with my automated
testing procedure.
Make a factory class that returns an ElementBase reference or pointer? That's my immediate reaction to this question, but again, the detail in your question is sufficiently vague that it's hard to say for sure.
In the real application, how does this work? If it is done by templates, then you'd better implement the test code with templates too, and fill it out with a selection of realistic variations on what you think the real system is likely to do.
2) How can I design this code better to achieve the goal?
Try to reuse the production code?
3) Is there a better overall approach to this problem?
Not sure yet.

Practical use of dynamic_cast?

I have a pretty simple question about the dynamic_cast operator. I know this is used for run time type identification, i.e., to know about the object type at run time. But from your programming experience, can you please give a real scenario where you had to use this operator? What were the difficulties without using it?
Toy example
Noah's ark shall function as a container for different types of animals. As the ark itself is not concerned about the difference between monkeys, penguins, and mosquitoes, you define a class Animal, derive the classes Monkey, Penguin, and Mosquito from it, and store each of them as an Animal in the ark.
Once the flood is over, Noah wants to distribute animals across earth to the places where they belong and hence needs additional knowledge about the generic animals stored in his ark. As one example, he can now try to dynamic_cast<> each animal to a Penguin in order to figure out which of the animals are penguins to be released in the Antarctic and which are not.
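In code, the toy example might look roughly like this (a sketch):

#include <memory>
#include <vector>

struct Animal { virtual ~Animal() {} };
struct Monkey   : Animal {};
struct Penguin  : Animal {};
struct Mosquito : Animal {};

void release_penguins(std::vector<std::unique_ptr<Animal>>& ark) {
    for (auto& animal : ark) {
        // Only penguins pass this test; every other animal yields a null pointer.
        if (Penguin* p = dynamic_cast<Penguin*>(animal.get())) {
            // ship p off to the Antarctic
        }
    }
}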
Real life example
We implemented an event monitoring framework, where an application would store runtime-generated events in a list. Event monitors would go through this list and examine those specific events they were interested in. Event types were OS-level things such as SYSCALL, FUNCTIONCALL, and INTERRUPT.
Here, we stored all our specific events in a generic list of Event instances. Monitors would then iterate over this list and dynamic_cast<> the events they saw to the types they were interested in. All others (those for which the cast fails) are ignored.
Question: Why can't you have a separate list for each event type?
Answer: You can do this, but it makes extending the system with new events as well as new monitors (aggregating multiple event types) harder, because everyone needs to be aware of the respective lists to check for.
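Schematically, a monitor interested only in system calls would look something like this (a sketch; the event type names follow the description above):

#include <string>
#include <vector>

struct Event { virtual ~Event() {} };
struct SyscallEvent      : Event { std::string name; };
struct FunctionCallEvent : Event { /* ... */ };
struct InterruptEvent    : Event { /* ... */ };

// A monitor that only cares about SYSCALL events.
void syscall_monitor(const std::vector<Event*>& events) {
    for (Event* e : events) {
        if (const SyscallEvent* sc = dynamic_cast<const SyscallEvent*>(e)) {
            // examine sc->name, record statistics, etc.
        }
        // all other event types are ignored
    }
}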
A typical use case is the visitor pattern:
struct Visitor;

struct Element
{
    virtual ~Element() { }
    void accept(Visitor & v);   // defined below, once Visitor is complete
};

struct Visitor
{
    virtual void visit(Element * e) = 0;
    virtual ~Visitor() { }
};

inline void Element::accept(Visitor & v)
{
    v.visit(this);
}

struct RedElement : Element { };
struct BlueElement : Element { };
struct FifthElement : Element { };

struct MyVisitor : Visitor
{
    virtual void visit(Element * e)
    {
        if (RedElement * p = dynamic_cast<RedElement*>(e))
        {
            // do things specific to Red
        }
        else if (BlueElement * p = dynamic_cast<BlueElement*>(e))
        {
            // do things specific to Blue
        }
        else
        {
            // error: visitor doesn't know what to do with this element
        }
    }
};
Now if you have some Element & e;, you can make MyVisitor v; and say e.accept(v).
The key design feature is that if you modify your Element hierarchy, you only have to edit your visitors. The pattern is still fairly complex, and only recommended if you have a very stable class hierarchy of Elements.
Imagine this situation: You have a C++ program that reads and displays HTML. You have a base class HTMLElement which has a pure virtual method displayOnScreen. You also have a function called renderHTMLToBitmap, which draws the HTML to a bitmap. If each HTMLElement has a vector<HTMLElement*> children;, you can just pass the HTMLElement representing the element <html>. But what if a few of the subclasses need special treatment, like <link> for adding CSS. You need a way to know if an element is a LinkElement so you can give it to the CSS functions. To find that out, you'd use dynamic_cast.
The problem with dynamic_cast and polymorphism in general is that it's not terribly efficient. When you add vtables into the mix, it only gets worse.
When you add virtual functions to a base class, calling them means going through a few layers of function pointers and memory areas. That will never be as efficient as a plain call instruction.
Edit: In response to Andrew's comment below, here's a new approach: instead of dynamic casting to the specific element type (LinkElement), you have another abstract subclass of HTMLElement called ActionElement that overrides displayOnScreen with a function that displays nothing, and adds a new pure virtual function: virtual void doAction() const = 0. The dynamic_cast is changed to test for ActionElement and just calls doAction(). You'd have the same kind of subclass, GraphicalElement, with a virtual method displayOnScreen().
Edit 2: Here's what a "rendering" method might look like:
void render(HTMLElement& root) {
    for (vector<HTMLElement*>::iterator i = root.children.begin(); i != root.children.end(); i++) {
        if (dynamic_cast<ActionElement*>(*i) != NULL) // Is an ActionElement
        {
            ActionElement* ae = dynamic_cast<ActionElement*>(*i);
            ae->doAction();
            render(*ae);
        }
        else if (dynamic_cast<GraphicalElement*>(*i) != NULL) // Is a GraphicalElement
        {
            GraphicalElement* ge = dynamic_cast<GraphicalElement*>(*i);
            ge->displayToScreen();
            render(*ge);
        }
        else
        {
            // Error
        }
    }
}
Operator dynamic_cast solves the same problem as dynamic dispatch (virtual functions, visitor pattern, etc): it allows you to perform different actions based on the runtime type of an object.
However, you should always prefer dynamic dispatch, except perhaps when the number of dynamic_cast you'd need will never grow.
Eg. you should never do:
if (auto v = dynamic_cast<Dog*>(animal)) { ... }
else if (auto v = dynamic_cast<Cat*>(animal)) { ... }
...
for maintainability and performance reasons, but you can do eg.
for (MenuItem* item: items)
{
if (auto submenu = dynamic_cast<Submenu*>(item))
{
auto items = submenu->items();
draw(context, items, position); // Recursion
...
}
else
{
item->draw_icon();
item->setup_accelerator();
...
}
}
which I've found quite useful in this exact situation: you have one very particular subhierarchy that must be handled separately, this is where dynamic_cast shines. But real world examples are quite rare (the menu example is something I had to deal with).
dynamic_cast is not intended as an alternative to virtual functions.
dynamic_cast has a non-trivial performance overhead (or so I think) since the whole class hierarchy has to be walked through.
dynamic_cast is similar to the 'is' operator of C# and the QueryInterface of good old COM.
So far I have found one real use of dynamic_cast:
(*) You have multiple inheritance, and to locate the target of the cast the compiler has to walk the class hierarchy up and then down (or down and then up, if you prefer). This means that the target of the cast is in a parallel branch relative to where the source of the cast is in the hierarchy. I think there is NO other way to do such a cast.
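That cross-cast (casting sideways to a sibling branch) is something only dynamic_cast can do, e.g. (a sketch; Reader, Writer and File are example types):

struct Reader { virtual ~Reader() {} /* ... */ };
struct Writer { virtual ~Writer() {} /* ... */ };
struct File : Reader, Writer { /* ... */ };

void demo(Reader* r) {
    // If the complete object also derives from Writer (as File does),
    // this sideways cast succeeds; static_cast cannot perform it.
    if (Writer* w = dynamic_cast<Writer*>(r)) {
        // use w
    }
}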
In all other cases, you just use some base class virtual to tell you what type of object you have, and ONLY THEN you dynamic_cast it to the target class so you can use some of its non-virtual functionality. Ideally there should be no non-virtual functionality, but what the heck, we live in the real world.
Doing things like:
if (v = dynamic_cast(...)){} else if (v = dynamic_cast(...)){} else if ...
is a performance waste.
Casting should be avoided when possible, because it is basically saying to the compiler that you know better, and it is usually a sign of a weaker design decision.
However, you might come across situations where the abstraction level was a bit too high for 1 or 2 sub-classes, where you have the choice to change your design or to solve it by checking the subclass with dynamic_cast and handling it in a separate branch. The trade-off is between adding extra time and risk now against extra maintenance issues later.
In most situations where you are writing code in which you know the type of the entity you're working with, you just use static_cast as it's more efficient.
Situations where you need dynamic_cast typically arise (in my experience) from a lack of foresight in design - typically where the designer fails to provide an enumeration or id that allows you to determine the type later in the code.
For example, I've seen this situation in more than one project already:
You may use a factory where the internal logic decides which derived class the user wants rather than the user explicitly selecting one. That factory, in a perfect world, returns an enumeration which will help you identify the type of returned object, but if it doesn't you may need to test what type of object it gave you with a dynamic_cast.
Your follow-up question would obviously be: Why would you need to know the type of object that you're using in code using a factory?
In a perfect world, you wouldn't - the interface provided by the base class would be sufficient for managing all of the factories' returned objects to all required extents. People don't design perfectly though. For example, if your factory creates abstract connection objects, you may suddenly realize that you need to access the UseSSL flag on your socket connection object, but the factory base doesn't support that and it's not relevant to any of the other classes using the interface. So, maybe you would check to see if you're using that type of derived class in your logic, and cast/set the flag directly if you are.
It's ugly, but it's not a perfect world, and sometimes you don't have time to refactor an imperfect design fully in the real world under work pressure.
The dynamic_cast operator is very useful to me.
I especially use it with the Observer pattern for event management:
#include <vector>
#include <iostream>
using namespace std;
class Subject; class Observer; class Event;
class Event { public: virtual ~Event() {}; };
class Observer { public: virtual void onEvent(Subject& s, const Event& e) = 0; };
class Subject {
private:
vector<Observer*> m_obs;
public:
void attach(Observer& obs) { m_obs.push_back(& obs); }
public:
void notifyEvent(const Event& evt) {
for (vector<Observer*>::iterator it = m_obs.begin(); it != m_obs.end(); it++) {
if (Observer* const obs = *it) {
obs->onEvent(*this, evt);
}
}
}
};
// Define a model with events that contain data.
class MyModel : public Subject {
public:
class Evt1 : public Event { public: int a; string s; };
class Evt2 : public Event { public: float f; };
};
// Define a first service that processes both events with their data.
class MyService1 : public Observer {
public:
virtual void onEvent(Subject& s, const Event& e) {
if (const MyModel::Evt1* const e1 = dynamic_cast<const MyModel::Evt1*>(& e)) {
cout << "Service1 - event Evt1 received: a = " << e1->a << ", s = " << e1->s << endl;
}
if (const MyModel::Evt2* const e2 = dynamic_cast<const MyModel::Evt2*>(& e)) {
cout << "Service1 - event Evt2 received: f = " << e2->f << endl;
}
}
};
// Define a second service that only deals with the second event.
class MyService2 : public Observer {
public:
virtual void onEvent(Subject& s, const Event& e) {
// Nothing to do with Evt1 in Service2
if (const MyModel::Evt2* const e2 = dynamic_cast<const MyModel::Evt2*>(& e)) {
cout << "Service2 - event Evt2 received: f = " << e2->f << endl;
}
}
};
int main(void) {
MyModel m; MyService1 s1; MyService2 s2;
m.attach(s1); m.attach(s2);
MyModel::Evt1 e1; e1.a = 2; e1.s = "two"; m.notifyEvent(e1);
MyModel::Evt2 e2; e2.f = .2f; m.notifyEvent(e2);
}
Contract Programming and RTTI shows how you can use dynamic_cast to allow objects to advertise what interfaces they implement. We used it in my shop to replace a rather opaque metaobject system. Now we can clearly describe the functionality of objects, even if the objects are introduced by a new module several weeks/months after the platform was 'baked' (though of course the contracts need to have been decided on up front).