Cross-Casting To Get Another Interface Considered Bad Design? - c++

I've been searching around but can't find much on this: would cross-casting from one interface to another be considered bad design? Here is a sample of the code I'm using:
class IShip {
// strictly ship_like interface
// i.e. move, attack, dock, etc.
};
class Sim_object {
// all game objects are derived from this and represents component in composite pattern
// get_name()
// get_location()
// add
// remove
// etc.
};
template<typename T>
class Group : public Sim_object {
// composite functions
// add
// remove
// display
// map<T> container;
};
class Ship_group : public Group<IShip>, public IShip {
// added IShip functionality
};
class Ship : public Sim_object, public IShip {
// actual ship object
};
Anyway, I'm using MVC, where my controller manipulates IShip objects and, depending on whether they are composites or leaves, performs some function. My question is that at times I need to go from IShip to Sim_object to get at the other interface (which requires a dynamic_cast). Would this be considered bad design/practice? I didn't really want to pollute the IShip interface just to get access to the Sim_object commands.

Casting generally implies a bad design, unless the cast type is already known in that context. For example, if you have an interface IRenderer that draws textures represented by the interface ITexture, and you have an OpenGL implementation with an OpenGLRenderer and an OpenGLTexture, then casting the ITexture to OpenGLTexture inside OpenGLRenderer wouldn't be a design issue.
If you really need to cast the IShip to a Sim_object, it is reasonable to think that IShip should actually be a Sim_object.
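For illustration, here is a minimal sketch of that suggestion, using the names from the question. Note that once IShip derives from Sim_object, Ship_group would inherit Sim_object along two paths (via Group<IShip> and via IShip), so both paths would need virtual inheritance to share a single Sim_object subobject:
class Sim_object {
public:
    virtual ~Sim_object() {}
    // get_name(), get_location(), add, remove, etc.
};
class IShip : public virtual Sim_object {
public:
    // move, attack, dock, etc.
    // The controller can now reach the Sim_object members through an
    // IShip* without any cross-cast.
};
class Ship : public IShip {
    // concrete ship; inherits Sim_object through IShip
};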

Related

Better solution than dynamic_cast in C++

I have a class hierarchy that I designed for a project of mine, but I am not sure how to go about implement part of it.
Here is the class hierarchy:
class Shape { };
class Colored { // Only pure virtual functions
};
class Square : public Shape { };
class Circle : public Shape { };
class ColoredSquare : public Square, public Colored { };
class ColoredCircle : public Circle, public Colored { };
In part of my project, I have a std::vector of different types of shapes. In order to run an algorithm though, I need to put them in a std::vector of colored objects (all of which are derived types of different concrete shapes), so I need a method to cast a Square into a ColoredSquare and a Circle into a ColoredCircle at runtime.
The tricky thing is that the 'shape' classes are in a different library than the 'colored' classes.
What is the best method to accomplish this? I have thought about doing a dynamic_cast check, but if there is a better way, I would rather go with that.
Edit 1:
Here's a bit better of an Example:
class Traceable {
public:
// All virtual functions
virtual bool intersect(const Ray& r) = 0;
// ...
};
class TraceableSphere : public Sphere, public Traceable {
};
class IO {
public:
// Reads shapes from a file, constructs new concrete shapes, and returns them to
// whatever class needs them.
std::vector<Shape*> shape_reader(std::string file_name);
};
class RayTracer {
public:
void init(const std::vector<Shape*>& shapes);
void run();
private:
std::vector<Traceable*> traceable_shapes;
};
void RayTracer::init(const std::vector<Shape*>& shapes) {
// ??? traceable_shapes <- shapes
}
void RayTracer::run() {
// Do algorithm
}
You could use the decorator pattern:
class ColorDecorator : public Colored
{
public:
    ColorDecorator(Shape* shape) : m_shape(shape) {}
    ... // forward/implement whatever you want
private:
    Shape* m_shape;
};
If you want to store a Square in a Colored vector, wrap it in such a decorator.
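For illustration, a rough usage sketch (this assumes ColorDecorator forwards or implements the Colored interface and that the wrapped shape outlives the decorator):
Square square;
std::vector<Colored*> colored_objects;
// The plain Square gains a Colored "face" through the decorator and can
// now be stored in the Colored container.
colored_objects.push_back(new ColorDecorator(&square));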
Whether this makes sense is questionable though, it depends on your design and the alternatives. Just in case, also check out the visitor pattern (aka double dispatch) which you could use to just visit a subset of objects in a container or treat them differently depending on their type.
It looks like you are going to design the class library in an "is-a" style; welcome to Inheritance Hell.
Can you elaborate a bit about your "algorithm" ?
Typically it is bad design if you need to "type-test" on objects, since that is what you want to avoid with polymorphism. So the object should provide the proper implementation the algorithm uses (design-pattern: "strategy"), advanced concepts utilize "policy-based class design".
With careful design, you can avoid casting. In particular, pay attention to the SRP (single responsibility principle). Implement methods carefully so that each uses a single interface to achieve a single goal/fulfill a single responsibility. You have not posted anything about the algorithms or how the objects will be used. Below is a hypothetical sample design:
class A {
public:
void doSomeThing();
};
class B{
public:
void doSomeOtherThing();
};
class C:public A,public B{};
void f1( A* a){
//some operation
a->doSomeThing();
//more operation
}
void f2(B* b){
//some operation
b->doSomeOtherThing();
//more operation
}
int main(int argc, char* argv[])
{
C c;
f1(&c);
f2(&c);
return 0;
}
Note the use of the object c in different contexts. The idea is to use only the interface of C that is relevant for a specific purpose. This example could have classes instead of the functions f1 and f2. For example, if you have some algorithm classes that do some operation using the objects in the inheritance hierarchy, you should create those classes so that each performs a single responsibility, which most of the time requires only a single interface, and then you can create/pass objects as instances of that interface only.
Object-oriented programming only makes sense if all implementations of an interface implement the same operations in a different way. Object-orientation is all about operations. You have not shown us any operations, so we cannot tell you if object-orientation even makes sense for your problem at all. You do not have to use object-oriented programming if it doesn't make sense, especially in C++, which offers a few other ways to manage code.
As for dynamic_cast -- in well-designed object-oriented code, it should be rare. If you really need to know the concrete type in some situation (and there are such situations in real-life software engineering, especially when you maintain legacy code), then it's the best tool for the job, and much cleaner than trying to reinvent the wheel by putting something like virtual Concrete* ToConcrete() in the base class.
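In that spirit, the asker's init could be written with one dynamic_cast per shape; only objects whose concrete type also derives from Traceable end up in traceable_shapes. This is only a sketch and assumes Shape is polymorphic (it has at least one virtual function, e.g. a virtual destructor), which dynamic_cast requires:
void RayTracer::init(const std::vector<Shape*>& shapes) {
    for (Shape* shape : shapes) {
        // Cross-cast: yields a non-null pointer only if the concrete object
        // also derives from Traceable (e.g. TraceableSphere).
        if (Traceable* t = dynamic_cast<Traceable*>(shape)) {
            traceable_shapes.push_back(t);
        }
    }
}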
I think the simplest and cleanest solution for you would be something like the following, similar to what Chris suggests at the end.
class Colored; // forward declaration (defined just below)
class Shape {
public:
    virtual Colored *getColored() {
        return NULL;
    }
};
class Colored { // Only pure virtual functions
};
class Square : public Shape { };
class Circle : public Shape { };
class ColoredSquare : public Square, public Colored {
public:
    virtual Colored *getColored() {
        return this;
    }
};
class ColoredCircle : public Circle, public Colored {
public:
    virtual Colored *getColored() {
        return this;
    }
};
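With that in place, a container of Colored pointers can be filled without any dynamic_cast. A sketch, assuming an IO instance named io as in the question (the file name is made up):
std::vector<Shape*> shapes = io.shape_reader("shapes.txt");
std::vector<Colored*> colored_shapes;
for (Shape* s : shapes) {
    // getColored() returns NULL for plain shapes and the Colored part otherwise.
    if (Colored* c = s->getColored()) {
        colored_shapes.push_back(c);
    }
}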
I do not completely understand this statement though
" The tricky thing is that the 'shape' classes are in a different library than the 'colored' classes."
How does this not allow you to do what's being suggested here (but still allow you to create a class ColoredSquare)?

C++ - Recommended way to allow objects in a class to access the class they are in

I have a class Game that contains a class EnemyManager. EnemyManager deals with spawning enemies and the logic behind it. The problem is that EnemyManager needs access to functions and other objects in the Game class. I can think of two ways to handle this:
1) Pass the address of the Game object (this) as one of the arguments to the EnemyManager.
2) Declare a global pointer to a Game object, set it when initializing the Game class, and then extern it into enemymanager.cpp.
Which is the more advisable way to do this?
Whenever I encounter situations like this I review the overall design of the related classes. If EnemyManager is a member of a Game object and needs to call things within Game, maybe those functions in Game can be factored out into a separate component. If something you are writing is beginning to feel overly-complex or like a hack it's usually time to do some factoring.
When dealing with object-oriented designs, it is typically good to think about who will act how on what to find a first version of a design. After having written this version, one often finds the weaknesses and rewrites it for the second iteration.
So, in this case, the Game class manages the world (I assume) and offers different ways to manipulate it. Your EnemyManager manages one aspect of the world, enemies, but they do live inside the world.
class Enemy {
public:
Enemy(Location& location, int hitpoints);
bool Move(Direction& direction);
};
class Game {
public:
bool CreateInitialState();
bool AddEnemy(Enemy& enemy);
bool AddBullet(Location& location, Direction& direction, int speed);
void Render();
};
class EnemyManager {
public:
EnemyManager(Game& game);
void MoveEnemies();
};
In this first version, all types see each other as proper classes and manipulate things by calling the appropriate method. This offers little support for expanding the game if you want to add new things to it.
This is where interfaces become handy and you can try to think about how the different parts will interact instead of how they should be implemented.
class Canvas {
public:
// Different graphical primitives.
};
class GameObject {
public:
virtual ~GameObject() {};
virtual void Draw(Canvas& canvas) = 0;
virtual bool Move(Direction& direction) = 0;
};
class GlobalState {
public:
virtual void AddGameObject(GameObject& gameObject) = 0;
};
class Game : public Canvas, public GlobalState {
public:
bool CreateInitialState();
void Render() {
// Send itself to the Draw method in all GameObjects that have been added
}
// Other game logic
};
class Enemy : public GameObject {
// This can be specialized even more if you need to
};
class Bullet : public GameObject {
// This can also be specialized even more if you need to
};
This separates design from implementation and, as I see it, is a good way to end up with a proper first attempt.
It is hard to say without knowing the overall architecture layout, but my 2 cents:
The first way you describe is called dependency injection and is widely used. You should keep an eye on which methods/fields you're making public.
I assume the Game class has methods that should not be accessible from the EnemyManager class, so it seems like a good idea to create an interface that declares only the methods used by EnemyManager and then pass a pointer to that interface to the EnemyManager instance (instead of the Game).
For example: the Game class implements IGameEnemyManager, and you pass this as the IGameEnemyManager argument at initialization.
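A minimal sketch of that idea; the single method exposed here (AddEnemy, taken from the earlier example) is just a placeholder for whatever EnemyManager actually needs:
class Enemy; // defined elsewhere
// The narrow view of Game that EnemyManager is allowed to see.
class IGameEnemyManager {
public:
    virtual ~IGameEnemyManager() {}
    virtual bool AddEnemy(Enemy& enemy) = 0;
};
class Game : public IGameEnemyManager {
public:
    bool AddEnemy(Enemy& enemy) override { /* ... */ return true; }
    // The rest of Game stays invisible to EnemyManager.
};
class EnemyManager {
public:
    explicit EnemyManager(IGameEnemyManager& game) : game_(game) {}
private:
    IGameEnemyManager& game_;
};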
If you are handling game objects in EnemyManager, why is it part of the class Game? I suggest reviewing your design, as there is a risk of circular-reference problems if you don't handle these scenarios well.
Consider segregating the two classes to respect the single responsibility principle.
Define a proper interface in your EnemyManager, take the Game object as an argument, and act on its functions.
These are small suggestions I can think of with only a limited idea of your design.
You absolutely need to use the first approach, but with a few changes: you should break your Game class into more components. For example, you can create a SceneManager class, which is responsible for all game object creation/management. When you instantiate the EnemyManager, just pass a pointer to it:
// in SceneManager
EnemyManager* emgr = new EnemyManager(this);
InterfaceManager* imgr = new InterfaceManager(this);
Note that your SceneManager class should provide a complete interface:
// in EnemyManager
GameObject* spawnEnemyAt(string name, EnemyClass* eclass, Vector3 position, AIBehaviour* behaviour)
{
GameObject* myEnemy = smgr->createGameObject(name, position, etc...);
//register the enemy in the enemies list, initialize its behaviour and do any other logic
return myEnemy;
}
This approach should help you avoid ruining your architecture and keep you out of the friend(class)-zone.
[Upd.] Note that my approach assumes that all objects in the scene are GameObjects; there are no separate Enemy or Player classes. Every GameObject may have Renderer, Animator, and AIBehaviour components.

Composition pattern

How should one approach composition instead of inheritance? Consider the following classes:
class GameObject {...};
class Sprite {
public:
void changeImage(...);
};
class VisibleGameObject: public Sprite, public GameObject {};
class VisibleGameObject : public GameObject {
protected:
Sprite m_sprite;
};
The first VisibleGameObject class uses inheritance. Multiple inheritance. That does not look good. The second one is what I would like to use, but it won't allow me to access Sprite's API like this:
VisibleGameObject man;
man.changeImage();
How can that be accomplished without inheritance (or code duplication)?
EDIT:
I know I could just use inheritance or make m_sprite a public member, and I know I can't access the Sprite API because m_sprite is private. That's the point: the question is about the best way to change a VisibleGameObject's Sprite while following the rules of data encapsulation.
I think you are still one step behind the "composition over inheritance" mindset. The containing class should know what it composes. To change the image, you should change the sprite instance; you shouldn't re-expose the interface of the composed instances. For example:
class GameObject {
public:
// you can choose public access or getters and setters, it's your choice
Sprite sprite;
PhysicalBody body;
};
GameObject object;
object.sprite = graphicalSystem.getSpriteFromImage("image.png");
// or if you prefer setters and getters
object.setSprite(sprite);
More generally, GameObject should contain instances (or pointers to instances, depending on your implementation) of a base class Component. It makes sense to use inheritance in this case, because that way they can live in one container such as a std::map. For example:
class Component {
// ...
};
class Sprite : public Component {
//...
};
class PhysicalBody : public Component {
//...
};
class GameObject {
protected:
std::map<std::string, Component*> components;
//...
public:
Component* getComponent(const std::string& name) const;
void setComponent(const std::string& name, Component* component);
//...
};
For component creation and rendering in the main loop, use systems. For example, GraphicalSystem knows all the Sprite instances it has created, and while rendering it renders only the sprites attached to some GameObject instance. A detached component can be garbage collected. Information about position and size might be part of the GameObject, or it might be a "physical" component.
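A rough usage sketch with the map-based GameObject above; it assumes Component has a virtual destructor (so the dynamic_cast works) and leaves ownership handling out:
GameObject player;
player.setComponent("sprite", new Sprite());
player.setComponent("body", new PhysicalBody());
// A system later pulls out the component it cares about and downcasts it
// to the concrete type it knows how to handle.
if (Sprite* sprite = dynamic_cast<Sprite*>(player.getComponent("sprite"))) {
    // render the sprite ...
}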
The best way to understand it is to write your own prototype or to check existing implementations (Artemis, Unity 3D and many others). For more information see Cowboy programming: Evolve Your Hierarchy or try to find Entity/component system.
First of all, the alternative to composition is private inheritance (not public inheritance), since both model a has-a relationship.
The important question is: how can we expose Sprite's public members (e.g. changeImage) to VisibleGameObject clients? I present the four methods that I know:
(Private) inheritance
I understand that you want to avoid (multiple) inheritance, but for the sake of completeness, I present one suggestion based on private inheritance:
class VisibleGameObject: private Sprite, public GameObject {
...
};
In this case VisibleGameObject privately derives from Sprite. Users of the former cannot access any member of the latter (as if it were a private member). In particular, Sprite's public and protected members are hidden from VisibleGameObject clients.
Had the inheritance been public, all of Sprite's public and protected members would be exposed by VisibleGameObject to its clients. With private inheritance we have finer control over which methods are exposed, through using declarations. For instance, this exposes Sprite::changeImage:
class VisibleGameObject1: private Sprite, public GameObject {
public:
using Sprite::changeImage;
...
};
Forwarding methods
We can give VisibleGameObject public methods that forward the call to m_sprite, as shown below.
class VisibleGameObject2: public GameObject {
public:
void changeImage() {
m_sprite.changeImage();
}
private:
Sprite m_sprite;
...
};
I believe this is the best design, especially as far as encapsulation is concerned. However, it might require a lot of typing compared to the other alternatives.
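With the forwarding method in place, the call from the question works exactly as originally desired:
VisibleGameObject2 man;
man.changeImage(); // forwards internally to m_sprite.changeImage()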
Structure dereference operator
Even plain old C provides a type that exposes another type's interface as if it were its own: the pointer.
Indeed, suppose that p is of type Sprite*. Then by using the structure dereference operator -> we can access members of the Sprite pointed to by p, as shown below.
p->changeImage();
C++ allows us to endow classes with customised struct dereference operators (a feature well used by smart pointers). Our example becomes:
class VisibleGameObject3 : public GameObject {
public:
Sprite* operator ->() {
return &m_sprite;
}
private:
Sprite m_sprite;
...
};
and
VisibleGameObject v;
v->changeImage();
Although convenient, this method has many flaws:
As with public inheritance, this approach doesn't give fine control over which of Sprite's public members are exposed.
It works only for one member (that is, you cannot use the same trick to expose two members' interfaces).
It messes up the interface. Consider, for instance, that VisibleGameObject has a method doSomething(). To call this method on an object v one writes v.doSomething(), whereas to call changeImage() one writes v->changeImage(). This is confusing.
It makes VisibleGameObject look like a smart pointer. This is semantically wrong!
C++11 Wrapper Pattern
Finally, there's Sutter's C++11 Wrapper Pattern (watch his presentation, specifically the second slide of page 9):
class VisibleGameObject4 : public GameObject {
private:
Sprite m_sprite;
public:
template <typename F>
auto operator()(F f) -> decltype(f(m_sprite)) {
return f(m_sprite);
}
};
Clients use it this way:
VisibleGameObject4 v4;
v4( [](Sprite& s) { return s.changeImage(); } );
As we can see, compared to the forwarding-methods approach, this transfers the burden of typing from the class writer to the class clients.
It looks like you are trying to directly access Sprite's function without referencing it first. Try this:
man.m_sprite.changeImage();
Note that m_sprite and changeImage() should be public for you to do this. Otherwise use a public accessor function to manipulate private class members.

Choosing a design pattern for a class that might change its internal attributes

I have a class that holds arbitrary state and it's defined like this:
class AbstractFoo
{
};
template <class StatePolicy>
class Foo : public StatePolicy, public AbstractFoo
{
};
The state policy contains only protected attributes that represent the state.
The state might be the same for multiple behaviors and they can be replaced at runtime.
All Foo objects have the same interface to abstract the state itself and to enable storing Foo objects in containers.
I would like to find the least verbose and the most maintainable way to express this.
EDIT:
Here's some more info on my problem:
Foo is a class that represents the state and behavior of a certain piece of hardware that can be changed either physically or through a UI (and there are multiple UIs).
I have four more questions:
1) Would a signal/slot mechanism do?
2) Is it possible to bind every emitted signal from a slot in Foo to have a pointer to Foo like it's a member class?
3) Should I use a visitor instead and treat Foo as a visited class?
4) Why is the StatePolicy a bad design?
Here's the updated API:
class AbstractFoo
{
public:
virtual void /*or boost::signal*/ notify() = 0; // Updates the UI.
virtual void /*or boost::signal*/ updateState() = 0; // Updates the state
};
I don't understand your situation exactly, but here's my shot at it: what if you make an AbstractStatePolicy instead? Example:
class AbstractStatePolicy
{
};
class Foo
{
AbstractStatePolicy *state_policy;
public:
Foo(AbstractStatePolicy *state_policy)
: state_policy(state_policy)
{
}
};
This way, instead of statically defining Foo as a template using a StatePolicy, you can dynamically set the StatePolicy using an approach like this.
If you don't like the idea of having to specify the state_policy every time you create a Foo, consider using a default value or writing a factory to instantiate Foos.
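For instance, a small factory with a default policy could hide that choice from most callers. A sketch; DefaultStatePolicy and make_foo are made-up names:
class DefaultStatePolicy : public AbstractStatePolicy {
    // some reasonable default state
};
// Callers that don't care which policy is used get the default one.
Foo make_foo(AbstractStatePolicy* state_policy = 0) {
    static DefaultStatePolicy default_policy;
    return Foo(state_policy ? state_policy : &default_policy);
}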
I don't think what you have is a very sensible approach. You should have a pure virtual base class that describes what your implementations can actually do, and then you can create concrete classes that inherit from the base class using whatever state you need. You would then interact with the state through whatever interface you defined for that base class. Now, if you have arbitrary, dynamic attributes that can change at runtime, a good way to accomplish that is with a map or dictionary type: you can either map from strings (names of attributes) to strings (representing the attribute values), or, if you want a little more type safety, map from strings (names of attributes) to instances of boost::any.
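A sketch of that attribute-map idea using boost::any, as suggested (std::any works the same way if C++17 is available):
#include <map>
#include <string>
#include <boost/any.hpp>

class Foo {
public:
    void set_attribute(const std::string& name, const boost::any& value) {
        attributes_[name] = value;
    }
    template <typename T>
    T get_attribute(const std::string& name) const {
        // Throws boost::bad_any_cast if the stored type doesn't match T,
        // and std::out_of_range if the attribute doesn't exist.
        return boost::any_cast<T>(attributes_.at(name));
    }
private:
    std::map<std::string, boost::any> attributes_;
};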

Modeling "optional" inheritance

I'm having trouble deciding on a way to model this type of relationship...
All bosses can do certain things and have certain attributes (velocities, health, etc.), so these are part of the "main" abstract Boss class.
class Boss // An abstract base class
{
//Stuff that all Bosses can do/have and pure virtual functions
};
Now I want to specify a few more pure virtual functions and members for bosses that can shoot, and I'm wondering how I should model this. I've considered deriving a ShootingBoss class from the Boss class, but specific bosses are classes in themselves (with Boss just being an abstract base class that they are derived from). Thus if ShootingBoss is derived from Boss, and a specific boss derives from ShootingBoss, that boss won't be able to access the protected data in the Boss class.
Boss(ABC) -> ShootingBoss(ABC) -> SomeSpecificBoss(can't access protected data from Boss?)
Basically, I'm wondering what the recommended way to model this is. Any help is appreciated; if more information is needed, I'd be happy to provide it.
I think you need to look into Mixin classes.
For example, you could create the following classes:
class Boss {
// Here you will include all (pure virtual) methods which are common
// to all bosses, and all bosses MUST implement.
};
class Shooter {
// This is a mixin class which defines shooting capabilities
// Here you will include all (pure virtual) methods which are common
// to all shooters, and all shooters MUST implement.
};
class ShootingBoss : public Boss, public Shooter
{
// A boss who shoots!
// This is where you implement the correct behaviour.
};
Mixins require multiple inheritance to be used, and there are many pitfalls and complexities to doing so. I suggest you look at answers to questions like this one to ensure that you avoid these pitfalls.
Why not start using interfaces? Rather than one uber base class, you spread your functionality out into capabilities.
struct IBoss : public IObject
{
};
struct ICanShoot : public IObject
{
};
Generally, to implement this you derive your interfaces from a common base interface that allows you to query for another interface.
struct IObject
{
    virtual int getId() = 0; // returns a unique ID for this interface.
    virtual int addRef() = 0;
    virtual int release() = 0;
    virtual bool queryInterface(int id, void** pp) = 0;
};
That way, you implement your Boss more easily:
class Boss : public IBoss, public ICanShoot
{
};
It might be overkill for some, but if your class hierarchy is all screwed up, this is the best way out of the mess.
Have a look at M$'s IUnknown interface.
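A rough sketch of how Boss might implement queryInterface; the interface IDs are made-up constants, and a real COM-style system would use GUIDs and reference counting (getId/addRef/release are omitted here):
enum { ID_IBOSS = 1, ID_ICANSHOOT = 2 }; // hypothetical interface IDs
class Boss : public IBoss, public ICanShoot
{
public:
    bool queryInterface(int id, void** pp)
    {
        if (id == ID_IBOSS)     { *pp = static_cast<IBoss*>(this);     return true; }
        if (id == ID_ICANSHOOT) { *pp = static_cast<ICanShoot*>(this); return true; }
        *pp = 0;
        return false;
    }
    // getId/addRef/release omitted for brevity
};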
There are two different ways of doing this:
1) Mixin classes (already explained)
2) Role playing classes.
Role playing has its advantages and disadvantages. Roles that an object can play (boss, shooter, whatever) are implemented using containment. They must be derived from a common base interface class, which will have to be downcast dynamically (argh..). The caller asks an object of your class for a role pointer (this is where the downcast comes in), and if the object can play the role (it returned a non-NULL pointer) the client calls the appropriate function of that role.
The main advantage of the role-playing approach (apart from avoiding multiple inheritance) is that it is dynamic. An object can accept new roles at runtime, as opposed to a mixin, which has to be defined at compile time.
It is also scalable. In the multiple inheritance (mixin) approach, if you decide to expand your hierarchy with "Protector" and say that a boss can be a simple Boss, ShootingBoss, ProtectingBoss, or ShootingProtectingBoss, and later expand it further with Coward (Boss, ShootingBoss, ProtectingBoss, ShootingProtectingBoss, CowardBoss, CowardShootingBoss, CowardProtectingBoss, CowardShootingProtectingBoss), you can see that your hierarchy explodes. That is when you need to switch to the role-playing model, where an object simply has to accept the new role Coward. But until you are sure that you need it, stick with mixin classes.
Below is a hierarchy sketch for the role-playing case:
class IRole
{
// some very basic common interface here
public:
virtual ~IRole() {}
};
class IBoss : public IRole
{
};
class IShooter : public IRole
{
};
class IProtector : public IRole
{
};
class MultifunctionalPerson
{
public:
bool AcceptRole(IRole* pRole); // pass some concrete role here
IRole* GetRole(int roleID);
};
// your client will use it like this
MultifunctionalPerson& clPerson = ... (coming from somewhere);
// dynamic_cast below may be replaced with static_cast if you are sure
// that role at PROTECTOR_ROLE location is guaranteed to be of IProtector type or NULL
IProtector* pProtector = dynamic_cast<IProtector*>(clPerson.GetRole(PROTECTOR_ROLE));
if( 0 != pProtector )
{
pProtector->DoSomething();
}
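For completeness, a minimal sketch of how MultifunctionalPerson might store its roles. The role ID is passed in by the caller here, whereas the original AcceptRole(IRole*) would instead obtain it from the role itself; ownership and error handling are omitted:
#include <map>

class MultifunctionalPerson
{
public:
    bool AcceptRole(int roleID, IRole* pRole)
    {
        roles_[roleID] = pRole; // overwrites any previous role with the same ID
        return true;
    }
    IRole* GetRole(int roleID)
    {
        std::map<int, IRole*>::iterator it = roles_.find(roleID);
        return it != roles_.end() ? it->second : 0;
    }
private:
    std::map<int, IRole*> roles_;
};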