I was wondering:
With a tree, the root can have multiple children and no id. All nodes except the root have an id, and the leaf nodes cannot have children. The type used at each depth is fixed, so the leaves are all of the same type, and so are the parents of the leaves.
Since the root and the inner nodes can have children, and only the non-root nodes have an id, I was wondering whether the following use of multiple inheritance is acceptable:
#include &lt;string&gt;
#include &lt;vector&gt;

class NodeWithId
{
private:
    std::string m_id;
};

template&lt;typename T&gt;
class NodeWithChildren
{
private:
    std::vector&lt;T&gt; m_nodes;
};

// declared leaf-first so each type is complete before it is used
class Application: public NodeWithId
{
};

class Machine: public NodeWithChildren&lt;Application&gt;,
               public NodeWithId
{
};

class Subnet: public NodeWithChildren&lt;Machine&gt;,
              public NodeWithId
{
};

class Network: public NodeWithChildren&lt;Subnet&gt;
{
};
Or is there a better way to implement this?
edit:
removed virtual
changed class names
Or is there a better way to implement this?
IMHO, your design creates classes for things that are better treated as object instances. At the class level I do not see the need to differentiate between Level1 nodes and Level2 nodes.
Use a design that is simple. Ask yourself whether this design has any real benefit over the naive approach of having a single Node class and building a tree structure out of Node instances created at runtime.
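To make that comparison concrete, here is a minimal sketch of the naive single-class approach (the names are illustrative, not from the question):

```cpp
#include <memory>
#include <string>
#include <vector>

// One Node type for every level: the depth (network, subnet, machine,
// application) is a property of the instance, not of its class.
struct Node
{
    std::string id; // empty for the root
    std::vector<std::unique_ptr<Node>> children;

    Node* addChild(std::string childId)
    {
        children.push_back(std::make_unique<Node>());
        children.back()->id = std::move(childId);
        return children.back().get();
    }
};
```

Whether id is empty for the root and non-empty elsewhere then becomes a runtime invariant rather than a compile-time guarantee, which is the trade-off against the class-per-level design.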
You could do it with single inheritance:
#include &lt;string&gt;
#include &lt;vector&gt;

class NodeWithId
{
private:
    std::string m_id;
};

template&lt;typename T&gt;
class NodeWithChildren : public NodeWithId
{
private:
    std::vector&lt;T&gt; m_nodes;
};

// declared leaf-first so each type is complete before it is used
class LeafNode: public NodeWithId
{
};

class Level2Node: public NodeWithChildren&lt;LeafNode&gt;
{
};

class Level1Node: public NodeWithChildren&lt;Level2Node&gt;
{
};

class Root: public NodeWithChildren&lt;Level1Node&gt;
{
};
You would only need multiple inheritance in the case where you can have a NodeWithChildren that DOESN'T have an ID. In your design above, every NodeWithChildren is also a NodeWithId, so you may as well derive NodeWithChildren from NodeWithId and bypass any potential multiple inheritance problems entirely.
Seems like a "better" design to me ...
First of all, there is no need for virtual inheritance: based on the sample code you posted, there is no 'dreaded diamond'.
But I don't really get your design at all. There's no common base class for anything representing your tree, so why are you using inheritance at all? It looks like everything could be achieved using composition.
Is this simply a slimmed down version of your hierarchy made for this question?
Your performance will be unbearable, and your code unbelievably convoluted.
First, you use a template which has a vector in it for each node.
The vector alone will slow things down dramatically. Having to cast things makes your traversal code very slow, and the code itself hard for anyone else to comprehend.
Also, since you have different classes, how is the vector able to deal with that? Answer: it can't. It has to be a vector of pointers to a base class, and then you have to figure out the proper type at runtime to get any use out of them.
If this added some benefit it might be worth it for some uses, but it's really the opposite of what you want from a structure like a tree, which should be as simple as possible to use and comprehend, with as few memory allocations as possible and, ideally, good performance.
Related
Currently I'm trying to understand the "evilness" of MI. I've just watched a video on YouTube where a JS guy argues against inheritance. Here is his example (I've rewritten it in C++):
struct Robot
{ void drive(); };
struct MurderRobot : public Robot
{ void kill(); };
struct CleanerRobot : public Robot
{ void clean(); };
struct Animal
{ void poop(); };
struct Dog : public Animal
{ void bark(); };
struct Cat : public Animal
{ void meow(); };
Then he suggested a new class MurderRobotDog which, in his view, can't be done gracefully by means of inheritance. Certainly it can't be done by means of single inheritance, but I don't see any problem doing it with MI.
I think we could create a base class BarkingObject, which would have all the barking stuff. Then the Dog inherits from the Animal, which has the common poop(), and from the BarkingObject. And when you need a killing dog-robot, it must inherit from the BarkingObject and the MurderRobot. That makes more sense: the MurderRobotDog can't inherit from a live creature, because then it would become alive, which contradicts the definition of a robot. Of course, for that you have to use multiple inheritance, which many people consider EVIL. That is unfortunate, as it seems we can't efficiently reuse unrelated pieces of functionality (you don't need poop() in order to bark(), and the robot case confirms this) without it.
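A rough sketch of the split suggested above, with BarkingObject as a standalone capability (this is just one way to arrange it, and the method bodies are stubs):

```cpp
#include <string>

// A reusable capability: anything that barks.
struct BarkingObject
{
    std::string bark() { return "woof"; }
};

struct Robot
{
    void drive() {}
};

struct MurderRobot : public Robot
{
    bool kill() { return true; }
};

// A barking killer robot: it reuses barking without inheriting from a
// live creature, so it never acquires poop().
struct MurderRobotDog : public BarkingObject, public MurderRobot
{
};
```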
What are your arguments against my suggestion?
A multiple inheritance implementation is an old-fashioned way of solving these sorts of problems.
Composition is the new way.
You define interfaces which describe a particular behaviour or set of behaviours:
struct Murderer
{
    virtual ~Murderer() = default;
    virtual void kill() = 0;
};
struct Pooper
{
    virtual ~Pooper() = default;
    virtual void poop() = 0;
};
Actual things, like a cat, dog, or robot, inherit (i.e. implement) these interfaces accordingly. You use a dynamic_cast or a similar runtime technique to query an object for an interface before taking the appropriate action.
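For illustration, a minimal sketch of such an interface query, assuming a hypothetical Barker interface alongside Murderer:

```cpp
// Interfaces: pure virtual methods plus a virtual destructor.
struct Murderer { virtual ~Murderer() = default; virtual void kill() = 0; };
struct Barker   { virtual ~Barker()   = default; virtual void bark() = 0; };

// A concrete thing implements whichever interfaces apply.
struct MurderRobotDog : public Murderer, public Barker
{
    void kill() override {}
    void bark() override {}
};

// Query an object for a capability before acting on it.
bool tryKill(Barker& b)
{
    if (auto* m = dynamic_cast<Murderer*>(&b)) // cross-cast between interfaces
    {
        m->kill();
        return true;
    }
    return false;
}
```

A plain Barker that is not also a Murderer would simply make tryKill return false.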
Recently I've learnt about the composite pattern. I want to use it in an assignment in which I have to implement File and Folder classes. I realize that sub-classes like CFile and CFolder need the same attributes (name and size). So is it alright for me to put the attributes into the interface? As far as I know it is not good practice to do so, but I don't understand why. Or is there another solution?
I would say it's not a problem. The difference is that instead of a pure interface class you have an abstract base class. However, if you want to retain the flexibility to use the interface for implementations that are not tied to those specific member variables, you can always create an interface class as well as an abstract base class. That may be getting overly complex too soon, though; you can always split the interface from the abstract base later if you need to.
using CItemUPtr = std::unique_ptr<class CItem>;
/**
* Interface class
*/
class CItem
{
public:
virtual ~CItem() {}
virtual CItemUPtr findByName(std::string const& name) = 0;
virtual void setHidden(bool a, bool b) = 0;
};
/**
* Abstract base class
*/
class AbstractCItem
: public CItem
{
protected:
std::string name;
std::size_t size;
};
class CFile
: public AbstractCItem
{
public:
CItemUPtr findByName(std::string const& name) override
{
// stuff
return {};
}
void setHidden(bool a, bool b) override {}
};
It's not really a question of "is it a good practice". By creating an interface, you're defining a standard. The question is, do you NEED the implementation of the interface to contain those data members? You are in the best position to understand your implementation, so you're really the only one who can answer this.
As a general rule, the class implementing the interface should be a black box, and the outside world shouldn't have access to any internals (including member data). Interfaces define common functionality that is required to be present to be able to support the interface, and I'd expect those implementation details to be buried in the underlying implementation of the class only, as a general rule. YMMV.
The design principle for a class should be: 'It should be impossible to break the class invariant from the outside.'
If the constructor(s) establish the class invariant and all members uphold it, this is achieved.
However, if the class has no class invariant, having public members achieves the same thing.
// in C++, this is a perfectly fine, first order class
struct Pos
{
int x,y;
Pos& operator+=(const Pos&);
};
also see https://en.wikipedia.org/wiki/Class_invariant
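For contrast, a made-up example of a class that does have an invariant (denominator != 0), so its data must stay private and every member must uphold it:

```cpp
#include <stdexcept>

class Fraction
{
public:
    // The constructor establishes the invariant: m_den is never zero.
    Fraction(int num, int den) : m_num(num), m_den(den)
    {
        if (m_den == 0)
            throw std::invalid_argument("zero denominator");
    }
    int num() const { return m_num; }
    int den() const { return m_den; }

private:
    int m_num, m_den; // public access could break the invariant
};
```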
I have a somewhat complicated inheritance structure which is mainly there to avoid code duplication and to provide a common interface for various classes. It relies on virtual and non-virtual inheritance and looks more or less like this:
class AbstractItem
{
//bunch of abstract methods
};
class AbstractNode : virtual public AbstractItem
{
//some more virtual abstract methods
};
class AbstractEdge : virtual public AbstractItem
{
//yet some different virtual abstract methods
};
and then some "real" classes like this
class Item : virtual public AbstractItem
{
//implements AbstractItem
};
class Node : public Item, public AbstractNode
{
//implements AbstractNode
};
class Edge : public Item, public AbstractEdge
{
//implements AbstractEdge
};
and this is packed into a graph model class so that:
class AbstractGraph
{
virtual QList<AbstractNode*> nodes() const = 0;
virtual QList<AbstractEdge*> edges() const = 0;
};
class GraphModel : public AbstractGraph
{
public:
virtual QList<AbstractNode*> nodes() const override; //this converts m_Nodes to a list of AbstractNode*
virtual QList<AbstractEdge*> edges() const override; //dtto
private:
QList<Node*> m_Nodes;
QList<Edge*> m_Edge;
};
The reason for this convoluted structure is that there are different classes implementing AbstractGraph, such as sorting and filtering models, and these come in different variants: some store their data just as the model shown does and have their own sets of classes derived from AbstractItem/Node/Edge; others are dynamic and rely on the data of an underlying graph/model without data of their own. Example:
class FilterNode : public AbstractNode
{
//access the data in the m_Item via AbstractItem interface and implements differently AbstractNode interface
private:
AbstractItem *m_Item = nullptr; //this variable holds pointer to some real item with actual data such as the one from GraphModel
};
class GraphFilter : public AbstractGraph
{
//implements the interface differently to the GraphModel
private:
QList<FilterNode*> m_Nodes;
AbstractGraph *m_Source = nullptr; //source graph...
};
I have second thoughts about this because it relies on (virtual) inheritance, abstract methods called through the base, etc. Is the overhead from all this significant?
The alternative would be either:
a) Copy-paste lots of code to avoid virtual methods and most of the inheritance, but that would be a code-maintenance nightmare. Plus no common interfaces...
b) Template it all out somehow... I am unsure whether that is even possible. I already use templates in a few places to avoid code duplication.
So does it seem reasonable, or like overkill? I might add that in some cases I will call the methods directly (inside the models), bypassing the virtual calls, but from the outside they will pretty much always be called via the abstract base.
Trying to implement generic graph algorithms using dynamic polymorphism in C++ makes things:
Unnecessarily hard.
Unnecessarily slow.
The virtual function overhead stands out more the simpler the functions are. In the quoted interface you also return containers from various functions. Even if these are COW containers, there is some work involved, and accessing the sequence casually may easily unshare (i.e., copy) the representation.
In the somewhat distant past (roughly 1990 to 1996) I experimented with a dynamic-polymorphism-based generic implementation of graph algorithms and struggled with various problems to make it work. When I first read about the STL, it turned out that most of the problems could be addressed via a similar abstraction (although one key idea was still missing: property maps; see the reference to the BGL below for details).
I found it preferable to implement graph algorithms in terms of an STL-like abstraction. The algorithms are function templates implemented in terms of specific concepts, which are somewhat like base classes except for two key differences:
There are no virtual function calls involved in the abstraction, and functions can normally be inlined.
The types returned from functions only need to model an appropriate concept rather than having to be compatible, via some form of inheritance, with some specific interface.
Admittedly I'm biased, because I wrote my diploma thesis on this topic. For an [independently developed] application of this approach, have a look at the Boost Graph Library (BGL).
For some performance measurements comparing different function call approaches, have a look at the function call benchmarks. They are modelled after the performance measurements for function calls from the Performance TR.
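A tiny illustration of the STL-like style described above (the names degree and AdjListGraph are made up): the algorithm is a function template, any type providing the required operations models the concept, and no virtual call or common base class is involved:

```cpp
#include <cstddef>
#include <vector>

// Generic "algorithm": works with any Graph type that offers
// neighbors(node) returning a sized range. Calls can be inlined.
template <typename Graph>
std::size_t degree(const Graph& g, std::size_t node)
{
    return g.neighbors(node).size();
}

// One concrete model of the concept: a plain adjacency-list graph.
struct AdjListGraph
{
    std::vector<std::vector<std::size_t>> adj;

    const std::vector<std::size_t>& neighbors(std::size_t n) const
    {
        return adj[n];
    }
};
```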
Suppose I have following inheritance tree:
SDLBullet inherits from Bullet inherits from Entity
EnemyBullet inherits from Bullet inherits from Entity
Now I need a new class, SDLEnemyBullet, which needs the draw as implemented in SDLBullet and the collision as implemented in EnemyBullet. How would I do this? Is this to be solved using multiple inheritance? If not, feel free to edit my question and title. If so, how would I implement such a thing?
Some code examples below:
class Entity {
public:
    virtual ~Entity() = default;
    virtual bool collision(Entity&amp;) = 0;
    virtual void draw() = 0;
};
class Bullet : public Entity {
public:
    bool collision(Entity&amp;) override { /*some implementation*/ return true; }
    void draw() override { /*draw me*/ }
};
class SDLBullet : public Bullet {
public:
    void draw() override { /*draw me using SDL*/ }
};
class EnemyBullet : public Bullet {
public:
    bool collision(Entity&amp;) override { /*if Entity is a fellow enemy, don't collide*/ return false; }
};
class SDLEnemyBullet : ????? {
    /*I need SDLBullet::draw() here*/
    /*I need EnemyBullet::collision(Entity) here*/
    /*I certainly do not want EnemyBullet::draw nor SDLBullet::collision here*/
};
Any help is much appreciated!
(BTW: This is a school project, and an inheritance tree like this was suggested to us. No one is stopping us from doing it differently and better. That's why I asked the question.)
The textbook solution involves multiple and virtual inheritance.
class SDLBullet : public virtual Bullet {
    void draw() override { /*draw me using SDL*/ }
};
class EnemyBullet : public virtual Bullet {
    bool collision(Entity&amp;) override { /*if Entity is a fellow enemy, don't collide*/ return false; }
};
class SDLEnemyBullet : public SDLBullet, public EnemyBullet {
// just one Bullet subobject here
};
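A stripped-down sketch of why virtual matters here: with virtual inheritance, SDLEnemyBullet contains exactly one Bullet subobject, so the upcast is unambiguous and both inheritance paths reach the same data (the damage member is made up for the demonstration; without virtual, the upcast would not even compile):

```cpp
struct Bullet
{
    int damage = 1;
    virtual ~Bullet() = default;
};

struct SDLBullet : public virtual Bullet { };
struct EnemyBullet : public virtual Bullet { };

// One shared Bullet subobject thanks to virtual inheritance.
struct SDLEnemyBullet : public SDLBullet, public EnemyBullet { };
```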
Normally, collision handling is done using multiple dispatch, or, in C++, which lacks this feature, using the visitor pattern.
BUT
why don't you have a hierarchy like this instead?
class EnemyBullet;
class FriendlyBullet;

class Entity { /* as in the question */ };

class Bullet : public Entity
{
public:
    virtual void draw();
};

class FriendlyBullet : public Bullet
{
public:
    bool collide(EnemyBullet*);
    bool collide(FriendlyBullet*);
};

class EnemyBullet : public Bullet
{
public:
    bool collide(EnemyBullet*);
    bool collide(FriendlyBullet*);
};
This would work too, and wouldn't require multiple dispatch or multiple inheritance.
You need to specify a comma-separated list of the super classes:
class SDLEnemyBullet : public SDLBullet, public EnemyBullet {
/*I need SDLBullet::draw() here*/
/*I need EnemyBullet::collision(Entity) here*/
/*I certainly do not want EnemyBullet::draw nor SDLBullet::collision here*/
};
It looks like you're making a game (engine). To avoid the need for complex inheritance structures like this, favor composition over inheritance for entities, i.e. have an entity object that contains separate 'component' objects for rendering etc. That way you can mix and match the components however you like without an explosion of classes covering all the different combinations of super classes.
Here's a good article on the subject: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/
Prefer composition over inheritance
You don't need inheritance to combine unrelated things like that. Make basic objects (entities?) for game logic, physics, sound, input, graphics (which may use inheritance internally) and combine those into a GameObject which just has an array of said objects.
Some cross-linking is useful, since they will all share a Frame or Transform, but that can be done during creation by iterating over all the other objects and using dynamic_cast... (this is useful if you do not want to depend on initialization order).
But there's really no need to build this with inheritance; it doesn't fit your use case properly. Although virtual inheritance is useful, it's not a good idea to use inheritance to force different things to become the same, i.e. making everything be a something, instead of being made up of different parts (render, damage, sound, etc.).
Read this and this for more info, or just click the title to google for it. :)
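A minimal sketch of that component-based composition, using dynamic_cast to look up parts by type as suggested above (all names are made up):

```cpp
#include <memory>
#include <utility>
#include <vector>

// The base class exists only so components can live in one container
// and be found again via dynamic_cast.
struct Component
{
    virtual ~Component() = default;
};

struct RenderComponent : Component { void draw() {} };
struct PhysicsComponent : Component { void step() {} };

class GameObject
{
public:
    template <typename T, typename... Args>
    T* add(Args&&... args)
    {
        auto c = std::make_unique<T>(std::forward<Args>(args)...);
        T* raw = c.get();
        m_components.push_back(std::move(c));
        return raw;
    }

    // Find the first component of the requested type, or nullptr.
    template <typename T>
    T* get() const
    {
        for (const auto& c : m_components)
            if (auto* t = dynamic_cast<T*>(c.get()))
                return t;
        return nullptr;
    }

private:
    std::vector<std::unique_ptr<Component>> m_components;
};
```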
I'm having trouble deciding on a way to model this type of relationship...
All bosses can do certain things and have certain things (velocities, health, etc.) so these are part of the "main" abstract boss class.
class Boss // An abstract base class
{
//Stuff that all Bosses can do/have and pure virtual functions
};
Now I want to specify a few more pure virtual functions and members for bosses that can shoot, and I'm wondering how to model this. I've considered deriving a ShootingBoss class from the Boss class, but specific bosses are classes in themselves (with Boss just being an abstract base class they derive from). Thus if ShootingBoss is derived from Boss, and a specific boss derives from ShootingBoss, that boss won't be able to access the protected data in the Boss class.
Boss(ABC) -> ShootingBoss(ABC) -> SomeSpecificBoss(can't access protected data from Boss?)
Basically, I'm wondering what the recommended way to model this is. Any help is appreciated. If more information is needed, I'd be happy to offer.
I think you need to look into Mixin classes.
For example, you could create the following classes:
class Boss {
// Here you will include all (pure virtual) methods which are common
// to all bosses, and all bosses MUST implement.
};
class Shooter {
// This is a mixin class which defines shooting capabilities
// Here you will include all (pure virtual) methods which are common
// to all shooters, and all shooters MUST implement.
};
class ShootingBoss : public Boss, public Shooter
{
// A boss who shoots!
// This is where you implement the correct behaviour.
};
Mixins require multiple inheritance to be used, and there are many pitfalls and complexities to doing so. I suggest you look at answers to questions like this one to ensure that you avoid these pitfalls.
Why not start using interfaces? Rather than a single uber base class, you spread your things out into capabilities.
struct IBoss : public IObject
{
};
struct ICanShoot : public IObject
{
};
Generally to implement this you derive your interfaces from another interface which allows you to query for an interface.
struct IObject
{
    virtual ~IObject() = default;
    virtual int getId() = 0; // returns a unique ID for this interface.
    virtual int addRef() = 0;
    virtual int release() = 0;
    virtual bool queryInterface(int id, void** pp) = 0;
};
That way, you implement your Boss more easily:
class Boss : public IBoss, public ICanShoot
{
};
It might be overkill for some, but if your class hierarchy is all screwed up, this is the best way out of the mess.
Have a look at M$'s IUnknown interface.
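A simplified sketch of how queryInterface might be implemented (integer ids instead of GUIDs, reference counting omitted; the ids and names are made up):

```cpp
struct IObject
{
    virtual ~IObject() = default;
    virtual bool queryInterface(int id, void** pp) = 0;
};

// Made-up interface ids; real COM uses GUIDs.
constexpr int ID_BOSS = 1;
constexpr int ID_SHOOT = 2;

struct IBoss : public IObject { };
struct ICanShoot : public IObject { };

struct Boss : public IBoss, public ICanShoot
{
    // This single definition overrides queryInterface in both bases.
    bool queryInterface(int id, void** pp) override
    {
        if (id == ID_BOSS)  { *pp = static_cast<IBoss*>(this);     return true; }
        if (id == ID_SHOOT) { *pp = static_cast<ICanShoot*>(this); return true; }
        *pp = nullptr;
        return false;
    }
};
```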
There are two different ways of doing this:
1) Mixin classes (already explained)
2) Role playing classes.
Role playing has its advantages and disadvantages. Roles that an object can play (boss, shooter, whatever) are implemented using containment. They must be derived from a common base interface class, which will have to be downcast dynamically (argh..). The caller asks an object of your class for the role pointer (this is where the downcast comes in) and, if the object can play the role (a non-NULL pointer is returned), the client calls the appropriate function of the role.
The main advantage of the role-playing approach (apart from avoiding multiple inheritance) is that it is dynamic: an object can accept new roles at runtime, as opposed to mixins, which have to be defined at compile time.
It is also more scalable. In the multiple inheritance (mixin) approach, if you decide to expand your hierarchy with "Protector" and say that a boss can be a simple Boss, ShootingBoss, ProtectingBoss, or ShootingProtectingBoss, and later expand it further with Coward (Boss, ShootingBoss, ProtectingBoss, ShootingProtectingBoss, CowardBoss, CowardShootingBoss, CowardProtectingBoss, CowardShootingProtectingBoss), you see that your hierarchy explodes. This is when you need to switch to the role-playing model, where an object simply has to accept the new role Coward. But until you are sure that you need it, stick with mixin classes.
Below is a hierarchy sketch for the role-playing case:
class IRole
{
// some very basic common interface here
public:
virtual ~IRole() {}
};
class IBoss : public IRole
{
};
class IShooter : public IRole
{
};
class IProtector : public IRole
{
};
class MultifunctionalPerson
{
public:
bool AcceptRole(IRole* pRole); // pass some concrete role here
IRole* GetRole(int roleID);
};
// your client will be using it like this
MultifunctionalPerson& clPerson = ... (coming from somewhere);
// dynamic_cast below may be replaced with static_cast if you are sure
// that role at PROTECTOR_ROLE location is guaranteed to be of IProtector type or NULL
IProtector* pProtector = dynamic_cast<IProtector*>(clPerson.GetRole(PROTECTOR_ROLE));
if( 0 != pProtector )
{
pProtector->DoSomething();
}