I have a class Item. Each instance of this class is an object in 3D space and can be a basic shape such as a cylinder, sphere, or cone. The class Item has a convenient API for geometry (radius, top radius, bottom radius, length) and transformations (rotation, translation, scale).
enum ItemType {
    Sphere = 1,
    Cone
};

class Item
{
    // ...
public:
    ItemType type();
    void setType(const ItemType &t);
    float radius();
    float length();
    float topRadius();
    float botRadius();
    QMatrix4x4 transformations();
    void setRadius(const float &r);
    void setLength(const float &l);
    void setTopRadius(const float &tr);
    void setBotRadius(const float &br);
    void setTransformations(const QMatrix4x4 &matrix);
    // ...
};
Frequently, I want to glue multiple objects together to form a unified shape. For example, two spheres and a cone are connected below. The geometry and transformations of the unified object depend upon those of the two spheres and the cone.
The problem is:
Convenient handling of the unified object is not possible.
By handling I mean, for example, transforming: changing the length of the unified object requires changing the length of the middle cone and the locations of the two spheres accordingly.
The class Item has an API for conveniently handling each individual object, but not the unified one.
To handle the unified object, I have to work with three different objects, which is torturous.
The question is:
Which design patterns are best suited to conveniently handle the unified objects?
Note: This question is about object-oriented software design and software patterns; it has nothing to do specifically with C++. The only part that is C++-specific is the use of the virtual keyword, but even that is just the C++ keyword that gives you polymorphism, which is again an object-oriented principle, not something unique to C++.
So, what you first of all need to do is extract a true interface for what you call the "API". I would call this Primitive3D, and it would be a class containing nothing but pure virtual methods. (In C++, that would be virtual function(parameters) = 0.)
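A minimal sketch of what such an interface could look like, reusing the accessors and the Qt QMatrix4x4 type from your code (the exact set of methods and their signatures are up to you):

#include <QMatrix4x4>

class Primitive3D
{
public:
    virtual ~Primitive3D() {}
    virtual float length() const = 0;
    virtual QMatrix4x4 transformations() const = 0;
    virtual void setLength(float l) = 0;
    virtual void setTransformations(const QMatrix4x4 &matrix) = 0;
    // ... radius(), topRadius(), botRadius() and their setters in the same style
};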
Then, each one of your primitives would be implementing the interface by providing an implementation for each pure virtual method. If you have some basic functionality that all implementations will share, then in addition to implementing this interface you can also keep a common base class. However, the introduction of the interface will keep your options more open.
Then, introduce a new primitive, called perhaps Conglomerate. Again, that would be yet one more class implementing Primitive3D. That class would provide its own implementations for setting various attributes like length and transformation, and these implementations would work by setting some of the attributes of contained primitives.
The Conglomerate class would also provide a few functions which are specific to it and cannot be found in the Primitive3D interface. You would use these functions to configure the conglomerate, at the very least to populate the conglomerate with its contents.
The function which adds a new member primitive to the conglomerate may accept additional parameters to indicate precisely at which position of the conglomeration the new member should appear, what kind of transformation to perform when scaling the primitive, what kind of transformation to perform when translating the primitive, etc.
Internally, the conglomeration would probably make use of a vector containing instances of some internal member structure, which would contain a reference to a Primitive3D and any other information that is necessary so as to know how to handle that primitive. Do not make the mistake of adding this information to the Primitive3D itself, it does not belong there, because a primitive does not know, and should not know, that it is a member of a conglomeration. I would even go as far as to say that the location of a primitive is not a feature of the primitive itself; it is a feature of the space that contains the primitive, whether this space is the universe, or a conglomeration.
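A rough sketch of how such a Conglomerate could look, building on the Primitive3D sketch above (the member bookkeeping and the setLength() policy are placeholders, and ownership handling is omitted):

#include <vector>
#include <QMatrix4x4>

class Conglomerate : public Primitive3D
{
public:
    // Conglomerate-specific API: register a member together with the
    // information needed to place it inside the conglomerate.
    void addMember(Primitive3D *primitive, const QMatrix4x4 &localPlacement)
    {
        members.push_back(Member{primitive, localPlacement});
    }

    QMatrix4x4 transformations() const override { return transform; }

    void setTransformations(const QMatrix4x4 &matrix) override
    {
        transform = matrix;
        for (const Member &m : members)
            m.primitive->setTransformations(matrix * m.localPlacement);
    }

    float length() const override { return totalLength; }

    void setLength(float l) override
    {
        totalLength = l;
        // Policy decision: e.g. stretch the middle cone and move the end
        // spheres apart; how each member reacts is part of the information
        // recorded when it was added.
    }

private:
    struct Member
    {
        Primitive3D *primitive;     // the contained primitive
        QMatrix4x4 localPlacement;  // placement relative to the conglomerate
    };
    std::vector<Member> members;
    QMatrix4x4 transform;
    float totalLength = 0.0f;
};

Client code that only cares about "make this thing longer" then talks to the Primitive3D interface, whether the object behind it is a single cone or a whole conglomerate.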
Looking at your structure, Composite is the pattern you should consider. Also, identifying the concrete shape with a 'type' attribute is against object-oriented design: it kills polymorphism, a great tool unique to OO. The Composite pattern will allow you to address elements as well as their aggregations in one hierarchy.
Related
I've read this question about visitor patterns: https://softwareengineering.stackexchange.com/questions/132403/should-i-use-friend-classes-in-c-to-allow-access-to-hidden-members. In one of the answers I've read:
Visitor gives you the ability to add functionality to a class without actually touching the class itself.
But to the visited object we have to add a new interface, so we actually "touch" the class (or at least, in some cases, add setters and getters, which also changes the class).
How exactly will I add functionality with a visitor without changing the visited class?
The visitor pattern indeed assumes that each class interface is general enough that, if you knew the actual type of the object, you would be able to perform the operation from outside the class. If this is not the starting point, the visitor indeed might not apply.
(Note that this assumption is relatively weak - e.g., if each data member has a getter, then it is trivially achieved for any const operation.)
The focus of this pattern is different. If
this is the starting point, and
you need to support an increasing number of operations,
then the question is what changes you need to make to the classes' code in order to dispatch new operations applied to pointers (or references) to the base class.
To make this more concrete, take the classic visitor CAD example:
Consider the design of a 2D CAD system. At its core there are several types to represent basic geometric shapes like circles, lines and arcs. The entities are ordered into layers, and at the top of the type hierarchy is the drawing, which is simply a list of layers, plus some additional properties.
A fundamental operation on this type hierarchy is saving the drawing to the system's native file format. At first glance it may seem acceptable to add local save methods to all types in the hierarchy. But then we also want to be able to save drawings to other file formats, and adding more and more methods for saving into lots of different file formats soon clutters the relatively pure geometric data structure we started out with.
The starting point of the visitor pattern is that, say, a circle has sufficient getters for its specifics, e.g. its radius. If that's not the case, then, indeed, there's a problem (in fact, it's probably a badly designed CAD code base anyway).
Starting from this point, though, when considering new operations, e.g., writing to file type A, there are two approaches:
implement a virtual method like write_to_file_type_a for each class and each operation
implement a virtual method accept_visitor for each class only, only once
The "without actually touching the class itself" in your question means, in point 2 just above, that this is all that's now needed to dispatch future visitors to the correct classes. It doesn't mean that the visitor will start writing getters, for example.
Once a visitor interface has been written for one purpose, you can visit the class in different ways. The different visiting does not require touching the class again, assuming you are visiting the same components.
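To make point 2 concrete, here is a minimal sketch for the CAD example (all class and member names are invented for illustration):

class Circle;
class Line;

class ShapeVisitor
{
public:
    virtual ~ShapeVisitor() {}
    virtual void visit(const Circle &c) = 0;
    virtual void visit(const Line &l) = 0;
};

class Shape
{
public:
    virtual ~Shape() {}
    // The only addition the pattern requires from the shape hierarchy:
    virtual void accept(ShapeVisitor &v) const = 0;
};

class Circle : public Shape
{
public:
    double radius() const { return r; }                             // existing getter
    void accept(ShapeVisitor &v) const override { v.visit(*this); }
private:
    double r = 1.0;
};

class Line : public Shape
{
public:
    void accept(ShapeVisitor &v) const override { v.visit(*this); }
};

// A new operation, added without touching the shape classes again:
class SaveToFormatA : public ShapeVisitor
{
public:
    void visit(const Circle &c) override { /* write the circle using c.radius() */ }
    void visit(const Line &l) override   { /* write the line */ }
};

Saving a drawing then just iterates over Shape pointers and calls accept(saver); supporting format B later means writing one more visitor, not editing every shape class.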
Background Info
I am writing a graph-drawing program. I have encountered a problem with templates and inheritance, and I do not know how to proceed. I do not know how I should design my code to enable me to do what I am trying to do. (Explanation below.)
Target
I have a template class, which represents "data". It looks something like the following:
#include <vector>

template<typename T>
class GraphData
{
    std::vector<T> data_x;
    std::vector<T> data_y; // x and y should be held in separate vectors
};
This class is part of an inheritance hierarchy involving several classes.
The hierarchy looks something like this... (Sorry this is from my notes, awful diagram.)
Explanation
There is a base class. No real reason to have it right now, but I anticipate using it later.
Base_Legend adds functionality for legend drawing. New members added include a std::string, and Get/Set functions.
Base_Drawable adds a pure virtual member: void Draw(...) = 0. This is to force overriding in all inherited objects which are drawable.
GraphData_Generic adds functionality for adding/removing data points to a set of vectors. These are pure abstract methods, and must be overridden by any data classes which inherit.
GraphData and HistogramData are 2 data types which have implementations of the functions from GraphData_Generic. (No implementation of Draw().)
GraphData_GenericDrawable doesn't do anything. It is to be used as a base class pointer, so that a vector of these objects can be used as data (add/remove data points) and can be drawn (using void Draw()). This class can also be used to call the Get()/Set() methods for the std::string used in the legend.
Finally, at the bottom are GraphData_Drawable and HistogramData_Drawable which overload the void Draw() function. This code specifies exactly how the data should be drawn, depending on whether we have a Histogram or general set of data points.
Problem
Currently, I am using template types. The type of data for the datapoints / histogram bin values is specified by using a template.
For example, one can have a HistogramData<double>, HistogramData_Drawable<double>, HistogramData_Drawable<int>, etc. Similarly, one can have GraphData<double>, GraphData<float>, GraphData_Drawable<double>, etc.
So hopefully it should be fairly obvious what's going on here without me uploading my ~ 10000 lines of code...
Right, so, in addition I have some class Graph, which contains a std::vector<GraphData_Generic_Drawable*>, hence the use of the base class pointer, as suggested above.
BUT! One has to decide what type of data should be used as the underlying type. I MUST choose either std::vector<GraphData_Generic_Drawable<double>*> or std::vector<GraphData_Generic_Drawable<float>*>.
This isn't useful, for obvious reasons! (I could choose double and force the user to convert all values manually, but that's just an easy way out which creates more work later on.)
A (very) ugly solution would be to have a std::vector<> for each possible type... int long unsigned long long double float unsigned char... etc...
Obviously this is going to be hideous and essentially repeat loads of code..
So, I intend to implement an AddData method which adds data to that vector, and I also currently have the following method:
// In class Graph
void DrawAll()
{
    for(std::vector<GraphData_Generic_Drawable*>::iterator it = m_data.begin(); it != m_data.end(); ++it)
        (*it)->Draw(arguments);
} // Draw() takes arguments including a canvas to draw to, but this isn't directly relevant to the question
Which iterates over the vector and calls Draw for each set of data in there.
How to fix it?
My current thoughts are something along the following lines: I need to implement some sort of interface for an underlying data class which retrieves values independently of the underlying type. But this is only a very vague initial idea and I'm not really sure how I would go about implementing it, hence the question... I'm not sure this is even what I should be doing...
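Something like the following rough, untested sketch is the kind of interface I have in mind (IPlottableData and the conversion to double are placeholders; my real GraphData has the full hierarchy described above):

#include <cstddef>
#include <vector>

// Non-template view of one data set: everything the drawing code needs,
// expressed in a single fixed type, regardless of the stored T.
class IPlottableData
{
public:
    virtual ~IPlottableData() {}
    virtual std::size_t Size() const = 0;
    virtual double X(std::size_t i) const = 0;
    virtual double Y(std::size_t i) const = 0;
};

template<typename T>
class GraphData : public IPlottableData
{
public:
    std::size_t Size() const override { return data_x.size(); }
    double X(std::size_t i) const override { return static_cast<double>(data_x[i]); }
    double Y(std::size_t i) const override { return static_cast<double>(data_y[i]); }
private:
    std::vector<T> data_x;
    std::vector<T> data_y;
};

A std::vector<IPlottableData*> could then hold GraphData<double>, GraphData<int>, etc. side by side, but I'm not sure this is the right direction.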
If this isn't clear ask me a question and I'll update this with more details.
I have a quite complex virtual object hierarchy that represents all the elements in a 3D engine as abstract classes (interfaces).
For example, I have Renderable, whose parent is Sizeable (with a getSize() method). Sizeable inherits from Positionable (with getPosition()), etc.
That structure is fine and logical (e.g. a 3D model is Renderable, a bone of a skeleton for skinning is Sizeable, and the Camera is only Positionable).
There is also one "uber-class", Engine3D.
My aim is:
I have to write the "implementation" for that "graphic things" module. It will be a DirectX "implementation". The aim: a programmer that uses my "implementation" can switch to another one quickly and simply (which implementation he uses is almost transparent to him).
I would like to keep it that way:
//choosing module "implementation" ("implementation" mentioned here only)
Engine3D * engine = new MyEngine3D();
Renderable * model = engine->createModel(...);
//line above will return MyRenderable class in fact,
//but user (programmer) will treat it as Renderable
Why do I want to create my "own" versions of Renderable and all the others? Because they will share some "implementation"-specific data (pointers to DirectX structures etc.).
My problem is:
But that way, I would create a "mirror": a copy of the original module's object hierarchy with My in front of each class name. Moreover, MyRenderable would have to inherit both from Renderable (to override render()) and from MySizeable (to get the DirectX matrices etc.).
And that involves the virtual inheritance and really complicates the structure.
Is there an easier way?
I'm speaking mainly about avoiding virtual multi-inheritance (just multi-inheritance is fine, I guess).
You should definitely avoid a strong coupling of object hierarchy and rendering implementation. My suggestion would be to move all the DirectX specific code to a class outside of your object hierarchy, for example an IRenderer interface together with a DirectXRenderer implementation. Add a reference or pointer to IRenderer to all the classes which have to draw something (Renderable, etc). All object classes must use your own implementations of matrices etc. to keep them independent from the data structures of the actual rendering backend.
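A rough sketch of that split (Mesh and Matrix4 stand in for your engine's own backend-independent types):

// Engine-level geometry/math types stay backend-independent.
class Mesh;
class Matrix4;

class IRenderer
{
public:
    virtual ~IRenderer() {}
    virtual void drawMesh(const Mesh &mesh, const Matrix4 &world) = 0;
    // ... other operations the object hierarchy needs from a backend
};

class DirectXRenderer : public IRenderer
{
public:
    void drawMesh(const Mesh &mesh, const Matrix4 &world) override
    {
        // translate the engine-level Mesh/Matrix4 into DirectX structures
        // and issue the actual draw calls here
    }
};

Renderable and the rest of the hierarchy then just hold an IRenderer reference or pointer (handed out by Engine3D), so there is no MyRenderable mirror hierarchy and no need for virtual multiple inheritance; switching to another backend means providing another IRenderer implementation.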
In game development there is a notion of an Entity System, which aims to simplify the game loop by means of a flexible architecture. For details see the links below:
http://www.richardlord.net/blog/what-is-an-entity-framework
http://shaun.boyblack.co.za/blog/2012/08/04/games-and-entity-systems/
Now I wonder how it is possible to realize automatic Node creation when a Component is added to an Entity in C++. Please tell me the principle of identifying which Nodes can be spawned for a specific Entity: i.e., you have a list of Components, and classes that aggregate components, and you need to figure out which of those classes can be created from that list of data.
For example I have Components:
class PositionComponent
{
int m_x;
int m_y;
int m_rotation;
};
class VelocityComponent
{
int m_vX;
int m_vY;
int m_vAngular;
};
class RenderableComponent
{
Sprite m_view;
};
And nodes:
class MoveNode
{
PositionComponent m_position;
VelocityComponent m_velocity;
};
class RenderNode
{
RenderableComponent m_rend;
PositionComponent m_position;
};
Now if I create an Entity like this:
Entity * e = new Entity;
e->add(new PositionComponent);
e->add(new VelocityComponent);
Then I want to have code that creates a MoveNode automatically, and if I also add this:
e->add(new RenderableComponent);
Then I want to know that a RenderNode is also created. Consequently, when I remove the component:
e->remove(new RenderableComponent);
the RenderNode should be deleted. And this process, of course, should not be bound to the specific Nodes and Components I have defined.
How is it possible to realize this in C++?
I am slightly confused, since it appears to mix concepts. I will try to shed some light on the two concepts.
Entity & Component
The entity component system is quite common in game engines, for example Unity implements it quite visibly. It tries to address the issue that simple inheritance does not work well in many cases, such as mixing rendering and collision information; is a Collidable also a Renderable? And since multiple inheritance is a scary thing for many and not supported in many languages, the only way out of this is the Entity/Component design. (Actually not the only solution, but that is a different issue.)
The design for entity component is quite simple: you have a class Entity that takes multiple objects of type Component. There will be multiple components that "do" something, like a MeshRenderer, TriMeshCollision or RigidBodyMotion. As stated in the articles, the actual logic does not need to be implemented in the components themselves. The component just "flags" the entity for specific logic. It makes sense to delegate the actual work to be done in a tight loop in a system, maybe even in a different thread, but more on that later.
Then the actual entity is composed. There are two basic ways to do this, in code or in data.
For example, you compose objects in code that represent one "real world" object; the object of type Goblin exists and is derived from the class Entity. The constructor of Goblin will then create all components and register them on itself. Inheritance is now only done for high-level logic; for example, the FastGoblin is derived from Goblin and only has a different material and speed setting.
The second way to create objects is through data; that is, you have some form of object description language (something in XML or JSON, say). A factory method then creates the entity based on a template defined in this object description language.
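A compact sketch of the in-code variant, assuming the components from the question derive from a common Component base (Goblin and the add() signature are illustrative):

#include <memory>
#include <vector>

class Component { public: virtual ~Component() {} };

// Assume the question's components derive from Component:
class PositionComponent : public Component { /* m_x, m_y, m_rotation */ };
class VelocityComponent : public Component { /* m_vX, m_vY, m_vAngular */ };

class Entity
{
public:
    void add(std::unique_ptr<Component> c) { components.push_back(std::move(c)); }
private:
    std::vector<std::unique_ptr<Component>> components;
};

// In-code composition: the "real world" object assembles its components
// in its constructor; inheritance is only used for high-level logic.
class Goblin : public Entity
{
public:
    Goblin()
    {
        add(std::make_unique<PositionComponent>());
        add(std::make_unique<VelocityComponent>());
    }
};

The data-driven variant would build the same entity in a factory that reads the component list from an XML/JSON template instead of hard-coding it.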
Node Based Work Scheduling
It may make sense to have objects that are fully defined, but whose logic is not being executed. Think about objects on the server or in the editor. On the server you do not want the rendering code to be in the way. So the basic approach is to create components that contain no logic. The problem to solve is: how do you efficiently get things done without iterating through the entire scene each frame and typecasting the objects around?
What your second link describes is basically a botched version of Designing the Framework of a Parallel Game Engine.
There needs to be a way to schedule the work in an efficient way. The proposed solution is to have "nodes" that each do a specific task. The nodes are then scheduled, by submitting them to either a work scheduler or a specific system.
Take for example rendering. You have an entity and it has a MeshRenderer component. This component will create a RenderNode and submit it to the RenderSystem. Then when it is time to render the frame the RenderSystem will simply iterate over each RenderNode and call its display method. In the display method the actual rendering is done.
Alternatively the system, engine or entity can create nodes based on specific component configurations. Take for example physics. The Entity has the TriMeshCollision and RigidBodyMovement components. The PhysicsSystem seeing this configuration creates a RigidBodyNode that takes the two components as inputs and thus implements rigid body motion. Should the entity only have a TriMeshCollision component the PhysicsSystem would then create a StaticColliderNode to implement the behavior.
But like the construction mechanic for components from data, the nodes can also be created and attached to the entity through a factory function. This can be part of either the object definition or a rule based system.
Mapping this design into C++ should be straightforward. The rather difficult bit is to figure out how the different bits get connected; for example, how the MeshRenderer gets access to the RenderSystem so it can submit its RenderNode. But this can be solved with a singleton (shudder) or by passing a Game/Engine object around at the construction of the Entity or Component.
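A very condensed sketch of that wiring, with all names invented for illustration and the RenderSystem passed in at construction instead of being fetched from a singleton:

#include <vector>

class RenderNode
{
public:
    virtual ~RenderNode() {}
    virtual void display() = 0;   // the actual drawing happens here
};

class MeshRenderNode : public RenderNode
{
public:
    void display() override { /* issue the draw calls for this mesh */ }
};

class RenderSystem
{
public:
    void submit(RenderNode *node) { nodes.push_back(node); }

    void renderFrame()
    {
        for (RenderNode *n : nodes)
            n->display();         // tight loop, no scene-wide typecasting
    }

private:
    std::vector<RenderNode *> nodes;
};

// The component only "flags" the entity for rendering and hands its node
// to the system; the system it talks to is injected at construction time.
class MeshRenderer
{
public:
    explicit MeshRenderer(RenderSystem &system) { system.submit(&node); }
private:
    MeshRenderNode node;
};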
Is this good design?
But the issue I want to address here is: Is this good design?
I have trouble with your second link (Games And Entity Systems), since I think the design will fall flat on its nose quite quickly. This is true for other aspects like physics as well, but it will become especially inefficient when considering modern 3D rendering.
When you need to organize the scene spatially to efficiently cull hidden objects, organize the objects into batches for lighting, and reduce resource switching, then the entire "list of nodes" concept is moot, since you need a separate organisational structure anyway.
At this point you can let the components "talk" directly to the systems, where each system has its own specific API that fits its specific purpose. The requirements of rendering, sound and input are each significantly different, and trying to cram them into one API is futile.
See Also
Entity/Component based engine rendering separation from logic
I considered this scenario: objects that roughly look like this:
class PhysicalObject
{
private:
    virtual void Update() = 0;
    friend class PhysicsController;
    void DoUpdate() { this->Update(); }
};
There's a controller class called PhysicsController that manages the dynamics of a pool of physical objects by calling their DoUpdate() method. This method, in turn, calls an overridden version of the Update() function, where a numerical integrator is used to compute the object's position, velocity and acceleration step-wise. I thought that having an interface implying this functionality would be a good starting point:
class IIntegrator
{
public:
    virtual void operator() (const vec3& pos, const vec3& vel, vec3& outPos, vec3& outVel) = 0;
};
Now, inheriting this IIntegrator abstract class and providing implementations for the various methods (explicit Euler, RK4, Verlet, Midpoint, Symplectic Euler, and perhaps some semi-implicit/IMEX or implicit ones would be excellent) is the next step. The problem is that I don't see clearly how to do the following two things:
Each physical object computes its own acceleration at any of its vertices in different ways (considering the objects consist of mass points connected through springs or some kind of constraining objects). This function must be passed to the integrator, but it is object-specific. It is possible to get pointers to non-static methods, but how would this fit the IIntegrator interface?
When an object calls its Update() method, what happens behind the scenes is that an integrator is used to provide the functionality. I'd like to switch the integration method on the fly, perhaps, or at least instantiate the same kind of object with different integrators. To me, it sounds like a factory for the latter and, for on-the-fly integrator switching, perhaps a strategy pattern? What solution would be elegant and efficient in this context?
Without going into implementation details, here are a few design patterns that might be applied to your problem:
Factory or Prototype: to create objects at startup from a file, or to clone them during run-time, respectively.
Composite: this might be used to model PhysicalObjects, either as stand-alone objects or as collections connected by strings, springs or gravitational forces.
Iterator or Visitor: this might be used by PhysicsController to iterate over all physical objects (composite or stand-alone) and apply a function over them.
Strategy: to select different IIntegrator objects and their integration functions at runtime (a minimal sketch follows below).
Apart from the GoF book (Amazon), a good online resource is here
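For the Strategy point in particular, here is a minimal sketch; the step() signature is adapted from the operator() in the question (a time step and an acceleration callback are added), vec3 is assumed to be a simple vector type, and the std::function callback is one possible way to hand the object-specific acceleration from the first question to the integrator:

#include <functional>
#include <memory>

struct vec3 { float x = 0, y = 0, z = 0; };

// Object-specific acceleration supplied as a callable, so the integrator
// does not need to know about the concrete PhysicalObject.
using AccelFn = std::function<vec3(const vec3 &pos, const vec3 &vel)>;

class IIntegrator
{
public:
    virtual ~IIntegrator() {}
    virtual void step(const vec3 &pos, const vec3 &vel, const AccelFn &accel,
                      float dt, vec3 &outPos, vec3 &outVel) const = 0;
};

class ExplicitEuler : public IIntegrator
{
public:
    void step(const vec3 &pos, const vec3 &vel, const AccelFn &accel,
              float dt, vec3 &outPos, vec3 &outVel) const override
    {
        const vec3 a = accel(pos, vel);
        const vec3 newPos{pos.x + vel.x * dt, pos.y + vel.y * dt, pos.z + vel.z * dt};
        const vec3 newVel{vel.x + a.x * dt, vel.y + a.y * dt, vel.z + a.z * dt};
        outPos = newPos;   // computed into locals first, in case outPos/outVel alias pos/vel
        outVel = newVel;
    }
};

class PhysicalObject
{
public:
    // Strategy: the integration method can be swapped at runtime.
    void setIntegrator(std::unique_ptr<IIntegrator> i) { integrator = std::move(i); }

    void Update(float dt)
    {
        integrator->step(pos, vel,
                         [this](const vec3 &p, const vec3 &v) { return acceleration(p, v); },
                         dt, pos, vel);
    }

private:
    // Placeholder for the object-specific acceleration (springs, constraints, ...).
    vec3 acceleration(const vec3 &, const vec3 &) const { return vec3{0.0f, -9.81f, 0.0f}; }

    vec3 pos, vel;
    std::unique_ptr<IIntegrator> integrator = std::make_unique<ExplicitEuler>();
};

A Factory could hand out the initial IIntegrator, and setIntegrator() allows switching it on the fly.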