I always run into confusion with who should know about the other.
for example:
Circle.Draw(&canvas) or Canvas.Draw(&circle)
or Draw(&canvas, &circle)
EmployeeVector.Save(&file) or File.Save(&employee_vector)
or even still
void operator() (Employee e) { Save(e.Serialize()); }
for_each(employees.begin(), employees.end(), File());
I think I end up "abstracting" too much where I have all kinds of adapters so nobody knows about anybody.
Depends on who has the expertise.
If the only things you can draw are circles, then of course you could just put that in Canvas and be on your way. If Canvas has a method to draw generic Shapes, then it falls to the various subclasses of Shape to draw themselves. For instance, a circle surely knows how to draw itself on a canvas. I doubt a canvas natively knows how to draw a circle, unless you hardcode the functionality, which kinda kills the whole idea of polymorphism.
For the same reasons, a vector would probably know how to save itself to a file, but I doubt a file knows what to do with a vector. But a vector can contain a variety of things, so it should delegate most of the work to its actual elements. So the for_each idea is probably the best.
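For example, here is a minimal sketch of the Shape/Canvas direction (the Canvas::plot primitive is just something I made up for illustration):

#include <iostream>

// Canvas exposes primitive operations; it knows nothing about concrete shapes.
class Canvas {
public:
    void plot(int x, int y) { std::cout << "plot(" << x << "," << y << ")\n"; }
};

// Each shape knows how to draw itself onto a Canvas.
class Shape {
public:
    virtual ~Shape() = default;
    virtual void draw(Canvas& canvas) const = 0;
};

class Circle : public Shape {
public:
    Circle(int cx, int cy, int r) : cx_(cx), cy_(cy), r_(r) {}
    void draw(Canvas& canvas) const override {
        // a real circle rasterizer would go here; plotting the center is enough for a sketch
        canvas.plot(cx_, cy_);
    }
private:
    int cx_, cy_, r_;
};

int main() {
    Canvas canvas;
    Circle circle(10, 20, 5);
    Shape& shape = circle;   // calling code can work purely in terms of Shape
    shape.draw(canvas);
}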
Like most design questions, the answer is, it depends. :-)
Is it meaningful for something to know how to draw itself? Possibly. It's equally possible that something else knows how to draw it. If the object is a graphical entity, it probably should know how to draw itself.
As for things like saving, again, it depends... it can be good for things to know how to serialize themselves to an abstraction like a stream, but also, sometimes, it's better not to couple entities to such trivial matters like serialization....
For classes that you are creating, you usually make them do work involving themselves. A Circle will draw itself onto a Canvas, so will a Rectangle. That way, if they are all subclasses of Shape, they can draw themselves through a Shape interface. It is the same for saving. This is extensible -- you do not need to guess all the possible shapes when designing the Canvas class.
For the "it depends" cases, for me, it usually involves utility methods for classes already defined by some library. For example, save/load commonly used STL data structures like map, set, vector, etc; via a File utility class with static methods.
Related
I've heard people saying that having protected members kind of breaks the point of encapsulation and is not the best practice; one should design the program so that derived classes do not need access to private base class members.
An example situation
Now, imagine the following scenario: a simple 8-bit game. We have a bunch of different objects, such as regular boxes that act as obstacles, spikes, coins, moving platforms, etc. The list can go on.
All of them have x and y coordinates, a rectangle that specifies the size of the object, a collision box, and a texture. They can also share functions like setting the position, rendering, loading the texture, checking for collisions, etc.
But some of them also need to modify base members, e.g. boxes can be pushed around so they might need a move function, some objects may move by themselves, or maybe some blocks change their texture in-game.
Therefore a base class like Object can really come in handy, but that would require either a ton of getters/setters or making the private members protected instead. Either way, it compromises encapsulation.
Given the anecdotal context, which would be a better practice:
1. Have a common base class with shared functions and members, declared as protected. Be able to use common functions and pass a reference to the base class to non-member functions which only need to access shared properties. But compromise encapsulation.
2. Have a separate class for each, declare the member variables as private and don't compromise encapsulation.
3. A better way that I couldn't have thought.
I don't think encapsulation is highly vital here, and probably the way to go for that anecdote would be just having protected members, but my goal with this question is writing well-practiced, standard code rather than solving that specific problem.
Thanks in advance.
First off, I'm going to start by saying there is not a one-size fits all answer to design. Different problems require different solutions; however there are design patterns that often may be more maintainable over time than others.
Indeed, a lot of design suggestions are aimed at making code better in a team environment -- but good practices are useful for solo projects as well, so that the code is easier to understand and change in the future.
Sometimes the person who needs to understand your code will be you, a year from now -- so keep that in mind😊
I've heard people saying that having protected members kind of breaks the point of encapsulation
Like any tool, it can be misused; but there is nothing about protected access that inherently breaks encapsulation.
What defines the encapsulation of your object is its intended API surface area. Sometimes that protected member is logically part of the surface area -- and this is perfectly valid.
If misused, protected members can give clients access to mutable state that may break a class's intended invariants -- which would be bad. An example of this would be if you were able to derive a class exposing a rectangle and set the width/height to a negative value. Functions in the base class, such as compute_area, could suddenly yield wrong values -- and cause cascading failures that should otherwise have been guarded against by better encapsulation.
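A hedged illustration of that failure mode (the class and member names here are made up):

#include <iostream>

class RectangleBase {
public:
    RectangleBase(int w, int h) : width_(w), height_(h) {}
    int compute_area() const { return width_ * height_; }
protected:
    // Exposed to derived classes: nothing stops them from breaking the invariant
    // that width and height stay non-negative.
    int width_;
    int height_;
};

class SquishedRectangle : public RectangleBase {
public:
    SquishedRectangle() : RectangleBase(4, 3) {
        height_ = -3;  // legal for a derived class, but compute_area() now returns -12
    }
};

int main() {
    SquishedRectangle r;
    std::cout << r.compute_area() << '\n';  // prints a nonsensical negative area
}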
As for the design of your example in question:
Base classes are not necessarily a bad thing, but can easily be overused and can lead to "god" classes that unintentionally expose too much functionality in an effort to share logic. Over time this can become a maintenance burden and just an overall confusing mess.
Your example sounds better suited to composition, with some smaller interfaces:
Things like a point and a vector type would be base-types to produce higher-order compositions like rectangle.
This could then be composed together to create a model which handles general (logical) objects in 2D space that have collision.
Intersection/collision logic can be handled by an outside utility class.
Rendering can be handled through a renderable interface, which any class that needs to render extends.
Intersection handling can go through an intersectable interface, which determines the behavior of an object on intersection (this effectively abstracts each of the game objects into raw behaviors).
etc
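A rough sketch of that composition-plus-small-interfaces direction (all the names here are illustrative, not a prescribed API):

#include <vector>

// Small value types composed into higher-order ones.
struct Point { float x, y; };
struct Rect  { Point origin; float w, h; };

// Narrow interfaces: a class opts into only the behaviors it needs.
struct Renderable {
    virtual ~Renderable() = default;
    virtual void render() const = 0;
};

struct Intersectable {
    virtual ~Intersectable() = default;
    virtual Rect bounds() const = 0;
    virtual void onIntersect(Intersectable& other) = 0;
};

// Collision checking lives in a free utility function, not in a god base class.
inline bool intersects(const Rect& a, const Rect& b) {
    return a.origin.x < b.origin.x + b.w && b.origin.x < a.origin.x + a.w &&
           a.origin.y < b.origin.y + b.h && b.origin.y < a.origin.y + a.h;
}

// A coin needs both behaviors; a decorative background tile might only be Renderable.
class Coin : public Renderable, public Intersectable {
public:
    void render() const override { /* draw the coin sprite */ }
    Rect bounds() const override { return box_; }
    void onIntersect(Intersectable&) override { collected_ = true; }
private:
    Rect box_{{10, 10}, 8, 8};
    bool collected_ = false;
};

int main() {
    Coin coin;
    coin.render();
    Rect player{{12, 12}, 4, 4};
    if (intersects(coin.bounds(), player)) {
        // a collision system would dispatch onIntersect here
    }
}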
Encapsulation is not a security thing, it's a neatness thing (and hence a supportability and readability thing). You have to assume that people deriving classes are basically sensible. They are, after all, either writing programs of their own using your base classes (so who cares), or they are writing in a team with you.
The primary purpose of "encapsulation" in object-oriented programming is to limit direct access to data in order to minimize dependencies, and where dependencies must exist, to express those in terms of functions not data.
This ties in with Design by Contract, where you allow "public" access to certain functions and reserve the right to modify others arbitrarily, at any time, for any reason, even to the point of removing them, by marking those as "protected".
That is, you could have a game object like:
class Enemy {
public:
    int getHealth() const;
};
Where the getHealth() function returns an int value expressing the health. How does it derive this value? It's not for the caller to know or care. Maybe it's byte 9 of a binary packet you just received. Maybe it's a string from a JSON object. It doesn't matter.
Most importantly because it doesn't matter you're free to change how getHealth() works internally without breaking any code that's dependent on it.
However, if you expose a public int health member, that opens up a whole world of problems. What if it is manipulated incorrectly? What if it's set to an invalid value? How do you trap access when that property is manipulated?
It's much easier when you have setHealth(const int health) where you can do things like:
clamp it to a particular range
trigger an event when it exceeds certain bounds
update a saved game state
transmit an update over the network
hook in other "observers" which might need to know when that value is manipulated
None of those things are easily implemented without encapsulation.
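As a hedged sketch (the observer wiring and maxHealth_ are my own additions for illustration), such a setter might look like:

#include <algorithm>
#include <functional>
#include <vector>

class Enemy {
public:
    int getHealth() const { return health_; }

    void setHealth(const int health) {
        // clamp to a valid range
        const int clamped = std::clamp(health, 0, maxHealth_);
        if (clamped == health_) return;
        health_ = clamped;
        // notify anyone who cares: UI, save system, network layer, ...
        for (const auto& observer : observers_)
            observer(health_);
    }

    void onHealthChanged(std::function<void(int)> observer) {
        observers_.push_back(std::move(observer));
    }

private:
    int health_ = 100;
    int maxHealth_ = 100;
    std::vector<std::function<void(int)>> observers_;
};

int main() {
    Enemy e;
    e.onHealthChanged([](int h) { /* e.g. update the health bar */ });
    e.setHealth(-50);  // ends up clamped to 0, and observers are told exactly once
}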
protected is not just a "get off my lawn" thing, it's an important tool to ensure that your implementation is used correctly and as intended.
I am designing a game engine in C++. I am currently working on categorizing the different entities in the game. My base class is SpriteObject, from which two classes, MovableObject and FixedObject, inherit. Now if I, for example, create an instance of a MovableObject and want to add it to a vector of Sprite pointers and a vector of MovableObject pointers, I just do:
std::vector<Sprite*> sprites;
std::vector<MovableObject*> movableObjects;

MovableObject* movingObject = new MovableObject();
sprites.push_back(movingObject);
movableObjects.push_back(movingObject);
But as the different categories and entities grow, the code will get large (and it would get tiresome to add every entity to every vector that it belongs to). How do I automatically add an object to the vectors that it belongs to when it is created?
EDIT 1: I think I just came up with a solution: what if I just make a global static class Entities that holds all the vectors of entities in the scene? Every entity could have access to this class, and when an entity is created it just adds a pointer to itself to the corresponding vector(s) in that global class.
EDIT 2: But I forgot that my solution still requires me to manually add every entity to its matching vector. I just split the work among the different entities.
This is a nice problem.
I think that I would implement it like this: there will be an addToVector() method in the Sprite class, and each derived class will override it to add itself to the corresponding vector.
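As a hedged sketch of that idea (the Registry parameter is my assumption -- the answer doesn't say where the vectors live):

#include <vector>

struct Registry;  // where the vectors live; invented for this sketch

struct Sprite {
    virtual ~Sprite() = default;
    virtual void addToVector(Registry& registry);   // base version registers as a plain sprite
};

struct MovableObject : Sprite {
    void addToVector(Registry& registry) override;  // also registers in the movable list
};

struct Registry {
    std::vector<Sprite*> sprites;
    std::vector<MovableObject*> movableObjects;
};

void Sprite::addToVector(Registry& registry) {
    registry.sprites.push_back(this);
}

void MovableObject::addToVector(Registry& registry) {
    Sprite::addToVector(registry);                   // keep the base registration
    registry.movableObjects.push_back(this);
}

int main() {
    Registry registry;
    MovableObject m;
    m.addToVector(registry);  // lands in both vectors with a single call
}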
I would suggest a different approach. But before I start I would like to note one thing with your current design.
I would hide the creation of those objects behind a facade. Call it a scene or whatever. Using new manually is bad from a couple of perspectives. First of all, if you decide you want to change the scheme of how you allocate/construct your objects, you have to change it everywhere in the code. If you have, let's say, a factory like Scene, you just change the implementation and the calls to scene->CreateObject<Sprite>() will remain the same everywhere else. This might get important once you start adding stuff like custom memory allocation schemes, object pools, etc., and at some point you will, if you start to grow your engine. Even if this is just an exercise and a for-fun project, we all want to do this like it's actually done, right ;) ?
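A hedged sketch of such a facade (CreateObject comes from this answer; everything else is one possible way to fill it in):

#include <memory>
#include <utility>
#include <vector>

struct GameObject {
    virtual ~GameObject() = default;
};

struct Sprite : GameObject {};

// The Scene owns creation: callers never touch new/delete directly, so the
// allocation strategy (pools, arenas, ...) can change in one place later.
class Scene {
public:
    template <typename T, typename... Args>
    T* CreateObject(Args&&... args) {
        auto object = std::make_unique<T>(std::forward<Args>(args)...);
        T* raw = object.get();
        objects_.push_back(std::move(object));
        return raw;
    }

private:
    std::vector<std::unique_ptr<GameObject>> objects_;
};

int main() {
    Scene scene;
    Sprite* sprite = scene.CreateObject<Sprite>();  // same call site even if allocation changes
    (void)sprite;
}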
Now, going back to the core: don't abuse inheritance.
A MovableObject is not a Sprite. A static object is not a sprite either. They are just that: movable and static elements.
A sprite can be movable or static, so it has the behavior of a dynamic or a static element.
Use composition instead. Make a Sprite accept a behavior, or better, a list of behaviors. In fact, the Sprite itself is just a behavior on a game object too; it just controls the way the object is presented to the user.
What if you had an object to which you could attach multiple behaviors: the fact that it is dynamic, that it has a sprite presence in the scene, and, even more, that it is a sound emitter!
If you add those behaviors to the object, you have to create them first. They can, when constructed, decide which list they should subscribe to.
This is all a metaphor for a well-known system that is proven to work well and is used in most game engines nowadays: an Entity Component System.
Your objects with behaviors are Entities, the behaviors are Components, and each component type is controlled by one System that knows the component and knows how to update/handle it.
Objects in the scene are merely sets of components attached to them that act upon them.
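A very reduced sketch of the component idea (it folds the "system" part into a simple per-entity update to keep it short, so it's component-based rather than a full ECS):

#include <memory>
#include <vector>

struct Entity;  // forward declaration

// A component is one behavior attached to an entity.
struct Component {
    virtual ~Component() = default;
    virtual void update(Entity& owner, float dt) = 0;
};

struct Entity {
    std::vector<std::unique_ptr<Component>> components;
    float x = 0, y = 0;

    void update(float dt) {
        for (auto& component : components)
            component->update(*this, dt);
    }
};

// "Movable" is a behavior, not a base class of Sprite.
struct Velocity : Component {
    float vx, vy;
    Velocity(float vx, float vy) : vx(vx), vy(vy) {}
    void update(Entity& owner, float dt) override {
        owner.x += vx * dt;
        owner.y += vy * dt;
    }
};

int main() {
    Entity box;
    box.components.push_back(std::make_unique<Velocity>(1.0f, 0.0f));
    box.update(0.016f);  // the box moves because it has the behavior, not because of its type
}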
I've decided to make my next game using my own simple engine. I've already written some code for object rendering, physics etc. and now I'm thinking about how to easily connect them together.
I want to make a hierarchical structure with one master object, let's call it Scene, which will be the parent of Sprites or InteractiveObjects, and every Sprite or InteractiveObject could have its own children, which would have their own children... I think you already got my point here :)
Let's assume that every object type will inherit from some base object, let's call it Node for example. I'm not sure yet if Node will be a "real" object which has its own size, position, etc., or only an abstract wrapper for every object in the game (I tend toward option two, actually).
And finally, my goal is to have an object for the actual Scene, call something like Scene->Move(x,y), and have it move every child of the Scene (or of a Sprite, InteractiveObject, etc.). Or call Scene->Render() and have it render every (renderable) child. If I create a Sprite, I want to add children with Sprite->addChild(), and a child could be another Sprite, an InteractiveObject, or just a simple Node.
And now my question. What's the best way to implement it with C++? Or am I totally wrong and this structure is stupid? :)
I should think that whether or not the structure is sensible depends somewhat on what you really want to achieve -- the system sounds very flexible, but usually there's a trade-off between flexibility and performance. Depending on the genre of the game, performance may be hard enough to come by.
Also, if all things derive from some BaseNode, they all need (albeit possibly empty) methods for all kinds of things, whether or not they actually can be rendered, moved, etc. Or you'd end up with lots of dynamic_casts, which isn't very nice either. It might therefore be better to have slightly less flexibility and differentiate between game entities and graphical entities, with the latter being part of the former (you might want to allow a game entity to be made up of multiple graphical entities, or sub-entities, though).
If you do go with your current architecture, I should think that each BaseObject would hold something like a vector of children, and when you call, say, render() on a master object, it goes through all its children and calls render() on them. They do the same, and run any render code that is appropriate to them.
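A hedged sketch of that recursive structure (ownership via unique_ptr is just one choice; see the next paragraph about shared ownership):

#include <memory>
#include <vector>

class Node {
public:
    virtual ~Node() = default;

    Node* addChild(std::unique_ptr<Node> child) {
        children_.push_back(std::move(child));
        return children_.back().get();
    }

    void render() {
        renderSelf();                 // whatever is appropriate for this node (possibly nothing)
        for (auto& child : children_)
            child->render();          // then recurse into the children
    }

    void move(float dx, float dy) {
        x_ += dx;
        y_ += dy;
        for (auto& child : children_)
            child->move(dx, dy);
    }

protected:
    virtual void renderSelf() {}      // a plain Node draws nothing

private:
    std::vector<std::unique_ptr<Node>> children_;
    float x_ = 0, y_ = 0;
};

class Sprite : public Node {
protected:
    void renderSelf() override { /* draw the sprite */ }
};

int main() {
    Node scene;
    scene.addChild(std::make_unique<Sprite>());
    scene.move(5, 0);   // moves every child too
    scene.render();     // renders the whole tree
}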
Another question is, though, whether an object could feasibly be attached to several other objects (if there is a difference between rendering and physics, for example). If so, it can get hairy to know when to delete an object, unless you don't use plain BaseObject*, but some form of auto_ptr or shared_ptr.
I hope that this answer does help you a little, though I realise it's not a simple "this is they way!" one.
I have a data structure that stores ... well, data. Now, I need to access various pieces of data in slightly different manner, so I'm essentially building an in-memory index. But I'm wondering: should the index hold pointers or copies?
To elaborate, say I have
class Widget
{
    // Ways to access the list of gears...
private:
    std::list<Gear> m_gears;
};
Now, I have two Widgets, and there exists between these two a mapping between their Gears. Currently, this is
boost::unordered_map<Gear, Gear>
but Gear is a fairly hefty class, and I feel like making so many copies is poor design. I could store a pointer, but then the mapping is only valid for the lifetime of the corresponding Widgets, and you start getting ->s... (And if that std::list ever changes to a std::vector, it gets more complex...)
Pertaining to the copies, it's actually slightly worse: there are two boost::unordered_maps, one for each direction. So, for each Gear, I'm making up to 2 copies of it.
Alternatively, I could put the index inside the Widget class, but I feel like this violates the responsibilities of the Widget class.
You might try Boost Pointer Container Library: http://www.boost.org/doc/libs/1_43_0/libs/ptr_container/doc/ptr_container.html
I think it addresses exactly the problem you are facing.
Could you store all gears in one place, like statically in the Gear class, and then have each mapping AND Widget store only a reference/index to it?
You would have to keep track of references to each gear so you know when you can dispose of them, but that should be easy enough.
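A hedged sketch of that indirection (a central store plus index-based maps; all names are invented):

#include <cstddef>
#include <unordered_map>
#include <vector>

struct Gear {
    // ... the hefty data lives here exactly once ...
    int teeth = 0;
};

// One central owner of all gears; everything else refers to them by index.
class GearStore {
public:
    std::size_t add(Gear gear) {
        gears_.push_back(std::move(gear));
        return gears_.size() - 1;
    }
    Gear&       get(std::size_t id)       { return gears_[id]; }
    const Gear& get(std::size_t id) const { return gears_[id]; }

private:
    std::vector<Gear> gears_;
};

int main() {
    GearStore store;
    std::size_t a = store.add(Gear{12});
    std::size_t b = store.add(Gear{24});

    // The two-way mapping now copies only small indices, not whole Gears.
    std::unordered_map<std::size_t, std::size_t> forward{{a, b}};
    std::unordered_map<std::size_t, std::size_t> backward{{b, a}};

    store.get(forward[a]).teeth += 1;  // indices stay valid as long as nothing is erased
}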
I wonder if and how writing "almighty" classes in C++ actually impacts performance.
If I have, for example, a class Point with only uint x; uint y; as data, and have defined virtually everything that math can do to a point as methods. Some of those methods might be huge. The (copy-)constructors do nothing more than initialize the two data members.
class Point
{
    int mx;
    int my;
public:
    Point(int x, int y) : mx(x), my(y) {}
    Point(const Point& other) : mx(other.mx), my(other.my) {}
    // .... HUGE number of methods....
};
Now, I load a big image and create a Point for every pixel, stuff them into a vector, and use them (say, all methods get called once).
This is only meant as a stupid example!
Would it be any slower than the same class without the methods but with a lot of utility functions? I am not talking about virtual functions in any way!
My Motivation for this: I often find myself writing nice and relatively powerful classes, but when I have to initialize/use a ton of them like in the example above, I get nervous.
I think I shouldn't.
What I think I know is: methods exist only once in memory (optimizations aside); allocation only takes place for the data members, and they are the only thing copied.
So it shouldn't matter. Am I missing something?
You are right: methods only exist once in memory; they're just like normal functions with an extra hidden this parameter.
And of course, only data members are taken into account for allocation; well, inheritance may introduce some extra pointers for vtables in the object size, but that's not a big deal.
You have already got some pretty good technical advice. I want to throw in something non-technical: as the STL showed us all, doing it all in member functions might not be the best way to do this. Rather than piling up arguments, I refer to Scott Meyers' classic article on the subject: How Non-Member Functions Improve Encapsulation.
Although technically there should be no problem, you still might want to review your design from a design POV.
I suppose this is more of an answer than you're looking for, but here goes...
SO is filled with questions where people are worried about the performance of X, Y, or Z, and that worry is a form of guessing.
If you're worried about the performance of something, don't worry, find out.
Here's what to do:
Write the program
Performance tune it
Learn from the experience
What this has taught me, and I've seen it over and over, is this:
Best practice says Don't optimize prematurely.
Best practice says Do use lots of data structure classes, with multiple layers of abstraction, and the best big-O algorithms, "information hiding", with event-driven and notification-style architecture.
Performance tuning reveals where the time is going, which is: Galloping generality, making mountains out of molehills, calling functions & properties with no realization of how long they take, and doing this over multiple layers using exponential time.
Then the question is asked: What is the reason behind the best practice for the big-O algorithms, the event- and notification-driven architecture, etc. The answer comes: Well, among other things, performance.
So in a way, best practice is telling us: optimize prematurely. Get the point? It says "don't worry about performance", and it says "worry about performance", and it causes the very thing we're trying unsuccessfully not to worry about. And the more we worry about it, against our better judgement, the worse it gets.
My constructive suggestion is this: Follow steps 1, 2, and 3 above. That will teach you how to use best practice in moderation, and that will give you the best all-around design.
If you are truly worried, you can tell your compiler to inline the constructors. This optimization step should leave you with clean code and clean execution.
These 2 bits of code are identical:
Point x;
int l = x.getLength();

int l = GetLength(x);
given that the class Point has a non-virtual method getLength(). The first invocation actually calls something like int getLength(Point& this), an identical signature to the one we wrote in our second example. (*)
This of course wouldn't apply if the methods you're calling were virtual, since everything would go through an extra level of indirection (something akin to the C-style int l = x->lpvtbl->getLength(x)), not to mention that instead of 2 ints for every pixel you'd actually have 3, the extra one being the pointer to the virtual table.
(*) This isn't exactly true: the "this" pointer is usually passed through one of the CPU registers instead of on the stack, but the mechanism could easily have worked either way.
First: do not optimize prematurely.
Second: clean code is easier to maintain than optimized code.
Methods of classes have the hidden this pointer, but you should not worry about it. Most of the time the compiler tries to pass it via a register.
Inheritance and virtual functions introduce indirections in the appropriate calls (inheritance: constructor/destructor calls; virtual functions: every call to such a function).
Short:
Objects you don't create/destroy often can have virtual methods, inheritance and so on as long as it benefits the design.
Objects you create/destroy often should be small (few data members) and should not have many virtual methods (best would be none at all - performance wise).
Try to inline small methods/constructors. This will reduce the overhead.
Go for a clean design and refactor if you don't reach the desired performance.
There is a different discussion about classes having large or small interfaces (for example in one of Scott Meyers' (More) Effective C++ books -- he opts for a minimal interface). But this has nothing to do with performance.
I agree with the above comments wrt performance and class layout, and would like to add a comment not yet stated about design.
It feels to me like you're over-using your Point class beyond its real design scope. Sure, it can be used that way, but should it be?
In past work on computer games I've often been faced with similar situations, and usually the best end result has been that when doing specialized processing (e.g. image processing), having a specialized code set for that, working on differently laid-out buffers, has been more efficient.
This also allows you to optimize performance for the case that matters, in a cleaner way, without making the base code less maintainable.
In theory, I'm sure that there is a crafty way of using a complex combination of template code, concrete class design, etc., and getting nearly the same run-time efficiency ... but I am usually unwilling to make the complexity-of-implementation trade.
Member functions are not copied along with the object. Only data fields contribute to the size of the object.
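A quick way to convince yourself of that on your own compiler (a trivial check, nothing more):

#include <cstdio>

class Point {
public:
    Point(int x, int y) : mx_(x), my_(y) {}
    int length_squared() const { return mx_ * mx_ + my_ * my_; }  // one of many methods
private:
    int mx_;
    int my_;
};

int main() {
    // Methods don't add to the per-object size; only the two ints do
    // (a virtual function would add a vtable pointer).
    std::printf("sizeof(Point) = %zu, 2 * sizeof(int) = %zu\n",
                sizeof(Point), 2 * sizeof(int));
}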
I have created the same Point class as you, except it is a template class and all functions are inline. I expect to see a performance increase, not a decrease, from this. However, an image of size 800x600 will have 480k pixels, and its memory footprint will be close to 4 MB without any color information. It's not just memory: initializing 480k objects will also take too much time. Therefore, I don't think it's a good idea in that case. However, if you use this class to transform the position of an image, or use it for graphic primitives (lines, curves, circles, etc.), it should work well.