For my game I have built a small framework which among other things has:
Entities that own components.
Systems that hold pointers to the entities.
An Engine that owns the systems.
An EntityManager that owns the entities.
Every time I add a Component, the Entity passes its "this" pointer to the Systems through an Engine pointer it holds, and each System decides whether to register the Entity or ignore it.
Now, since the Entities are elements of the EntityManager's container, am I right in assuming that if an insert operation causes shifts or a reallocation, the Systems will no longer hold valid pointers?
If so, what's a good container to prevent this from happening? If I understand correctly, this is much like iterator invalidation, and the same rules should apply when I need pointers to survive insertion.
If you store a vector of entities and then keep iterators into it to access them: yes, a reallocation might invalidate all of them.
The suggested way is to store a vector of pointers (if you need automatic lifetime management you might want to go for a vector of smart pointers). This way the pointers remain valid across insertions and deletions (assuming nothing else deletes the objects), regardless of any reallocation of the container's storage.
It isn't clear from the question, but a word of advice if you're storing objects in your containers instead of pointers: when inserting elements into a container with, e.g.,
std::vector<T>::push_back()
you're storing a copy of the object. This is usually undesirable since it brings additional copy overhead and can create problems if the copy semantics aren't set up properly. See "shallow copy" and "deep copy" to learn more about this problem.
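To make that concrete, here is a minimal sketch (the Entity type and the C++14 std::make_unique call are illustrative assumptions, not code from the question):

#include <iostream>
#include <memory>
#include <vector>

struct Entity { int id; };

int main() {
    // Storing values: push_back stores a copy, and a later reallocation
    // can move every element to a new address.
    std::vector<Entity> byValue;
    byValue.push_back(Entity{0});
    Entity* raw = &byValue[0];
    for (int i = 1; i <= 100; ++i) byValue.push_back(Entity{i});
    // 'raw' may now dangle -- do not dereference it.

    // Storing smart pointers: the vector of pointers may reallocate,
    // but the Entity objects themselves never move, so pointers handed
    // out to the Systems stay valid.
    std::vector<std::unique_ptr<Entity>> byPointer;
    byPointer.push_back(std::make_unique<Entity>(Entity{0}));
    Entity* stable = byPointer[0].get();
    for (int i = 1; i <= 100; ++i)
        byPointer.push_back(std::make_unique<Entity>(Entity{i}));
    std::cout << (stable == byPointer[0].get()) << '\n'; // prints 1
    (void)raw;
}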
A pointer's value will only change if the object it points to is actually relocated in memory.
That is what happens when you manipulate arrays of objects instead of arrays of pointers to those objects. You should definitely not do the former here.
I would suggest using standard containers like std::vector (or std::array) to manage the objects. With those, and provided you have instantiated the objects on the heap (read: with new) and store pointers to them, you won't have to worry about the value of this.
Related
I've had some experience with C++ from schoolwork. I learned, among other things, that objects should be stored in a container (vector, map, etc.) as pointers. The main reason given was that we need the new operator, along with a copy constructor, in order to create a copy of the object on the heap (otherwise called dynamic memory). This method also necessitates defining a destructor.
However, from what I've read since then, it seems that STL containers already store the values they contain on the heap. Thus, if I were to store my objects as values, a copy (using the copy constructor) would be made on the heap anyway, and there would be no need to define a destructor. All in all, a copy on the heap would be made anyway?
Also, if that is true, then the only other reason I can think of for storing objects via pointers would be to reduce the cost of copying the container, as pointers are cheaper to copy than whole objects. However, this would require using std::shared_ptr instead of regular pointers, since you don't want elements in the copied container to be deleted when the original container is destroyed. This method would also remove the need for defining a destructor, wouldn't it?
Edit : The destructor to be defined would be for the class using the container, not for the class of the objects stored.
Edit 2 : I guess a more precise question would be: "Does it make a difference, from a memory and resource usage standpoint, to store objects as pointers using the new operator, as opposed to storing them as plain values?"
The main reason to avoid storing full objects in containers (rather than pointers) is that copying or moving those objects is expensive. In that case, the recommended alternative is to store smart pointers in the container.
So...
vector<something_t> ................. Usually perfectly OK
vector<shared_ptr<something_t>> ..... Preferred if you want pointers
vector<something_t*> ................ Usually best avoided
The problem with raw pointers is that, when a raw pointer disappears, the object it points to hangs around causing memory and resource leaks - unless you've explicitly deleted it. C++ doesn't have garbage collection, and when a pointer is discarded, there's no way to know if other pointers may still be pointing to that object.
Raw pointers are a low-level tool - mostly used to write libraries such as vector and shared_ptr. Smart pointers are a high-level tool.
However, particularly with C++11 move semantics, the cost of moving items around in a vector is normally very small even for huge objects. For example, a vector<string> is fine even if all the strings are megabytes long. You mostly worry about the cost of moving objects if sizeof(classname) is big - if the object holds lots of data inside itself rather than in separate heap-allocated memory.
Even then, you don't always worry about the cost of moving objects. It doesn't matter that moving an object is expensive if you never move it. For example, a map doesn't need to move items around much. When you insert and delete items, the nodes (and contained items) stay where they are, it's just the pointers that link the nodes that change.
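A quick illustration of that distinction (the Block type is just a made-up example of an object that keeps its data inside itself):

#include <array>
#include <string>
#include <vector>

// Cheap to move: the megabytes live in heap memory owned by each string,
// so a reallocation only moves small string "headers" around.
std::vector<std::string> lines;

// Potentially expensive: sizeof(Block) is about 1 MB because the data lives
// inside the object itself, so every reallocation copies 1 MB per element.
struct Block { std::array<char, 1024 * 1024> bytes; };
std::vector<Block> blocks;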
I am trying to figure out how to manage items in my program. I want a unified inventory system that knows where every item is, and container objects that actually hold the inventoried items, with everything locatable by container id.
My thought was to hold the items inside the containers in boost::ptr_vectors, and then hold a pointer (probably the same one) in a hash table inside the inventory object. Shifting things around in the inventory is then easy, just changing a single value, and moving something from container X to container Y is just a matter of removing it from one pointer vector and passing it to the other container, or doing all the work in one container or another.
The problem I am having trouble with is when it comes time to get everything out. I have only ever dealt with a pointer/object held in one place at a time, not several, and I know that deleting something that has already been deleted will cause a crash at the very least. The first thing that comes to mind is to remove all the references from the containers while keeping them in the inventory, and then step through and delete the inventory. Is that feasible, or am I not thinking about this right and need to reconsider? And what if I only need to remove a single thing (and keep the rest)?
I am concerned about invalidating the pointers in either case.
boost::ptr_vector assumes ownership of an object when you pass it in. When you remove an object from the vector it is automatically deleted. You can, however, remove items without deleting them by using the built-in auto_type (see the farm yard example in the documentation for usage).
This means that you should really only have an item in one ptr_vector at a time. However your idea of having items in a ptr_vector and then having the ptr_vector be owned by your inventory object (which is another ptr_vector) should work. I have never done that but it should be fine.
In order to delete a single object you just look it up using a container_id and an item_id, and then remove it from the item-level ptr_vector. In order to delete a container, just remove it from the inventory. It will destruct anything that it contains at that point.
If you want to remove them without deletion, use auto_type to take them out safely. You can then release the raw pointer from the auto_type and do with it as you please.
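A sketch of that pattern, along the lines of the release()/auto_type usage in the Boost tutorial (the Item type is just a placeholder):

#include <boost/ptr_container/ptr_vector.hpp>

struct Item { /* ... */ };

void example() {
    boost::ptr_vector<Item> container;   // owns the Items it holds
    container.push_back(new Item);       // ownership passes to the vector

    // erase() would delete the Item; release() instead hands
    // ownership back to the caller through auto_type.
    boost::ptr_vector<Item>::auto_type taken = container.release(container.begin());

    // Give it to another owning container (or call taken.release() for a raw pointer).
    boost::ptr_vector<Item> other;
    other.push_back(taken.release());
}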
There are a few ways to do this.
1) Use boost::shared_ptr or std::shared_ptr with standard vectors. That way you do not have to worry about deleting anything, and memory is freed incrementally whenever no pointer references an object any more. This can be useful if you need to destroy specific objects frequently, and it requires the least amount of code. The downside is storage overhead if most objects are never destroyed apart from the rest and you have a lot of objects. Also, depending on where you remove the object, it can still reside in the inventory without a container, or vice versa.
2) Give the containers responsibility for destruction and for de-indexing from the inventories, by deriving from them or wrapping them in another class. Since, as you describe, each object's pointer appears in only one container, only inventories may hold extra pointers. Hence, whenever an object is to be removed, the container has to look up the inventories and remove the pointer to the object it is about to destroy. The overhead is the bookkeeping in the inventories.
3) Use a memory pool: a container that stores pointers to all objects and is responsible for destroying them when it is itself destroyed (sketched below). The memory pool can be a boost::ptr_vector while the other containers are plain std::vectors. This is effective for frequently adding and querying objects in the system. The downside is that the memory pool must outlive your containers and inventories; otherwise it has to do bookkeeping as in (2) if individual objects need to be destroyed apart from the rest.
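For option (3), a rough sketch of the shape it could take (all the type and member names here are hypothetical):

#include <boost/ptr_container/ptr_vector.hpp>
#include <map>
#include <vector>

struct Item { int id; };

struct World {
    // Declared first so it is destroyed last: the pool must outlive the views.
    boost::ptr_vector<Item> pool;                 // owns every Item, deletes them on destruction
    std::vector<std::vector<Item*>> containers;   // non-owning views, indexed by container id
    std::map<int, int> inventory;                 // item id -> container id

    Item* create(int itemId, int containerId) {
        pool.push_back(new Item{itemId});         // ownership stays with the pool
        Item* p = &pool.back();
        containers[containerId].push_back(p);
        inventory[itemId] = containerId;
        return p;
    }
};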
Here is what I'm trying to do. I have a std::vector with a certain number of elements, it can grow but not shrink. The thing is that its sort of cell based so there may not be anything at that position. Instead of creating an empty object and wasting memory, I thought of instead just NULLing that cell in the std::vector. The issue is that how do I get pointers in there without needing to manage my memory? How can I take advantage of not having to do new and keep track of the pointers?
How large are the objects and how sparse do you anticipate the vector will be? If the objects are not large or if there aren't many holes, the cost of having a few "empty" objects may be lower than the cost of having to dynamically allocate your objects and manage pointers to them.
That said, if you do want to store pointers in the vector, you'll want to use a vector of smart pointers (e.g., a vector<shared_ptr<T>>) or a container designed to own pointers (e.g., Boost's ptr_vector<T>).
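For example (Cell is just a stand-in for whatever you're storing):

#include <memory>
#include <vector>

struct Cell { /* ... */ };

std::vector<std::shared_ptr<Cell>> grid(100);   // every slot starts out null, i.e. "empty"

void example() {
    grid[42] = std::make_shared<Cell>();         // fill one cell; no manual delete anywhere
    if (grid[42]) {
        // the cell exists, safe to use *grid[42]
    }
}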
If you're going to use pointers something will need to manage the memory.
It sounds like the best solution for you would be to use boost::optional. I believe it has exactly the semantics that you are looking for. (http://www.boost.org/doc/libs/1_39_0/libs/optional/doc/html/index.html).
Actually, after I wrote this, I realized that your use case (e.g. an expensive default constructor) is covered in the boost::optional docs: http://www.boost.org/doc/libs/1_39_0/libs/optional/doc/html/boost_optional/examples.html#boost_optional.examples.bypassing_expensive_unnecessary_default_construction
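Roughly, the optional-based version might look like this (Cell is a made-up type with no default constructor):

#include <boost/optional.hpp>
#include <vector>

struct Cell {
    explicit Cell(int v) : value(v) {}   // note: no default constructor required
    int value;
};

std::vector<boost::optional<Cell>> grid(100);   // every slot starts out empty

void example() {
    grid[7] = Cell(3);          // the object lives inside the optional; no new/delete involved
    if (grid[7]) {              // test for emptiness, much like a pointer
        int v = grid[7]->value;
        (void)v;
    }
}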
You can use a deque to hold an ever-increasing number of objects, and use your vector to hold pointers to the objects. The deque won't invalidate pointers to existing objects it holds if you only add new objects to the end of it. This is far less overhead than allocating each object separately. Just ensure that the deque is destroyed after or at the same time as the vector so you don't create dangling pointers.
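A sketch of that arrangement (Cell and the sizes are illustrative):

#include <cstddef>
#include <deque>
#include <vector>

struct Cell { /* ... */ };

std::deque<Cell> storage;               // owns the objects
std::vector<Cell*> grid(100, nullptr);  // sparse view; null means "nothing at this position"

void fill(std::size_t index) {
    storage.push_back(Cell());
    grid[index] = &storage.back();      // safe: growing a deque at the end doesn't move existing elements
}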
However, based on the size of the 3-D array you mentioned in another answer's comment, you may have difficulty storing that many pointers. You might want to look into a sparse array implementation so that you mainly use memory for the portions of the array where you have non-null pointers.
You could use a smart pointer. For example boost::shared_ptr.
The issue is that how do I get pointers in there without needing to manage my memory?
You can certainly do this using shared_ptr or the other techniques mentioned here. But in the near future you will come across some problem where you will have to manage your own memory, so please get comfortable with the pointer concept.
In big servers, the memory management of the objects themselves is often treated as a responsibility in its own right, and a class is created specifically for that purpose. This is known as a pool. Whenever you need an object you ask the pool for one, and when you are done with it you tell the pool so. It is then the pool's responsibility to decide what to do with that object.
The basic idea is that your main program still deals with pointers but does not care about the memory; some other object takes care of that.
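A deliberately simplified sketch of the idea (a real pool would also track objects that are still checked out, reset reused objects, and so on):

#include <cstddef>
#include <vector>

template <typename T>
class Pool {
public:
    // Hand out an object: reuse a returned one if possible, otherwise grow.
    T* acquire() {
        if (free_.empty()) return new T();
        T* p = free_.back();
        free_.pop_back();
        return p;
    }
    // Take an object back for reuse instead of deleting it.
    void release(T* p) { free_.push_back(p); }
    ~Pool() {
        for (std::size_t i = 0; i < free_.size(); ++i) delete free_[i];
    }
private:
    std::vector<T*> free_;
};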
Just a conceptual question that I've been running into. In my current project it feels like I am over-using the boost smart_ptr and ptr_container libraries. I was creating boost::ptr_vectors in many different objects and calling the transfer() method to move certain pointers from one boost::ptr_vector to another.
It is my understanding that it is important to clearly show ownership of heap allocated objects.
My question is: would it be desirable to use these boost libraries to create heap-allocated members that belong to an object, but then use normal pointers to these members (obtained via get()) when doing any processing?
For example...
A game might have a collection of Tiles that belong to it. It might make sense to create these tiles in a boost::ptr_vector. When the game is over these tiles should be automatically freed.
However, if I want to put these Tiles in a Bag object temporarily, should I create another boost::ptr_vector in the Bag and transfer the Game's Tiles to it via transfer(), or should I create a std::vector<Tile*> whose pointers reference the Tiles in the Game and pass that to the Bag?
Thanks.
Edit: I should point out that in my example the Game would have a Bag object as a member. The Bag would only be filled with Tiles the Game owns, so the Bag would not exist without the Game.
You should only use owning smart pointers and pointer containers where there's a clear transfer of ownership. It doesn't matter whether the receiving object is temporary or not - all that matters is whether it takes ownership (and, therefore, whether the previous owner relinquishes it).
If you create a temporary vector of pointers just to pass it to some other function, and the original ptr_vector still references all those objects, there's no ownership transfer, and therefore you should use plain vector for the temporary object - just as you'd use a raw pointer to pass a single object from ptr_vector to a function that takes a pointer, but doesn't store it anywhere.
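In code, that split might look roughly like this (tilesForBag() is a hypothetical helper, not anything from the question):

#include <boost/ptr_container/ptr_vector.hpp>
#include <vector>

class Tile { /* ... */ };

class Game {
public:
    boost::ptr_vector<Tile> tiles;       // the Game owns the Tiles for its whole lifetime

    // Non-owning view for the Bag: plain pointers, no ownership transfer.
    std::vector<Tile*> tilesForBag() {
        std::vector<Tile*> view;
        for (boost::ptr_vector<Tile>::iterator it = tiles.begin(); it != tiles.end(); ++it)
            view.push_back(&*it);        // ptr_vector iterators dereference to Tile&
        return view;
    }
};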
In my experience, there are three main ownership patterns that crop up. I will call them tree, DAG and graph.
The most common is a tree. The parent owns its children, which in turn own their children, and so on. auto_ptr, scoped_ptr, bare pointers and the boost ptr_x classes are what you typically see here. In my opinion, bare pointers should generally be avoided as they convey no ownership semantics at all.
The second most common is the DAG. This means you can have shared ownership. The children a parent owns may also be the children of other children the parent owns. The TR1 and boost shared_ptr template is the main actor here. Reference counting is a viable strategy when you have no cycles.
The third most common is the full graph. This means that you can have cycles. There are some strategies for breaking those cycles and returning to a DAG at the cost of some possible sources of error. These are generally represented by TR1 or boost's weak_ptr template.
The full graph that can't be broken down into a DAG using weak_ptr is a problem that can't easily be solved in C++. The only good handlers are garbage collection schemes. They are also the most general, capable of handling the other two schemes quite well as well. But their generality comes at a cost.
In my opinion, you can't overuse the ptr_x container classes or auto_ptr unless you really should be using containers of objects instead of containers of pointers. shared_ptr can be overused. Think carefully about whether or not you really need a DAG.
Of course, I think people should just be using containers of scoped_ptrs instead of the boost ptr_x classes, but that's going to have to wait for C++0x. :-(
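For the graph case, the usual shape of the weak_ptr cycle-breaking looks something like this (a minimal sketch, not tied to any classes from the question):

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

// The parent owns the child (strong edge); the child only observes the
// parent (weak edge), so reference counting can still reclaim both.
struct Node {
    boost::shared_ptr<Node> child;
    boost::weak_ptr<Node>   parent;
};

void example() {
    boost::shared_ptr<Node> root(new Node);
    root->child.reset(new Node);
    root->child->parent = root;          // back edge, but no strong reference cycle
    if (boost::shared_ptr<Node> p = root->child->parent.lock()) {
        // the parent is still alive here and can be used safely through p
    }
}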
Most likely, the solution you're looking for is
std::vector<Tile>
There's no need for pointers most of the time. The vector already takes care of memory management for the contained objects. The tiles are owned by the game, aren't they? The way to make that explicit is to put the objects themselves in the game class -- the only cases where you typically need pointers and dynamically allocated individual objects are when you need 1) polymorphism, or 2) shared ownership.
But pointers should be the exception, not the rule.
boost::ptr_vector only serves to improve the semantics of using vectors of pointers. If, in your example, the original container could be destroyed while the temporary vector of pointers is in use, then you should definitely be using a vector of shared_ptrs to prevent the objects from being deleted while still in use. If not, then a plain old vector of pointers may be appropriate. Whether you choose std::vector or boost::ptr_vector doesn't really matter, except in how nice the code that uses the vector looks.
In your game tiles example, my first instinct would be to create a vector<Tile>, not a vector<Tile*> or a ptr_vector. Then use pointers to the tiles as you please, without worrying about ownership at all, provided that the objects which hold the pointers don't outlive the vector of tiles. You've said it's only destroyed when the game ends, so that shouldn't be difficult.
Of course there may be reasons this is not possible, for instance because Tile is a polymorphic base class, and the tiles themselves are not all of the same class. But this is C++, not Java, and not every problem always needs dynamic polymorphism. But even if you do really need pointers, you can still make copies of those pointers without any ownership semantics, provided that the scope and storage duration of the objects pointed to is understood to be wider than the scope and duration of use of the pointer:
#include <vector>

class Tile;  // the example only copies Tile pointers around, so a forward declaration is enough

void runGame(const std::vector<Tile*> &v);

int main() {
    std::vector<Tile*> v;
    // fill in v, perhaps with heap objects, perhaps with stack objects.
    runGame(v);
}

void runGame(const std::vector<Tile*> &v) {
    Tile *firsttile = v[0];
    std::vector<Tile*> eventiles;
    eventiles.push_back(v[2]);
    eventiles.push_back(v[4]);
    // and so on. No need to worry about ownership,
    // just as long as I don't keep any pointers beyond return.
    // It's my caller's problem to track and free heap objects, if any.
    (void)firsttile;
}
If the Game owns the Tiles then the Game is responsible for their deletion.
It sounds like the Bag never actually owns the objects, so it should not be responsible for deleting them. Thus I would use a ptr_vector within the Game object, but a std::vector in the Bag.
Note: I would never let anybody using the Bag retrieve a pointer to a Tile from it. They should only be able to retrieve a reference to the Tile from the Bag.
If Tiles are placed in the Bag and someone steals the Bag, you lose all the Tiles. So the Tiles, although only temporarily, do belong to the Bag for a short time. You should transfer ownership here, I suppose.
But the better option would be not to mess with ownership transfer, because I don't see why it's needed in this particular case. If there's something going on behind the scenes, go ahead and make your choice.
I agree that using vector instead of jumping right into heap-allocated T is a good instinct, but I can easily see Tile being a type for which copy construction is expensive, which might make vector's growth strategy impractical. Of course, that might be best solved with a list, not a vector...
Designing a new system from scratch. I'll be using the STL to store lists and maps of certain long-live objects.
Question: Should I ensure my objects have copy constructors and store copies of objects within my STL containers, or is it generally better to manage the life & scope myself and just store the pointers to those objects in my STL containers?
I realize this is somewhat short on details, but I'm looking for the "theoretical" better answer if it exists, since I know both of these solutions are possible.
Two very obvious disadvantages to playing with pointers:
1) I must manage allocation/deallocation of these objects myself in a scope beyond the STL.
2) I cannot create a temp object on the stack and add it to my containers.
Is there anything else I'm missing?
Since people are chiming in on the efficiency of using pointers:
If you're considering a std::vector, updates are few, you often iterate over your collection, and it's a non-polymorphic type, then storing object "copies" will be more efficient since you'll get better locality of reference.
On the other hand, if updates are common, storing pointers will save the copy/relocation costs.
This really depends upon your situation.
If your objects are small, and doing a copy of the object is lightweight, then storing the data inside an stl container is straightforward and easier to manage in my opinion because you don't have to worry about lifetime management.
If your objects are large, having a default constructor doesn't make sense, or copies of objects are expensive, then storing pointers is probably the way to go.
If you decide to use pointers to objects, take a look at the Boost Pointer Container Library. This boost library wraps all the STL containers for use with dynamically allocated objects.
Each pointer container (for example ptr_vector) takes ownership of an object when it is added to the container, and manages the lifetime of those objects for you. You also access all the elements of a ptr_container by reference. This lets you do things like
#include <boost/ptr_container/ptr_vector.hpp>

class BigExpensive { ... };

// create a pointer vector
boost::ptr_vector<BigExpensive> bigVector;
bigVector.push_back( new BigExpensive( "Lexus", 57700 ) );
bigVector.push_back( new BigExpensive( "House", 15000000 ) );

// get a reference to the first element
BigExpensive& expensiveItem = bigVector[0];
expensiveItem.sell();
These classes wrap the STL containers and work with all of the STL algorithms, which is really handy.
There are also facilities for transferring ownership of a pointer in the container to the caller (via the release function in most of the containers).
If you're storing polymorphic objects you always need to use a collection of base-class pointers.
That is, if you plan on storing different derived types in your collection, you must store pointers or get eaten by the slicing demon. A sketch of the problem follows below.
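A tiny sketch of slicing in action (made-up types):

#include <vector>

struct Item {
    virtual ~Item() {}
    virtual int value() const { return 0; }
};
struct MagicItem : Item {
    virtual int value() const { return 42; }
};

void example() {
    std::vector<Item> sliced;
    sliced.push_back(MagicItem());         // only the Item part is copied in
    int a = sliced[0].value();             // 0 -- the derived behaviour is gone

    std::vector<Item*> byPointer;
    byPointer.push_back(new MagicItem);    // no slicing through a base-class pointer
    int b = byPointer[0]->value();         // 42
    delete byPointer[0];                   // or use a smart/ptr container instead of manual delete
    (void)a; (void)b;
}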
Sorry to jump in 3 years after the event, but a cautionary note here...
On my last big project, my central data structure was a set of fairly straightforward objects. About a year into the project, as the requirements evolved, I realised that the object actually needed to be polymorphic. It took a few weeks of difficult and nasty brain surgery to fix the data structure to be a set of base class pointers, and to handle all the collateral damage in object storage, casting, and so on. It took me a couple of months to convince myself that the new code was working. Incidentally, this made me think hard about how well-designed C++'s object model is.
On my current big project, my central data structure is a set of fairly straightforward objects. About a year into the project (which happens to be today), I realised that the object actually needs to be polymorphic. Back to the net, found this thread, and found Nick's link to the Boost pointer container library. This is exactly what I had to write last time to fix everything, so I'll give it a go this time around.
The moral, for me, anyway: if your spec isn't 100% cast in stone, go for pointers, and you may potentially save yourself a lot of work later.
Why not get the best of both worlds: use a container of smart pointers (such as boost::shared_ptr or std::shared_ptr). You don't have to manage the memory, and you don't have to deal with large copy operations.
Generally storing the objects directly in the STL container is best as it is simplest, most efficient, and is easiest for using the object.
If your object is non-copyable or is an abstract base type, you will need to store pointers (the easiest option is to use shared_ptr).
You seem to have a good grasp of the difference. If the objects are small and easy to copy, then by all means store them.
If not, I would think about storing smart pointers (not auto_ptr, a ref counting smart pointer) to ones you allocate on the heap. Obviously, if you opt for smart pointers, then you can't store temp stack allocated objects (as you have said).
@Torbjörn makes a good point about slicing.
Using pointers will be more efficient since the containers will be only copying pointers around instead of full objects.
There's some useful information here about STL containers and smart pointers:
Why is it wrong to use std::auto_ptr<> with standard containers?
If the objects are to be referred to elsewhere in the code, store in a vector of boost::shared_ptr. This ensures that pointers to the object will remain valid if you resize the vector.
Ie:
std::vector<boost::shared_ptr<protocol> > protocols;
...
connection c(protocols[0].get()); // pointer to protocol stays valid even if resized
If no one else stores pointers to the objects, or the list doesn't grow and shrink, just store them as plain old objects:
std::vector<protocol> protocols;
connection c(protocols[0]); // value-semantics, takes a copy of the protocol
This question has been bugging me for a while.
I lean to storing pointers, but I have some additional requirements (SWIG lua wrappers) that might not apply to you.
The most important point in this post is to test it yourself, using your objects
I did this today to test the speed of calling a member function on a collection of 10 million objects, 500 times.
The function updates x and y based on xdir and ydir (all float member variables).
I used a std::list in both cases (holding the objects directly, and holding pointers to them), and I found that storing the objects in the list is slightly faster than using pointers. On the other hand, the performance was fairly close, so it comes down to how they will be used in your application.
For reference, with -O3 on my hardware the pointers took 41 seconds to complete and the raw objects took 30 seconds to complete.
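For reference, a sketch of the shape of such a test (this is not the original benchmark; the type, the sizes and the timing method are all illustrative):

#include <cstddef>
#include <ctime>
#include <iostream>
#include <list>

struct Particle {
    float x, y, xdir, ydir;
    Particle() : x(0), y(0), xdir(1), ydir(1) {}
    void update() { x += xdir; y += ydir; }
};

int main() {
    const std::size_t count = 1000000;   // scale up towards 10 million as memory allows
    const int passes = 50;               // the original test used 500

    std::list<Particle> byValue(count);
    std::list<Particle*> byPointer;
    for (std::size_t i = 0; i < count; ++i) byPointer.push_back(new Particle);

    std::clock_t t0 = std::clock();
    for (int p = 0; p < passes; ++p)
        for (std::list<Particle>::iterator it = byValue.begin(); it != byValue.end(); ++it)
            it->update();

    std::clock_t t1 = std::clock();
    for (int p = 0; p < passes; ++p)
        for (std::list<Particle*>::iterator it = byPointer.begin(); it != byPointer.end(); ++it)
            (*it)->update();
    std::clock_t t2 = std::clock();

    std::cout << "objects:  " << double(t1 - t0) / CLOCKS_PER_SEC << "s\n"
              << "pointers: " << double(t2 - t1) / CLOCKS_PER_SEC << "s\n";

    for (std::list<Particle*>::iterator it = byPointer.begin(); it != byPointer.end(); ++it)
        delete *it;
}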