Allocation and fragmentation on the heap - c++

I have a complex structure I need to allocate on the heap. It's made of some basic types and custom objects. Those custom objects are made out of some basic types and some other custom objects. Those custom objects are made out of some basic types and some other custom objects, et cetera, et cetera...
The way I've been doing it is storing the basic types as automatic variables while making the custom objects (smart) pointers.
But, since the main object is created as a (smart) pointer, all of this is allocated on the heap anyway, no? But every time I use another (smart) pointer, it does another allocation and fragments the memory, right?
So I shouldn't really be using pointers, save for that initial one to put it on a heap, no? All of the objects that have changeable sizes have the changeable parts stored in a map or a vector, which do allocate stuff on their own, but at this point, this is a necessity anyway since I don't know how many, if any, there will be.
Anyway, am I right to think this way?
The less the pointers are used, the better?

Use whatever makes more sense from a design point of view.
If it turns out to be too slow (it probably won't, anyway), profile and optimize the bottlenecks.

Related

For a data member, is there any difference between dynamically allocating this variable(or not) if the containing object is already in dynamic memory?

I'm starting with the assumption that, generally, it is a good idea to allocate small objects in the stack, and big objects in dynamic memory. Another assumption is that I'm possibly confused while trying to learn about memory, STL containers and smart pointers.
Consider the following example, where I have an object that is necessarily allocated in the free store through a smart pointer, and I can rely on clients getting said object from a factory, for instance. This object contains some data that is specifically allocated using an STL container, which happens to be a std::vector. In one case, this data vector itself is dynamically allocated using some smart pointer, and in the other situation I just don't use a smart pointer.
Is there any practical difference between design A and design B, described below?
Situation A:
class SomeClass {
public:
    SomeClass() { /* initialize some potentially big STL container */ }
private:
    std::vector<double> dataVector_;
};
Situation B:
class SomeOtherClass {
public:
    SomeOtherClass() { /* initialize some potentially big STL container,
                          but is it allocated in any different way? */ }
private:
    std::unique_ptr<std::vector<double>> pDataVector_;
};
Some factory functions.
std::unique_ptr<SomeClass> someClassFactory() {
    return std::make_unique<SomeClass>();
}
std::unique_ptr<SomeOtherClass> someOtherClassFactory() {
    return std::make_unique<SomeOtherClass>();
}
Use case:
int main() {
    // in my case I can reliably assume that objects themselves
    // are always going to be allocated in dynamic memory
    auto pSomeClassObject(someClassFactory());
    auto pSomeOtherClassObject(someOtherClassFactory());
    return 0;
}
I would expect that both design choices have the same outcome, but do they?
Is there any advantage or disadvantage for choosing A or B? Specifically, should I generally choose design A because it's simpler or are there more considerations? Is B morally wrong because it can dangle for a std::vector?
tl;dr: Is it wrong to have a smart pointer pointing to an STL container?
edit:
The related answers pointed to useful additional information for someone as confused as myself.
Usage of objects or pointers to objects as class members and memory allocation
and Class members that are objects - Pointers or not? C++
And changing some google keywords lead me to When vectors are allocated, do they use memory on the heap or the stack?
std::unique_ptr<std::vector<double>> is slower, takes more memory, and the only advantage is that it contains an additional possible state: "vector doesn't exist". However, if you care about that state, use boost::optional<std::vector> instead. You should almost never have a heap-allocated container, and definitely never use a unique_ptr. It actually works fine, no "dangling", it's just pointlessly slow.
Using std::unique_ptr here is just wasteful unless your goal is a compiler firewall (basically hiding the compile-time dependency on vector; but then you'd need a forward declaration, and forward-declaring standard containers is not allowed).
You're adding an indirection but, more importantly, the full contents of SomeClass turn into 3 separate memory blocks to load when accessing the contents (SomeClass merged with/containing unique_ptr's block, pointing to std::vector's block, pointing to its element array). In addition, you're paying one extra superfluous level of heap overhead.
Now you might start imagining scenarios where an indirection is helpful to the vector, like maybe you can shallow move/swap the unique_ptrs between two SomeClass instances. Yes, but vector already provides that without a unique_ptr wrapper on top. And it already has states like empty that you can reuse for some concept of validity/nilness.
Remember that variable-sized containers themselves are small objects, not big ones, pointing to potentially big blocks. vector isn't big, its dynamic contents can be. The idea of adding indirections for big objects isn't a bad rule of thumb, but vector is not a big object. With move semantics in place, it's worth thinking of it more like a little memory block pointing to a big one that can be shallow copied and swapped cheaply. Before move semantics, there were more reasons to think of something like std::vector as one indivisibly large object (though its contents were always swappable), but now it's worth thinking of it more like a little handle pointing to big, dynamic contents.
Some common reasons to introduce an indirection through something like unique_ptr are:
Abstraction & hiding. If you're trying to abstract or hide the concrete definition of some type/subtype, Foo, then this is where you need the indirection so that its handle can be captured (or potentially even used with abstraction) by those who don't know exactly what Foo is.
To allow a big, contiguous 1-block-type object to be passed around from owner to owner without invoking a copy or invalidating references/pointers (iterators included) to it or its contents.
A hasty kind of reason that's wasteful but sometimes useful in a deadline rush is to simply introduce a validity/null state to something that doesn't inherently have it.
Occasionally it's useful as an optimization to hoist out certain less frequently-accessed, larger members of an object so that its commonly-accessed elements fit more snugly (and perhaps with adjacent objects) in a cache line. There unique_ptr can let you split apart that object's memory layout while still conforming to RAII.
Now wrapping a shared_ptr on top of a standard container might have more legitimate applications if you have a container that can actually be owned (sensibly) by more than one owner. With unique_ptr, only one owner can possess the object at a time, and standard containers already let you swap and move each other's internal guts (the big, dynamic parts). So there's very little reason I can think of to wrap a standard container directly with a unique_ptr, as it's already somewhat like a smart pointer to a dynamic array (but with more functionality to work with that dynamic data, including deep copying it if desired).
And if we talk about non-standard containers, like say you're working with a third party library that provides some data structures whose contents can get very large but they fail to provide those cheap, non-invalidating move/swap semantics, then you might superficially wrap it around a unique_ptr, exchanging some creation/access/destruction overhead to get those cheap move/swap semantics back as a workaround. For the standard containers, no such workaround is needed.
I agree with @MooingDuck; I don't think using std::unique_ptr has any compelling advantages. However, I could see a use case for std::shared_ptr if the member data is very large and the class is going to support COW (copy-on-write) semantics (or any other use case where the data is shared across multiple instances).

Memory allocation of values in a std::map

I've had some experience in C++ from school works. I've learned, among other things, that objects should be stored in a container (vector, map, etc) as pointers. The main reason being that we need the use of the new-operator, along with a copy constructor, in order to create a copy on the heap (otherwise called dynamic memory) of the object. This method also necessitates defining a destructor.
However, from what I've read since then, it seems that STL containers already store the values they contain on the heap. Thus, if I were to store my objects as values, a copy (using the copy constructor) would be made on the heap anyway, and there would be no need to define a destructor. All in all, a copy on the heap would be made anyway???
Also, if that is true, then the only other reason I can think of for storing objects using pointers would be to alleviate resource needs for copying the container, as pointers are easier to copy than whole objects. However, this would require the use of std::shared_ptr instead of regular pointers, since you don't want elements in the copied container to be deleted when the original container is destroyed. This method would also alleviate the need for defining a destructor, wouldn't it?
Edit : The destructor to be defined would be for the class using the container, not for the class of the objects stored.
Edit 2 : I guess a more precise question would be : "Does it make a difference to store objects as pointers using the new-operator, as opposed to plain values, on a memory and resources used standpoint?"
The main reason to avoid storing full objects in containers (rather than pointers) is because copying or moving those objects is expensive. In that case, the recommended alternative is to store smart pointers in the container.
So...
vector<something_t> ................. Usually perfectly OK
vector<shared_ptr<something_t>> ..... Preferred if you want pointers
vector<something_t*> ................ Usually best avoided
The problem with raw pointers is that, when a raw pointer disappears, the object it points to hangs around causing memory and resource leaks - unless you've explicitly deleted it. C++ doesn't have garbage collection, and when a pointer is discarded, there's no way to know if other pointers may still be pointing to that object.
Raw pointers are a low-level tool - mostly used to write libraries such as vector and shared_ptr. Smart pointers are a high-level tool.
However, particularly with C++11 move semantics, the costs of moving items around in a vector is normally very small even for huge objects. For example, a vector<string> is fine even if all the strings are megabytes long. You mostly worry about the cost of moving objects if sizeof(classname) is big - if the object holds lots of data inside itself rather than in separate heap-allocated memory.
Even then, you don't always worry about the cost of moving objects. It doesn't matter that moving an object is expensive if you never move it. For example, a map doesn't need to move items around much. When you insert and delete items, the nodes (and contained items) stay where they are, it's just the pointers that link the nodes that change.

When to use C++ pointers in game development

I've looked at a bunch of articles, and most tell the same story: don't use pointers unless you have to. Coming from a C#/Java background where memory is all managed, I have absolutely no idea when it's appropriate to use a pointer, except for these situations:
dynamic memory (like variable-size arrays)
polymorphism
When else would I use pointers, especially in the context of gamedev?
"Don't use pointers, they're slow" doesn't make sense (at least not in C++).
It's exactly like saying, "Don't use variables, they're slow".
Did you mean, "Don't use dynamic memory allocation"?
If so: I don't think you should worry about it right now. Write the code first, then optimize later.
Or did you mean to say, "Don't use raw pointers (i.e. of type foo*)", which would require new and delete?
If so, that is good advice: You generally want smart pointers instead, like shared_ptr or unique_ptr, to manage objects, instead of dealing with raw pointers. You shouldn't need to use new and delete in most C++ code, even though that might seem natural.
But they're still pointers under the hood!
Or did you mean something else?
Addendum
Thanks to @bames53 below for pointing this out:
If passing a copy is an option (i.e. when you're passing small amounts of data, not a huge structure or an array that could be larger than a few registers, e.g. on the order of ~16 bytes), do not use pointers (or references) to pass data; pass by copy instead. It allows the compiler to optimize better that way.
The idea is that you don't want to have memory leaks, nor slow down the program while retrieving memory, so, if you pre-allocate the structures you will need, then you can use pointers to pass the structures around, as that will be faster than copying structures.
I think you are just confused as to why pointers may be considered bad. There are many places to use them; you just need to think about why you are doing it.
One primary use of pointers (although references are generally better when possible), is to be able to pass an object around without copying it. If an object contains a lot of data, copying it could be quite expensive and unnecessary. In terms of game development, you can store pointers in a container like a std::vector so you can manage objects in a game. For instance, you could have a vector that contains pointers to all enemies in the game so you can perform some "mass procedure" on them. It might be worthwhile to look into smart pointers, too, since these could make your life a lot easier (they help a great deal when it comes to preventing memory leaks).
You don't need pointers for variable size arrays, you can (and usually should) use std::vector instead. You don't need raw pointers for polymorphism either; virtual functions work with smart pointers like unique_ptr.
In modern C++ you rarely need pointers. Often they will be hidden away inside resource owning classes like vectors. To pass objects around without copying them you can use references instead. Most of the remaining uses of pointers are covered by smart pointers. Very rarely you will want a 'non-owning' pointer, in which case raw pointers are fine.
You'll learn the appropriate uses of pointers if you learn about the things you should use instead of pointers. So learn about RAII, references, and smart pointers.

A std::vector of pointers?

Here is what I'm trying to do. I have a std::vector with a certain number of elements; it can grow but not shrink. The thing is that it's sort of cell-based, so there may not be anything at a given position. Instead of creating an empty object and wasting memory, I thought of just NULLing that cell in the std::vector. The issue is: how do I get pointers in there without needing to manage my memory? How can I take advantage of not having to do new and keep track of the pointers?
How large are the objects and how sparse do you anticipate the vector will be? If the objects are not large or if there aren't many holes, the cost of having a few "empty" objects may be lower than the cost of having to dynamically allocate your objects and manage pointers to them.
That said, if you do want to store pointers in the vector, you'll want to use a vector of smart pointers (e.g., a vector<shared_ptr<T>>) or a container designed to own pointers (e.g., Boost's ptr_vector<T>).
If you're going to use pointers something will need to manage the memory.
It sounds like the best solution for you would be to use boost::optional. I believe it has exactly the semantics that you are looking for. (http://www.boost.org/doc/libs/1_39_0/libs/optional/doc/html/index.html).
Actually, after I wrote this, I realized that your use case (e.g. an expensive default constructor) is covered by the boost::optional docs: http://www.boost.org/doc/libs/1_39_0/libs/optional/doc/html/boost_optional/examples.html#boost_optional.examples.bypassing_expensive_unnecessary_default_construction
You can use a deque to hold an ever-increasing number of objects, and use your vector to hold pointers to the objects. The deque won't invalidate pointers to existing objects it holds if you only add new objects to the end of it. This is far less overhead than allocating each object separately. Just ensure that the deque is destroyed after or at the same time as the vector so you don't create dangling pointers.
However, based on the size of the 3-D array you mentioned in another answer's comment, you may have difficulty storing that many pointers. You might want to look into a sparse array implementation so that you mainly use memory for the portions of the array where you have non-null pointers.
You could use a smart pointer. For example boost::shared_ptr.
The issue is that how do I get pointers in there without needing to manage my memory?
You can certainly do this using shared_ptr or the other similar techniques mentioned here. But in the near future you will come across some problem where you will have to manage your own memory. So please get yourself comfortable with the pointer concept.
Normally, in big servers, the memory management of objects is treated as a responsibility in its own right, and a class is created especially for this purpose. This is known as a pool. Whenever you need an object you ask the pool to give you one, and whenever you are done with the object you tell the pool so. It is then the pool's responsibility to decide what to do with that object.
The basic idea is that your main program still deals with pointers but does not care about the memory; some other object takes care of that.

Should I store entire objects, or pointers to objects in containers?

Designing a new system from scratch. I'll be using the STL to store lists and maps of certain long-lived objects.
Question: Should I ensure my objects have copy constructors and store copies of objects within my STL containers, or is it generally better to manage the life & scope myself and just store the pointers to those objects in my STL containers?
I realize this is somewhat short on details, but I'm looking for the "theoretical" better answer if it exists, since I know both of these solutions are possible.
Two very obvious disadvantages to playing with pointers:
1) I must manage allocation/deallocation of these objects myself in a scope beyond the STL.
2) I cannot create a temp object on the stack and add it to my containers.
Is there anything else I'm missing?
Since people are chiming in on the efficiency of using pointers:
If you're considering using a std::vector, and if updates are few, you often iterate over your collection, and it's a non-polymorphic type, storing object "copies" will be more efficient since you'll get better locality of reference.
On the other hand, if updates are common, storing pointers will save the copy/relocation costs.
This really depends upon your situation.
If your objects are small, and doing a copy of the object is lightweight, then storing the data inside an stl container is straightforward and easier to manage in my opinion because you don't have to worry about lifetime management.
If your objects are large, and having a default constructor doesn't make sense, or copies of objects are expensive, then storing pointers is probably the way to go.
If you decide to use pointers to objects, take a look at the Boost Pointer Container Library. This boost library wraps all the STL containers for use with dynamically allocated objects.
Each pointer container (for example ptr_vector) takes ownership of an object when it is added to the container, and manages the lifetime of those objects for you. You also access all the elements in a ptr_ container by reference. This lets you do things like
class BigExpensive { ... };
// create a pointer vector
ptr_vector<BigExpensive> bigVector;
bigVector.push_back( new BigExpensive( "Lexus", 57700 ) );
bigVector.push_back( new BigExpensive( "House", 15000000 ) );
// get a reference to the first element
BigExpensive& expensiveItem = bigVector[0];
expensiveItem.sell();
These classes wrap the STL containers and work with all of the STL algorithms, which is really handy.
There are also facilities for transferring ownership of a pointer in the container to the caller (via the release function in most of the containers).
If you're storing polymorphic objects you always need to use a collection of base class pointers.
That is, if you plan on storing different derived types in your collection, you must store pointers or get eaten by the slicing demon.
Sorry to jump in 3 years after the event, but a cautionary note here...
On my last big project, my central data structure was a set of fairly straightforward objects. About a year into the project, as the requirements evolved, I realised that the object actually needed to be polymorphic. It took a few weeks of difficult and nasty brain surgery to fix the data structure to be a set of base class pointers, and to handle all the collateral damage in object storage, casting, and so on. It took me a couple of months to convince myself that the new code was working. Incidentally, this made me think hard about how well-designed C++'s object model is.
On my current big project, my central data structure is a set of fairly straightforward objects. About a year into the project (which happens to be today), I realised that the object actually needs to be polymorphic. Back to the net, found this thread, and found Nick's link to the Boost pointer container library. This is exactly what I had to write last time to fix everything, so I'll give it a go this time around.
The moral, for me, anyway: if your spec isn't 100% cast in stone, go for pointers, and you may potentially save yourself a lot of work later.
Why not get the best of both worlds: do a container of smart pointers (such as boost::shared_ptr or std::shared_ptr). You don't have to manage the memory, and you don't have to deal with large copy operations.
Generally storing the objects directly in the STL container is best as it is simplest, most efficient, and is easiest for using the object.
If your object itself has non-copyable syntax or is an abstract base type you will need to store pointers (easiest is to use shared_ptr)
You seem to have a good grasp of the difference. If the objects are small and easy to copy, then by all means store them.
If not, I would think about storing smart pointers (not auto_ptr, a ref counting smart pointer) to ones you allocate on the heap. Obviously, if you opt for smart pointers, then you can't store temp stack allocated objects (as you have said).
#Torbjörn makes a good point about slicing.
Using pointers will be more efficient since the containers will be only copying pointers around instead of full objects.
There's some useful information here about STL containers and smart pointers:
Why is it wrong to use std::auto_ptr<> with standard containers?
If the objects are to be referred to elsewhere in the code, store in a vector of boost::shared_ptr. This ensures that pointers to the object will remain valid if you resize the vector.
Ie:
std::vector<boost::shared_ptr<protocol> > protocols;
...
connection c(protocols[0].get()); // pointer to protocol stays valid even if resized
If noone else stores pointers to the objects, or the list doesn't grow and shrink, just store as plain-old objects:
std::vector<protocol> protocols;
connection c(protocols[0]); // value-semantics, takes a copy of the protocol
This question has been bugging me for a while.
I lean to storing pointers, but I have some additional requirements (SWIG lua wrappers) that might not apply to you.
The most important point in this post is to test it yourself, using your objects
I did this today to test the speed of calling a member function on a collection of 10 million objects, 500 times.
The function updates x and y based on xdir and ydir (all float member variables).
I used a std::list to hold both types of objects, and I found that storing the object in the list is slightly faster than using a pointer. On the other hand, the performance was very close, so it comes down to how they will be used in your application.
For reference, with -O3 on my hardware the pointers took 41 seconds to complete and the raw objects took 30 seconds to complete.