C++ Vector Memory Usage - Is it ever freed? - c++

I know that vectors double in size whenever their capacity() is exceeded. That reallocation takes time, but because it happens only rarely, push_back() still offers amortized constant time for adding elements.
What I'm wondering is... what happens when a vector shrinks so that its size() is less than half of its capacity()?
Do vectors ever relinquish the memory which they use, or is it just gone until the vector is destroyed?
It could be a lot of wasted memory if they don't shrink in size ever, but I've never heard of them having that feature.

No, it is never freed (i.e. capacity is never reduced) until destruction. A common idiom for freeing up some memory is to create a new vector of the correct size, and use that instead:
std::vector<type>(originalVector).swap(originalVector);
(inspired by "More Exceptional C++", Item #7)

If you want to make sure your vector uses as little space as possible, you can say:
std::vector<Foo>(my_vector).swap(my_vector);
You could also call the shrink_to_fit() member function, but that is just a non-binding request.

Erasing elements from a vector, or even clearing the vector entirely, does not necessarily free any of the memory associated with those elements. This is because the maximum size a vector has reached since its creation is a good estimate of its future size.
To use the shrink to fit idiom:
std::vector<int> v;
//v is swapped with its temporary copy, which is capacity optimal
std::vector<int>(v).swap(v);
Solution in C++0x
In C++0x (now C++11) some containers expose this idiom as the member function shrink_to_fit(), e.g. vector, deque, basic_string. shrink_to_fit() is a non-binding request to reduce capacity() to size().

They don't shrink in size.
It would not generally be useful to deallocate the memory.
For example, a vector could shrink now but then grow again later. It would be a waste to deallocate the extra memory (and copy all the elements into a smaller allocation) only to reallocate it later.
If a vector is shrinking, there is also a good chance it will be shrunk to empty and then destroyed. In that case, deallocating as it shrinks would be a waste of time as well.
It can also be a useful optimization to reuse a vector object so as to avoid allocating/deallocating the contents. That would be ruined by introducing deallocations as it shrinks.
Basically, most of the time deallocating the extra memory isn't worth it. In the few cases where it is, you can specifically ask for deallocation.

The memory is gone until the vector is destroyed.

Related

What does 'compacting memory' mean when removing items from the front of a std::vector?

Remove first N elements from a std::vector
This question talks about removing items from a vector and 'compacting memory'. What is 'compacting memory' and why is it important here?
Inside the implementation of the std::vector class is some code that dynamically allocates an array of data-elements. Often not all of the elements in this internal array will be in use -- the array is often allocated to be bigger than what is currently needed, in order to avoid having to reallocate a bigger array too often (array-reallocations are expensive!).
Similarly, when items are removed from the std::vector, the internal data-array is not immediately reallocated to be smaller (because doing so would be expensive); rather, the now-extra slots in the array are left "empty" in the expectation that the calling code might want to re-use them in the near future.
However, those empty slots still take up RAM, so if the calling code has just removed a lot of items from the vector, it might want to force the vector to reallocate a smaller internal array that doesn't contain so many empty slots. That is what they are referring to as compacting in that question.
The OP is talking about shrinking the memory the vector takes up. When you erase elements from a vector, its size decreases but its capacity (the memory it is using) remains the same. When the OP says
(that also compacts memory)
They want the removal of the elements to also shrink the capacity of the vector so it reduces its memory consumption.
It means that the vector shouldn't use more memory than it needs to. In other words, the OP wants:
size() == capacity()
This can be achieved in C++11 and later by calling shrink_to_fit() on a vector. This is only a request though, it is not binding.
To make sure it actually compacts the memory, you can create a new vector from the old one's elements (the fresh copy allocates only what it needs) and swap it with the original.

Does resize() to a smaller size discard the reservation made by earlier reserve()?

So if I reserve(100) first, add some elements, and then resize(0) (or any other number smaller than the current size), will the vector reallocate memory to a take less space than 100 elements?
vector<T>::resize(0) should not cause a reallocation or deletion of allocated memory, and for this reason in most cases is preferable to vector<T>::clear().
For more detail see answers to this question: std::vector resize downward
Doing a vector::resize(0), or resizing to any count smaller than the current one, should not deallocate any memory. It will, however, destroy the removed elements.
For a reference on std::vector::resize, take a look at std::vector::resize

Is there an STL Allocator that will not implicitly free memory?

Memory usage in my STL containers is projected to be volatile - that is to say it will frequently shrink and grow. I'm thinking to account for this by specifying an allocator to the STL container type declarations. I understand that pool allocators are meant to handle this type of situation, but my concern is that the volatility will be more than the pool accounts for, and to overcome it I would have to do a lot of testing to determine good pool metrics.
My ideal allocator would never implicitly release memory, and in fact is perfectly acceptable if memory is only ever released upon destruction of the allocator. A member function to explicitly release unused memory would be nice, but not necessary. I know that what I'm referring to sounds like a per-object allocator and this violates the standard. I'd rather stick with the standard, but will abandon it if I can't resolve this within it.
I am less concerned with initial performance and more with average performance. Put another way, it matters less whether a single element or a pool of them is allocated at a time, and more whether said allocation results in a call to new/malloc. I have no problem writing my own allocator, but does anyone know of a preexisting one that accomplishes this? If it makes a difference, this will be for contiguous memory containers (e.g. vector, deque), although a generalized solution would be nice.
I hope this isn't too basic.
Memory allocation (and freeing) is driven by adding items, not by removing them.
I believe that never "freeing" memory isn't possible unless you know the maximum number of elements allowed by your application. The CRT might try to allocate a larger block of memory in place, but how would you handle the failure cases?
Explanation:
A dynamically expanding vector keeps a capacity larger than its size, so that most push_backs are cheap, and reallocates when that capacity is exhausted.
During the reallocation, a new, larger piece of memory is "newed" up and the elements of the old piece of memory are copied into the new one.
It is important that you don't hold any iterators while you push_back elements, because the reallocation will invalidate the memory locations the iterators point to.
In C++11 and TR1, you may have move semantics, where only each element's internal pointers need to be transferred. This is done via a move constructor instead of a copy constructor.
However, you seem to want to avoid reallocation as much as possible.
With std::vector you can specify an initial capacity via reserve().
Capacity is the memory allocated; size is the number of elements.
Memory will only be allocated at construction and when the size reaches the capacity. This should only happen inside a push_back().
The implementation will grow the capacity by a multiple (e.g. 1.5 or 2.0), so that reallocation happens rarely enough that a loop of push_backs runs in amortized linear time. If you know the number of elements ahead of time, you can allocate once.
There are pool concepts you can explore. The idea with a pool is that you are not destroying elements but deactivating them.
If you still want to write your own allocator, this is a good article.
custom allocators

Why not resize and clear works in GotW 54?

Referring to the article GotW #54 by Herb Sutter, he explains The Right Way To "Shrink-To-Fit" a vector or deque and The Right Way to Completely Clear a vector or deque.
Can we just use container.resize() and container.clear() for the above tasks, or am I missing something?
There are two different things that a vector tracks: size vs. capacity. If you just resize the vector, there is no guarantee that the capacity (how much memory is reserved) will change. resize is an operation concerned with how much you are using, not with how much the vector has reserved.
So for example.
size == how much you are using
capacity == how much memory is reserved
vector<int> v(10);
v.resize(5); // size == 5, but capacity may (or may not) change
v.clear();   // size == 0, but capacity may (or may not) change
In the end, capacity should not change on every operation, because that would bring a lot of memory allocation/deallocation overhead. Sutter's point is that if you need to actually release the memory a vector has reserved, you have to ask for it explicitly, e.g. via the swap trick.
Neither resize() nor clear() work. The .capacity() of a vector is guaranteed to be at least as big as the current size() of the vector, and guaranteed to be at least as big as the reserve()d capacity. Also, this .capacity() doesn't shrink, so it is also at least as big as any previous size() or reserve()ation.
Now, the .capacity() of a vector is merely the memory it has reserved. Often not all of that memory contains objects. Resizing removes objects, but doesn't recycle the memory. A vector can only recycle its memory buffer when allocating a larger buffer.
The swap trick works by copying all objects to a smaller, more appropriately sized memory buffer. Afterwards, the original memory buffer can be recycled. This appears to violate the previous statement that the memory buffer of a vector can only grow. However, with the swap trick, you temporarily have two vectors.
The vector has size and capacity. It may hold X elements but have uninitialized memory in store for Y elements more. In a typical implementation erase, resize (when resizing to smaller size) and clear don't affect the capacity: the vector keeps the memory around for itself, should you want to add new items to it at a later time.

shrinking a vector

I've got a problem with my terrain engine (using DirectX).
I'm using a vector to hold the vertices of a detail block.
When the block increases in detail, so does the vector.
BUT, when the block decreases its detail, the vector doesn't shrink in size.
So, my question: is there a way to shrink the size of a vector?
I did try this:
vertexvector.reserve(16);
If you pop elements from a vector, it does not free memory (because freeing would mean reallocating, which invalidates pointers and iterators into the remaining elements). You can copy the vector to a new vector, and then swap that with the original. That will then make it not waste space. The swap has constant time complexity, because a swap must not invalidate iterators to elements of the vectors swapped: so it just has to exchange the internal buffer pointers.
vector<vertex>(a).swap(a);
It is known as the "Shrink-to-fit" idiom. Incidentally, the next C++ version includes a "shrink_to_fit()" member function for std::vector.
The usual trick is to swap with a temporary copy:
vector<vertex>(vertexvector.begin(), vertexvector.end()).swap(vertexvector);
The reserved memory is not reduced when the vector size is reduced because it is generally better for performance. Shrinking the amount of memory reserved by the vector is as expensive as increasing the size of the vector beyond the reserved size, in that it requires:
Ask the allocator for a new, smaller memory location,
Copy the contents from the old location, and
Tell the allocator to free the old memory location.
In some cases, the allocator can resize an allocation in-place, but it's by no means guaranteed.
If you have had a very large change in the size required, and you know that you won't want the vector to expand again (the principle of locality suggests you will, but of course there are exceptions), then you can use litb's suggested swap operation to explicitly shrink your vector:
vector<vertex>(a).swap(a);
There is a member function for this, shrink_to_fit. It's more efficient than most other methods, since it will only allocate new memory and copy if there is a need. The details are discussed here:
Is shrink_to_fit the proper way of reducing the capacity a `std::vector` to its size?
If you don't mind the C library allocation functions, realloc can be even more efficient: on a shrink it typically won't copy the data, just mark the extra memory as free, and if you grow the allocation and there is free memory after it, it can mark the needed memory as used without copying either. Be careful, though: you are moving out of C++ STL containers into C void pointers, and you need to understand how pointers and memory management work. This is frowned upon by many nowadays as a source of bugs and memory leaks.