The standard STL vector container has a "reserve" function to reserve uninitialized memory that can be used later to prevent reallocations.
Why doesn't the deque container have one?
Increasing the size of a std::vector can be costly. When a vector outgrows its reserved space, the entire contents of the vector must be copied (or moved) to a larger reserve.
It is specifically because std::vector resizing can be costly that vector::reserve() exists. reserve() can prepare a std::vector to anticipate reaching a certain size without exceeding its capacity.
Conversely, a deque can always add more memory without needing to relocate the existing elements. If a std::deque could reserve() memory, there would be little to no noticeable benefit.
For vector and string, reserved space prevents later insertions at the end (up to the capacity) from invalidating iterators and references to earlier elements, by ensuring that elements don't need to be copied/moved. This relocation may also be costly.
With deque and list, earlier references are never invalidated by insertions at the end, and elements aren't moved, so the need to reserve capacity does not arise.
You might think that with vector and string, reserving space also guarantees that later insertions will not throw an exception (unless a constructor throws), since there's no need to allocate memory. You might think that the same guarantee would be useful for other sequences, and hence deque::reserve would have a possible use. There is in fact no such guarantee for vector and string, although in most (all?) implementations it's true. So this is not the intended purpose of reserve.
Quoting from C++ Reference
As opposed to std::vector, the elements of a deque are not stored contiguously: typical implementations use a sequence of individually allocated fixed-size arrays.
The storage of a deque is automatically expanded and contracted as needed. Expansion of a deque is cheaper than the expansion of a std::vector because it does not involve copying of the existing elements to a new memory location.
Deque can allocate new memory anywhere it wants and just point to it, unlike vectors which require a continuous block of memory to hold all their elements.
Only vector has one. There is no need for a reserve function in deque, since elements are not stored contiguously and there is no need to reallocate and move existing elements when adding or removing elements.
reserve implies allocation of large blocks of contiguous data (like a vector). There is nothing in the deque that implies contiguous storage - it's generally implemented in chunks, closer in spirit to a list (which, you will notice, also doesn't have a 'reserve' function).
Thus, a 'reserve' function would make no sense.
There are two main approaches to container memory: containers that allocate a single contiguous chunk, like arrays and vectors, and node- or block-based containers whose elements can occupy any free location. Queue and linked-list structures belong to the second type, and they have some practical advantages, such as deleting a particular element not causing a mass memory movement, as opposed to arrays and vectors. Therefore they do not need to reserve any space beforehand; if they need more, they just take it by linking it on at the end.
If you aim for having aligned memory containers you could think about implementing something like this:
std::deque<std::vector<T>> dv; // deque of dynamically sized, contiguous vectors
using Mem = std::array<size_t, N>; // fixed-size block (a typedef'd array also works)
std::deque<Mem> dvf; // deque of fixed-size contiguous blocks. Here you can store raw bytes, adding a header to loop through, and cast using the header information and typeid...
// Templates and polymorphism can help with storing raw bytes, checking the type a pointer points to, and creating an interface to access the partially aligned memory.
Alternatively you can use a map to access the vectors instead of a deque...
Related
What is the safe, portable, idiomatic way to use the spare capacity in a std::vector?
std::vector<Foo> foos;
foos.emplace_back(1);
foos.emplace_back(2);
foos.reserve(10);
At this point, foos owns at least 8 * sizeof(Foo) uninitialized spare memory starting at foos.data() + foos.size(). It seems terribly inefficient to let that memory go to waste. How can I use this spare capacity as scratch space or for other purposes before I fill it with Foo objects by appending to foos? What is the right way to do this without invoking any undefined behavior?
How can I use the spare capacity in a std::vector?
By inserting more elements.
More convoluted answer which may be what you're looking for, but probably more complex than it's worth:
You can allocate memory yourself (std::allocator or whatever; don't forget std::unique_ptr), and implement a custom allocator that uses that piece of memory. You can then use that memory for whatever and later create a vector using the custom allocator. Note that this doesn't let you use the memory that has already been reserved for the vector; you can only use the memory before it has been acquired by the vector.
Yes, memory for unused elements is wasted, but it is unavoidable. It's by design.
The storage of the vector is handled automatically, being expanded and
contracted as needed. Vectors usually occupy more space than static
arrays, because more memory is allocated to handle future growth. This
way a vector does not need to reallocate each time an element is
inserted, but only when the additional memory is exhausted.
It's also good to know some non-standard vector variants; you may use them as replacements for std::vector in specific application scenarios.
If you need a dynamic array and the maximum capacity is known at compile time, then boost::static_vector would be a choice. (Using it may avoid the unnecessary memory waste of std::vector, since most std::vector implementations grow capacity in powers of two; if your desired capacity is 10, the actual capacity is likely to be 16, wasting space for 6 elements.)
static_vector is a hybrid between vector and array: like vector, it's
a sequence container with contiguous storage that can change in size,
along with the static allocation, low overhead, and fixed capacity of
array. static_vector is based on Adam Wulkiewicz and Andrew Hundt's
high-performance varray class.
The number of elements in a static_vector may vary dynamically up to a
fixed capacity because elements are stored within the object itself
similarly to an array. However, objects are initialized as they are
inserted into static_vector unlike C arrays or std::array which must
construct all elements on instantiation. The behavior of static_vector
enables the use of statically allocated elements in cases with complex
object lifetime requirements that would otherwise not be trivially
possible. Some other properties:
Random access to elements. Constant time insertion and removal of
elements at the end. Linear time insertion and removal of elements at
the beginning or in the middle. static_vector is well suited for use
in a buffer, the internal implementation of other classes, or use
cases where there is a fixed limit to the number of elements that must
be stored. Embedded and realtime applications where allocation either
may not be available or acceptable are a particular case where
static_vector can be beneficial.
And if the dynamic array mostly has a very small size (only some instances would be large, and with small probability), then we may consider using small_vector; a small vector will improve performance in such a scenario, and such containers are widely used in large projects.
small_vector is a vector-like container optimized for the case when it
contains few elements. It contains some preallocated elements
in-place, which allows it to avoid the use of dynamic storage
allocation when the actual number of elements is below that
preallocated threshold. small_vector is inspired by LLVM's SmallVector
container. Unlike static_vector, small_vector's capacity can grow
beyond the initial preallocated capacity.
I came across these statements:
resize(n) – Resizes the container so that it contains ‘n’ elements.
shrink_to_fit() – Reduces the capacity of the container to fit its size and destroys all elements beyond the capacity.
Is there any significant difference between these functions? They come under vectors in C++.
Vectors have two "length" attributes that mean different things:
size is the number of usable elements in the vector. It is the number of things you have stored. This is a conceptual length.
capacity is how many elements would fit into the amount of memory the vector has currently allocated.
capacity >= size must always be true, but there is no reason for them to be always equal. For example, when you remove an element, shrinking the allocation would require creating a new allocation one bucket smaller and moving the remaining contents over ("allocate, move, free").
Similarly, if capacity == size and you add an element, the vector could grow the allocation by one element (another "allocate, move, free" operation), but usually you're going to add more than one element. If the capacity needs to increase, the vector will increase its capacity by more than one element so that you can add several more elements before needing to move everything again.
With this knowledge, we can answer your question:
std::vector<T>::resize() changes the size of the array. If you resize it smaller than its current size, the excess objects are destructed. If you resize it larger than its current size, the "new" objects added at the end are value-initialized (the standard calls them "default-inserted").
std::vector<T>::shrink_to_fit() asks for the capacity to be changed to match the current size. (Implementations may or may not honor this request. They might decrease the capacity but not make it equal to the size. They might not do anything at all.) If the request is fulfilled, this will discard some or all of the unused portion of the vector's allocation. You'd typically use this when you are done building a vector and will never add another item to it. (If you know in advance how many items you will be adding, it would be better to use std::vector<T>::reserve() to tell the vector before adding any items instead of relying on shrink_to_fit doing anything.)
So you use resize() to change how much stuff is conceptually in the vector.
You use shrink_to_fit() to minimize the excess space the vector has allocated internally without changing how much stuff is conceptually in the vector.
shrink_to_fit() – Reduces the capacity of the container to fit its size and destroys all elements beyond the capacity.
That is a mischaracterization of what happens. Specifically, the "destroys all elements beyond the capacity" part is not accurate.
In C++, when dynamically allocated memory is used for objects, there are two steps:
Memory is allocated for objects.
Objects are initialized/constructed at the memory locations.
When objects in dynamically allocated memory are deleted, there are also two steps, which mirror the steps of construction but in inverse order:
Objects at the memory locations are destructed (for built-in types, this is a no-op).
Memory used by the objects is deallocated.
The memory allocated beyond the size of the container is just a buffer; those locations don't hold any properly initialized objects. It's just raw memory. shrink_to_fit() releases that additional memory, but there were no objects at those locations. Hence nothing is destroyed; only memory is deallocated.
According to the C++ Standard relative to shrink_to_fit
Effects: shrink_to_fit is a non-binding request to reduce capacity()
to size().
and relative to resize
Effects: If sz < size(), erases the last size() - sz elements from the
sequence. Otherwise, appends sz - size() default-inserted elements to
the sequence.
It is evident that the functions do different things. Moreover, the first function takes no parameters, while the second takes up to two (counting the overload with a fill value). The function shrink_to_fit does not change the size of the container, though it can reallocate memory.
Remove first N elements from a std::vector
This question talks about removing items from a vector and 'compacting memory'. What is 'compacting memory' and why is it important here?
Inside the implementation of the std::vector class is some code that dynamically allocates an array of data-elements. Often not all of the elements in this internal array will be in use -- the array is often allocated to be bigger than what is currently needed, in order to avoid having to reallocate a bigger array too often (array-reallocations are expensive!).
Similarly, when items are removed from the std::vector, the internal data-array is not immediately reallocated to be smaller (because doing so would be expensive); rather, the now-extra slots in the array are left "empty" in the expectation that the calling code might want to re-use them in the near future.
However, those empty slots still take up RAM, so if the calling code has just removed a lot of items from the vector, it might want to force the vector to reallocate a smaller internal array that doesn't contain so many empty slots. That is what they are referring to as compacting in that question.
The OP is talking about shrinking the memory the vector takes up. When you erase elements from a vector, its size decreases but its capacity (the memory it is using) remains the same. When the OP says
(that also compacts memory)
They want the removal of the elements to also shrink the capacity of the vector so it reduces its memory consumption.
It means that the vector shouldn't use more memory than it needs to. In other words, the OP wants:
size() == capacity()
This can be achieved in C++11 and later by calling shrink_to_fit() on a vector. This is only a request though, it is not binding.
To make sure it will compact memory, you can create a new vector, call reserve(oldvector.size()) on it, copy the elements over, and then swap it with the original.
I have been trying to figure out an efficient way of managing dynamic arrays which I may change occasionally but would like to randomly access and iterate over often.
I would like to be able to:
store the array in a continuous data block (reduce cache misses)
access each element individually and independently of the array handle (pointers > indices)
resize the array (dynamic)
So in order to achieve this I have been trying things out using std::vector<T>::iterator, and it worked very well, until recently, when I resized the vector (e.g. by calling push_back()) whose iterators I was storing. All the iterators became invalid, because they were pointing to stale memory.
Is there any efficient (possibly STL-)way of keeping the iterator pointers up to date? Or do I have to update each Iterator manually?
Is this whole approach even worthwhile? Should I stick with indices?
EDIT:
I have used indices before and it was OK, but I have changed my approach because it still wasn't good. I would always have to drag the entire array into scope, and an index could too easily be used with the wrong array. Also, there is no perfect way of defining a "NULL" index (none I know about).
What about updating all pointers along with a resize operation? All you would have to do is store the original vector::begin, resize the vector, and afterwards update each pointer to vector.begin() + (ptr - prevBegin). And resize operations are already something you should try to avoid.
Fully achieving all 3 of your goals is impossible. If you are fully contiguous, then you have one memory block with a finite size, and the only way to get more memory is to ask for more memory, which will not be contiguous with the memory you already have. So you have to sacrifice at least one requirement, to at least some degree:
If you are willing to partly sacrifice contiguity, you can use a std::deque. This is an array-of-arrays kind of structure. It doesn't invalidate references for, I believe, any operation that increases its size. Depending on the details of your data type, its performance is generally much closer to a contiguous array than to a linked list. Well-done but old (5-year-old) benchmarks: https://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector-list-deque.html. Another option is to write a chunking allocator, to use either with deque or another structure. This is quite a bit more work, though.
If you can use indices, then you can just use a vector
If you don't need to resize, you can still just use a vector and never resize it.
Unless you have a good reason, I would stick with indices. If your main performance bottlenecks are iteration related over a large number of elements (as your contiguity requirement implies), then this whole indexing thing should really be a non-issue. If you do have a very good reason for avoiding indices (which you haven't stated), then I would profile the deque versus the vector on the main loop operation to see how much worse the deque really does. It might be barely worse, and if neither deque nor vector work well enough for you, the next alternatives are quite a bit more work (probably involving allocators or a custom data structure).
Depending on your needs, if you can use the following data structure:
std::vector<std::unique_ptr<Foo>>
then no matter how the vector is resized, if you access your data via a Foo*, the pointer to foo will not be invalidated.
As the number of Foos you need to store in your vector changes, the vector may need to resize its internal contiguous block of memory, which means any iterators you have pointing inside the vector will be invalidated when the vector resizes.
(You can read more here on C++0x iterator invalidation rules)
However, since the object stored in the vector is a pointer to an object elsewhere in the heap, the pointed-to-object (Foo in this example), will not be invalidated.
Note that the vector owns Foo (as alluded to by std::unique_ptr<Foo>), whilst you can store a non-owning pointer to Foo by keeping a Foo* as the means of accessing your Foo data.
So long as the vector outlives your need to access Foo via your Foo*, then you will not have any lifetime issues.
So in terms of your requirements:
store the array in a continuous data block (reduce cache misses)
yes, std::vector achieves this
access each element individually and independently of the array handle (pointers > indices)
yes, store a Foo* as your means of accessing each element individually, and that remains independent of the array handle (vector::iterator)
resize the array (dynamic)
yes, std::vector achieves this, automatically, resizing for you when you need it to.
Bonus:
Using a smart pointer (in this example std::unique_ptr) in the vector means memory management is also handled automatically for you. (Just make sure you don't try to access a Foo* after the vector is destroyed.)
Edit:
It has been pointed out in the comments that storing a std::unique_ptr<Foo> in the vector violates your requirement for the objects to be stored in contiguous memory (if indeed that is what you mean by store the array in contiguous memory, as the vector's underlying array will be contiguous, but accessing the Foo objects will incur an indirection).
However, if you use a suitable allocator (eg arena allocator) for both the vector and the Foo objects, then you will have a higher chance of suffering fewer cache misses, as your Foo objects will exist near to the memory used by your vector, thereby having a higher chance of being in cache when iterating over the vector.
I've got a problem with my terrain engine (using DirectX).
I'm using a vector to hold the vertices of a detail block.
When the block increases in detail, so does the vector.
BUT, when the block decreases its detail, the vector doesn't shrink in size.
So, my question: is there a way to shrink the size of a vector?
I did try this:
vertexvector.reserve(16);
If you pop elements from a vector, it does not free memory (because that would invalidate iterators into the container elements). You can copy the vector to a new vector and then swap that with the original; that will then make it not waste space. The swap has constant time complexity, because a swap must not invalidate iterators to elements of the vectors being swapped: it has to just exchange the internal buffer pointers.
vector<vertex>(a).swap(a);
It is known as the "Shrink-to-fit" idiom. Incidentally, the next C++ version includes a "shrink_to_fit()" member function for std::vector.
The usual trick is to swap with an empty vector:
vector<vertex>(vertexvector.begin(), vertexvector.end()).swap(vertexvector);
The reserved memory is not reduced when the vector size is reduced because it is generally better for performance. Shrinking the amount of memory reserved by the vector is as expensive as increasing the size of the vector beyond the reserved size, in that it requires:
Ask the allocator for a new, smaller memory location,
Copy the contents from the old location, and
Tell the allocator to free the old memory location.
In some cases, the allocator can resize an allocation in-place, but it's by no means guaranteed.
If you have had a very large change in the size required, and you know that you won't want that vector to expand again (the principle of locality suggests you will, but of course there are exceptions), then you can use litb's suggested swap operation to explicitly shrink your vector:
vector<vertex>(a).swap(a);
There is a member function for this, shrink_to_fit. It's more efficient than most other methods, since it will only allocate new memory and copy if there is a need. The details are discussed here:
Is shrink_to_fit the proper way of reducing the capacity a `std::vector` to its size?
If you don't mind the C library allocation functions, realloc is even more efficient: it won't copy the data on a shrink, just mark the extra memory as free, and if you grow the memory and there is free memory after it, it will mark the needed memory as used and not copy either. Be careful though: you are moving out of C++ STL templates into C void pointers and need to understand how pointers and memory management work; it's frowned upon by many nowadays as a source of bugs and memory leaks.