Does std::vector::insert reserve by definition? - c++

When calling the insert member function on a std::vector, will it reserve before "pushing back" the new items? I mean does the standard guarantee that or not?
In other words, should I do it like this:
std::vector<int> a{1,2,3,4,5};
std::vector<int> b{6,7,8,9,10};
a.insert(a.end(),b.begin(),b.end());
or like this:
std::vector<int> a{1,2,3,4,5};
std::vector<int> b{6,7,8,9,10};
a.reserve(a.size()+b.size());
a.insert(a.end(),b.begin(),b.end());
or another better approach?

Regarding the complexity of the function [link]:
Linear on the number of elements inserted (copy/move construction)
plus the number of elements after position (moving).
Additionally, if InputIterator in the range insert (3) is not at least
of a forward iterator category (i.e., just an input iterator) the new
capacity cannot be determined beforehand and the insertion incurs in
additional logarithmic complexity in size (reallocations).
Hence, there are two cases (see the sketch below):
The new capacity can be determined beforehand, so you won't need to call reserve.
The new capacity can't be determined, so a call to reserve could be useful.
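As a minimal sketch of both cases (the stream source and the count of 5 are illustrative assumptions; the capacities printed are implementation-defined):
#include <iostream>
#include <iterator>
#include <sstream>
#include <vector>

int main() {
    // Forward/random-access source range: insert can compute std::distance
    // and grow the capacity in a single allocation.
    std::vector<int> a{1, 2, 3, 4, 5};
    std::vector<int> b{6, 7, 8, 9, 10};
    a.insert(a.end(), b.begin(), b.end());
    std::cout << "size: " << a.size() << ", capacity: " << a.capacity() << '\n';

    // Pure input-iterator source: the distance cannot be known up front, so
    // reserving manually (if you happen to know the count some other way)
    // avoids repeated reallocations while the range is consumed.
    std::istringstream in("11 12 13 14 15");
    std::vector<int> c{1, 2, 3, 4, 5};
    c.reserve(c.size() + 5);  // '5' is a hypothetical known count
    c.insert(c.end(), std::istream_iterator<int>(in),
             std::istream_iterator<int>());
    std::cout << "size: " << c.size() << ", capacity: " << c.capacity() << '\n';
}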

Does std::vector::insert reserve by definition?
Not always; depends on the current capacity.
From the draft N4567, §23.3.6.5/1 ([vector.modifiers]):
Causes reallocation if the new size is greater than the old capacity.
If the allocated capacity of the vector is large enough to contain the new elements, no additional allocation is needed. So no, in that case it won't reserve memory.
If the vector capacity is not large enough, then a new block is allocated, the current contents moved/copied over and the new elements are inserted. The exact allocation algorithm is not specified, but typically it would be as used in the reserve() method.
... or another better approach?
If you are concerned about too many allocations whilst inserting elements into the vector, then calling the reserve method with the size of the number of expected elements to be added does minimise the allocations.
Does the vector call reserve before the/any insertions? I.e. does it allocate enough capacity in a single allocation?
No guarantees. How would it know the distance between the two input iterators? Given that the insert method can take an InputIterator (i.e. a single-pass iterator), it has no way of calculating the expected size. Could the method calculate the size if the iterators were something else (e.g. pointers or a RandomAccessIterator)? Yes, it could. Would it? That depends on the implementation and the optimisations that are made.

From the documentation, it seems that:
Causes reallocation if the new size() is greater than the old capacity().
Be aware also that in such a case all the iterators and references are invalidated.
It thus goes without saying that insert takes care of any reallocations itself, and if you look at those operations one at a time, it is as if a reserve of size-plus-one were performed at each step.
You can argue that a reserve call at the top of the insertion would speed things up in those cases where more than one reallocation takes place... Well, right, it could help, but it mostly depends on your actual problem.
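A rough way to see the difference is to count capacity changes while pushing back; the element count of 10000 is arbitrary and the exact number of reallocations is implementation-defined:
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> without, with;
    with.reserve(10000);                      // one up-front allocation

    std::size_t reallocations = 0;
    std::size_t last_capacity = without.capacity();
    for (int i = 0; i < 10000; ++i) {
        without.push_back(i);                 // capacity grows step by step
        if (without.capacity() != last_capacity) {
            ++reallocations;
            last_capacity = without.capacity();
        }
        with.push_back(i);                    // stays within the reserved capacity
    }
    std::cout << "reallocations without reserve: " << reallocations << '\n';
    std::cout << "reallocations with reserve:    0 (after the initial reserve)\n";
}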

Related

Is there any risk that `resize` will reduce the vector capacity?

In C++, the resize method of std::vector changes the size of the vector (constructing or destroying objects if necessary), reserve allows you to increase the capacity of a vector, and shrink_to_fit will reduce the capacity of the vector to match its size. When increasing the vector's size (whether through resize, push_back or insert), the capacity will be increased if needed (it doubles every time it needs to grow, if I am not mistaken).
Do the standards ensure that a vector will never reduce its capacity, unless the function shrink_to_fit is called? Or is it possible that a vector capacity will vary depending upon what the compiler think being wise to do?
No, the vector's capacity will not be reduced by reserve or resize.
std::vector::reserve:
If new_cap is [not greater than capacity] ... no iterators or references are invalidated.
std::vector::resize
Vector capacity is never reduced when resizing to smaller size because that would invalidate all iterators, rather than only the ones that would be invalidated by the equivalent sequence of pop_back() calls.
Relevant part of the standard
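A small check illustrating this (the exact capacity values printed are implementation-defined, and shrink_to_fit is only a non-binding request):
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000, 42);
    std::cout << "capacity after construction:   " << v.capacity() << '\n';

    v.resize(10);        // size shrinks, capacity stays the same
    std::cout << "capacity after resize(10):     " << v.capacity() << '\n';

    v.shrink_to_fit();   // non-binding request to release unused capacity
    std::cout << "capacity after shrink_to_fit:  " << v.capacity() << '\n';
}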

How does a deque have amortized constant time complexity?

I read here from the accepted answer that a std::deque has the following characteristics:
1- Random access - constant O(1)
2- Insertion or removal of elements at the end or beginning - amortized constant O(1)
3- Insertion or removal of elements - linear O(n)
My question is about point 2. How can a deque have an amortized constant insertion at the end or beginning?
I understand that a std::vector has amortized constant time complexity for insertions at the end. This is because a vector is contiguous and is a dynamic array. So when it runs out of memory for a push_back at the end, it allocates a whole new block of memory, copies the existing items from the old location to the new location and then deletes the items from the old location. This operation, I understand, is amortized constant. How does this apply to a deque? How can insertions at the top and bottom of a deque be amortized constant? I was under the impression that they were supposed to be constant O(1). I know that a deque is composed of memory chunks.
The usual implementation of a deque is basically a vector of pointers to fixed-sized nodes.
Allocating the fixed-size node clearly has constant complexity, so that's pretty easy to handle--you just amortize the cost of allocating a single node across the number of items in that node to get a constant complexity for each.
The vector of pointers part is what's (marginally) more interesting. When we have allocated enough of the fixed-size nodes that the vector of pointers is full, we need to increase the size of that vector. Like std::vector, we need to copy its contents to the newly allocated vector, so its growth must follow a geometric (rather than arithmetic) progression. This means that although we have more pointers to copy, we do the copying less and less frequently, so the total time devoted to copying pointers remains amortized constant.
As a side note: the "vector" part is normally treated as a circular buffer, so if you're using your deque as a queue, constantly adding to one end and removing from the other does not result in re-allocating the vector--it just means moving the head and tail pointers that keep track of which of the pointers are "active" at a given time.
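To make that structure concrete, here is a very rough sketch of the "vector of pointers to fixed-size nodes" idea; ToyDeque, BLOCK_SIZE and the layout are illustrative assumptions, not the actual std::deque implementation, and it only supports push_back:
#include <cstddef>
#include <vector>

// Illustrative only: a stripped-down "map of fixed-size blocks" showing why
// push_back can be amortized constant. A real deque also supports push_front,
// treats the map as a circular buffer, and manages element lifetimes properly.
template <typename T>
class ToyDeque {
    static constexpr std::size_t BLOCK_SIZE = 64;  // arbitrary block size
    std::vector<T*> blocks_;   // the "vector of pointers" (the map)
    std::size_t size_ = 0;

public:
    void push_back(const T& value) {
        if (size_ % BLOCK_SIZE == 0)                 // last block full (or none yet):
            blocks_.push_back(new T[BLOCK_SIZE]);    // allocate one fixed-size block, O(1)
        blocks_[size_ / BLOCK_SIZE][size_ % BLOCK_SIZE] = value;
        ++size_;
    }

    T& operator[](std::size_t i) {                   // random access stays O(1)
        return blocks_[i / BLOCK_SIZE][i % BLOCK_SIZE];
    }

    std::size_t size() const { return size_; }

    ~ToyDeque() {
        for (T* block : blocks_) delete[] block;
    }
};

int main() {
    ToyDeque<int> d;
    for (int i = 0; i < 200; ++i)
        d.push_back(i);                              // allocates 4 blocks of 64 ints
    return d[150] == 150 ? 0 : 1;
}
The only non-trivial cost per push_back here is the occasional blocks_.push_back, and since that inner vector grows geometrically, the argument above gives amortized constant time overall.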
The (profane) answer lies in containers.requirements.general, 23.2.1/2:
All of the complexity requirements in this Clause are stated solely in
terms of the number of operations on the contained objects.
Reallocating the array of pointers is hence not covered by the complexity guarantee of the standard and may take arbitrarily long. As mentioned before, it likely adds an amortized constant overhead to each push_front()/push_back() call (or an equivalent modifier) in any "sane" implementation. I would not recommend using deque in RT-critical code, though. Typically, in an RT scenario, you don't want unbounded queues or stacks (which in C++ by default use deque as the underlying container), nor memory allocations that could fail, so you will most likely be using a preallocated ring buffer (e.g. Boost's circular_buffer) instead.

Why does std::vector item deletion not reduce its capacity?

I am aware that when we insert items into a vector, its capacity can be increased by a non-linear factor; in gcc the capacity doubles. But I wonder why, when I erase items from a vector, the capacity is not reduced. I tried to find out a reason for this. It 'seems' the C++ standard does not say a word about this reduction (neither requiring nor forbidding it).
To my understanding, ideally, when the vector's size drops to 1/4 of its capacity on item deletion, the vector could be shrunk to 1/2 of its capacity to achieve amortized constant space allocation/deallocation complexity.
My question is: why does the C++ standard not specify a capacity reduction policy? What were the language design goals in not specifying anything about this? Does anyone have an idea?
It 'seems' the C++ standard does not say a word about this reduction (neither requiring nor forbidding it)
This is not true, because the complexity description for vector::erase specifies exactly what operations will be performed.
From §23.3.6.5/4 [vector.modifiers]
iterator erase(const_iterator position);
iterator erase(const_iterator first, const_iterator last);
Complexity: The destructor of T is called the number of times equal to the number of the elements erased, but the move assignment operator of T is called the number of times equal to the number of elements in the vector after the erased elements.
This precludes implementations from reducing capacity because that would mean reallocation of storage and moving all remaining elements to the new memory.
And if you're asking why the standard itself doesn't specify implementations are allowed to reduce capacity when you erase elements, then one can only guess the reasons.
It was probably considered not important enough from a performance point of view to have the vector spend time reallocating and moving elements when erasing.
Reducing capacity would also add an additional possibility of an exception due to a failed memory allocation.
You can attempt to reduce capacity yourself by calling vector::shrink_to_fit, but be aware that this call is non-binding, and implementations are allowed to ignore it.
Another possibility for reducing the capacity would be to move the elements into a temporary vector and swap it back into the original.
decltype(vec)(std::make_move_iterator(vec.begin()),
              std::make_move_iterator(vec.end())).swap(vec);
But even with the second method, there's nothing stopping an implementation from over allocating storage.
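A quick demonstration of both approaches (the capacities printed are implementation-dependent, and the shrink_to_fit request may be ignored):
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> vec(1000, 7);
    vec.erase(vec.begin() + 10, vec.end());     // size is now 10, capacity unchanged
    std::cout << "after erase:          " << vec.capacity() << '\n';

    std::vector<int> other = vec;               // second copy to try the swap trick on
    vec.shrink_to_fit();                        // non-binding request
    std::cout << "after shrink_to_fit:  " << vec.capacity() << '\n';

    // The swap trick: build a temporary from the elements and swap it in.
    decltype(other)(std::make_move_iterator(other.begin()),
                    std::make_move_iterator(other.end())).swap(other);
    std::cout << "after swap trick:     " << other.capacity() << '\n';
}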
Even more than the performance of moving all elements is the effect on existing iterators and pointers to elements. The behavior of erase is:
Invalidates iterators and references at or after the point of the erase.
If reallocation occurred, then all iterators, pointers, and references would become invalid. In general, keeping iterator validity is a desirable thing.
The algorithm for allocating additional space as the vector grows has "constant amortized complexity" due to the notion that the total complexity (which is O(N) when a vector of N elements is created by a series of push_back() operations) can be "amortized" over the N push_back() calls--that is, the total cost is divided by N.
Even more specifically, using the algorithm that allocates twice as much space each time, the worst case is that the algorithm allocates nearly 4 times as much memory as would need to be allocated if you knew the exact size of the vector in advance. The last allocation is just slightly less than two times the size of the vector after the allocation, and the sum of all the previous allocations is slightly less than the size of the last allocation.
The total number of allocations is O(log N), and the number of deallocations (up to that point) is just one less than the number of allocations.
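As a rough worked example of that bound (assuming a doubling policy starting from a capacity of 1, which is only one possible growth strategy):
#include <cstddef>
#include <iostream>

int main() {
    const std::size_t n = 513;   // worst case: just past a power of two
    std::size_t capacity = 0, total_allocated = 0, allocations = 0;

    for (std::size_t size = 1; size <= n; ++size) {
        if (size > capacity) {                         // simulate a reallocation
            capacity = (capacity == 0) ? 1 : capacity * 2;
            total_allocated += capacity;
            ++allocations;
        }
    }
    // For n = 513: final capacity 1024, total allocated 2047 (about 4 * 513),
    // and only 11 allocations in total.
    std::cout << "final capacity:  " << capacity << '\n'
              << "total allocated: " << total_allocated << '\n'
              << "allocations:     " << allocations << '\n';
}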
For a large vector, if you know its maximum size in advance, it's more efficient to reserve that space at the beginning (one allocation rather than O(log N) allocations)
before inserting any data.
If you cut the capacity in half each time the size of the vector shrank to 1/4 of the currently-allocated space--that is, if you ran the allocation algorithm in reverse--you would be re-allocating (and then deallocating) nearly as much memory as the maximum capacity of the vector, in addition to deallocating the memory block with the maximum capacity. That's a performance penalty for applications that simply wanted to erase elements of the vector until they were all gone and then delete the vector.
That is, with deallocation as well as allocation, it's better to do it all at once if you can. And with deallocation you (almost) always can.
The only beneficiary of the more complicated deallocation algorithm would be an application that makes a vector, then erases at least 3/4 of it and then keeps the remaining part in memory while proceeding to grow new vectors. And even then there would be no benefit from the complicated algorithm unless the sum of the maximum capacities of the old (but still existing) vectors and the new vectors was so large that the application started to run into limitations of virtual memory.
Why penalize all algorithms that progressively erase their vectors in order to gain this advantage in this special case?

C++: inserting elements at the end of a vector

I am experiencing a problem with the vector container. I am trying to improve the performance of inserting a lot of elements into one vector.
Basically I am using vector::reserve to expand my vector _children if needed:
if (_children.capacity() == _children.size())
{
    _children.reserve(_children.size() * 2);
}
and using vector::at() to insert a new element at the end of _children instead of vector::push_back():
_children.at(_children.size()) = child;
_children already has one element in it, so the new element should be inserted at position 1, and the capacity at this time is 2.
Despite this, an out_of_range error is thrown. Can someone explain to me what I misunderstood here? Is it not possible to just insert an extra element, even though the chosen position is less than the vector's capacity? I can post some more code if needed.
Thanks in advance.
/mads
Increasing the capacity doesn't increase the number of elements in the vector. It simply ensures that the vector has capacity to grow up to the required size without having to reallocate memory. I.e., you still need to call push_back().
Mind you, calling reserve() to increase capacity geometrically is a waste of effort. std::vector already does this.
This causes accesses out of bounds. Reserving memory does not affect the size of the vector.
Basically, you are doing manually what push_back does internally. Why do you think it would be any more efficient?
That's not what at() is for. at() is a checked version of [], i.e. accessing an element. But reserve() does not change the number of elements.
You should just use reserve() followed by push_back or emplace_back or insert (at the end); all those will be efficient, since they will not cause reallocations if you stay under the capacity limit.
Note that the vector already behaves exactly like you do manually: When it reaches capacity, it resizes the allocated memory to a multiple of the current size. This is mandated by the requirement that adding elements have amortized constant time complexity.
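A corrected sketch of the pattern from the question, with Child standing in for the asker's element type and the reserved count of 100 as a purely illustrative assumption:
#include <vector>

struct Child { int id; };  // stand-in for the asker's element type

int main() {
    std::vector<Child> _children{{0}};          // starts with one element, as in the question
    Child child{1};

    // Optional: only useful if the total number of additions is known ahead of time.
    _children.reserve(_children.size() + 100);  // hypothetical known count

    // push_back both constructs the element and increases size();
    // assignment through at()/operator[] only works for existing elements.
    _children.push_back(child);
}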
Neither at nor reserve increase the size of the vector (the latter increases the capacity but not the size).
Also, your attempted optimization is almost certainly redundant; you should simply push_back the elements into the array and rely on std::vector to expand its capacity in an intelligent manner.
You have to differentiate between the capacity and the size. You can only assign within size, and reserve only affects the capacity.
vector::reserve is only internally reserving space but is not constructing objects and is not changing the external size of the vector. If you use reserve you need to use push_back.
Additionally vector::at does range checking, which makes it a lot slower compared to vector::operator[].
What you are doing is trying to mimic part of the behaviour vector already implements internally. It is going to grow its capacity by a certain factor (usually around 1.5 or 2) every time it runs out of space. If you know that you are pushing back many objects and only want one reallocation, use:
vec.reserve(vec.size() + nbElementsToAdd);
If you are not adding enough elements this is potentially worse than the default behaviour of vector.
The capacity of a vector is not the number of elements it has, but the number of elements it can hold without allocating more memory. The capacity is equal to or larger than the number of elements in the vector.
In your example, _children.size() is 1, but there is no element at position 1. You can only use assignment to replace existing elements, not to add new ones. By definition, the last element is at _children.at(_children.size()-1).
The correct way is just to use push_back(), which is highly optimized, and faster than inserting at an index. If you know beforehand how many elements you want to add, you can of course use reserve() as an optimization.
It's not necessary to call reserve manually, as the vector will automatically resize the internal storage if necessary. Actually, I believe what you do in your example is similar to what the vector does internally anyway: when it reaches capacity, it reserves twice the current size.
See also http://www.cplusplus.com/reference/stl/vector/capacity/

What is a truly empty std::vector in C++?

I've got a two vectors in class A that contain other class objects B and C. I know exactly how many elements these vectors are supposed to hold at maximum. In the initializer list of class A's constructor, I initialize these vectors to their max sizes (constants).
If I understand this correctly, I now have a vector of objects of class B that have been initialized using their default constructor. Right? When I wrote this code, I thought this was the only way to deal with things. However, I've since learned about std::vector.reserve() and I'd like to achieve something different.
I'd like to allocate memory for these vectors to grow as large as possible because adding to them is controlled by user-input, so I don't want frequent resizings. However, I iterate through this vector many, many times per second and I only currently work on objects I've flagged as "active". To have to check a boolean member of class B/C on every iteration is silly. I don't want these objects to even BE there for my iterators to see when I run through this list.
Is reserving the max space ahead of time and using push_back to add a new object to the vector a solution to this?
A vector has capacity and it has size. The capacity is the number of elements for which memory has been allocated. Size is the number of elements which are actually in the vector. A vector is empty when its size is 0. So, size() returns 0 and empty() returns true. That says nothing about the capacity of the vector at that point (that would depend on things like the number of insertions and erasures that have been done to the vector since it was created). capacity() will tell you the current capacity - that is the number of elements that the vector can hold before it will have to reallocate its internal storage in order to hold more.
So, when you construct a vector, it has a certain size and a certain capacity. A default-constructed vector will have a size of zero and an implementation-defined capacity. You can insert elements into the vector freely without worrying about whether the vector is large enough - up to max_size(), max_size() being the maximum capacity/size that a vector can have on that system (typically large enough not to worry about). Each time that you insert an item into the vector, if it has sufficient capacity, then no memory allocation takes place. However, if inserting that element would exceed the capacity of the vector, then the vector's memory is internally re-allocated so that it has enough capacity to hold the new element as well as an implementation-defined number of new elements (typically, the vector will probably double in capacity), and that element is inserted into the vector. This happens without you having to worry about increasing the vector's capacity. And it happens in constant amortized time, so you don't generally need to worry about it being a performance problem.
If you do find that you're adding to a vector often enough that many reallocations occur, and it's a performance problem, then you can call reserve(), which will set the capacity to at least the given value. Typically, you'd do this when you have a very good idea of how many elements your vector is likely to hold. However, unless you know that it's going to be a performance issue, then it's probably a bad idea. It's just going to complicate your code. And constant amortized time will generally be good enough to avoid performance issues.
You can also construct a vector with a given number of default-constructed elements as you mentioned, but unless you really want those elements, then that would be a bad idea. vector is supposed to make it so that you don't have to worry about reallocating the container when you insert elements into it (like you would have to with an array), and default-constructing elements in it for the purposes of allocating memory is defeating that. If you really want to do that, use reserve(). But again, don't bother with reserve() unless you're certain that it's going to improve performance. And as was pointed out in another answer, if you're inserting elements into the vector based on user input, then odds are that the time cost of the I/O will far exceed the time cost in reallocating memory for the vector on those relatively rare occasions when it runs out of capacity.
Capacity-related functions:
capacity() // Returns the number of elements that the vector can hold
reserve() // Sets the minimum capacity of the vector.
Size-related functions:
clear() // Removes all elements from the vector.
empty() // Returns true if the vector has no elements.
resize() // Changes the size of the vector.
size() // Returns the number of items in the vector.
Yes, reserve(n) will allocate space without actually putting elements there - increasing capacity() without increasing size().
BTW, if "adding to them is controlled by user-input" means that the user hits "insert X" and you insert X into the vector, you need not worry about the overhead of resizing. Waiting for user input is many times slower than the amortized constant resizing performance.
Your question is a little confusing, so let me try to answer what I think you asked.
Let's say you have a vector<B> which you default-construct. You then call vec.reserve(100). Now, vec contains 0 elements. It's empty. vec.empty() returns true and vec.size() returns 0. Every time you call push_back, you will insert one element, and until vec contains 100 elements, there will be no reallocation.
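A compact illustration of that distinction, with B as a stand-in aggregate for the asker's class (the final capacity printed is implementation-defined):
#include <cassert>
#include <iostream>
#include <vector>

struct B { int value; };   // stand-in for the asker's class B

int main() {
    std::vector<B> vec;
    vec.reserve(100);                 // allocates storage, constructs no elements
    assert(vec.empty() && vec.size() == 0 && vec.capacity() >= 100);

    vec.push_back(B{0});
    B* first = vec.data();
    for (int i = 1; i < 100; ++i)
        vec.push_back(B{i});          // size grows to 100 with no reallocation
    assert(vec.data() == first);      // storage never moved while under capacity

    std::cout << "size: " << vec.size()
              << ", capacity: " << vec.capacity() << '\n';
}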