How does a deque have amortized constant time complexity? - C++

I read here from the accepted answer that a std::deque has the following characteristics:
1- Random access - constant O(1)
2- Insertion or removal of elements at the end or beginning - amortized constant O(1)
3- Insertion or removal of elements - linear O(n)
My question is about point 2. How can a deque have an amortized constant insertion at the end or beginning?
I understand that a std::vector has amortized constant time complexity for insertions at the end. This is because a vector is contiguous and is a dynamic array, so when it runs out of memory on a push_back at the end, it allocates a whole new block of memory, copies the existing items from the old location to the new location, and then deletes the items from the old location. This operation, I understand, is amortized constant. How does this apply to a deque? How can insertions at the front and back of a deque be amortized constant? I was under the impression that they were supposed to be constant O(1). I know that a deque is composed of memory chunks.

The usual implementation of a deque is basically a vector of pointers to fixed-sized nodes.
Allocating the fixed-size node clearly has constant complexity, so that's pretty easy to handle--you just amortize the cost of allocating a single node across the number of items in that node to get a constant complexity for each.
The vector of pointers part is what's (marginally) more interesting. When we have allocated enough of the fixed-size nodes that the vector of pointers is full, we need to increase the size of the vector. Like std::vector, we need to copy its contents to the newly allocated vector, so its growth must follow a geometric (rather than arithmetic) progression. This means that although we have more pointers to copy each time, we do the copying less and less frequently, so the time devoted to copying pointers remains amortized constant per insertion.
As a side note: the "vector" part is normally treated as a circular buffer, so if you're using your deque as a queue, constantly adding to one end and removing from the other does not result in re-allocating the vector--it just means moving the head and tail pointers that keep track of which of the pointers are "active" at a given time.
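To make the layout concrete, here is a minimal sketch of a deque built this way. It is not any particular standard library's implementation; the class, the block size, and the member names are all invented for illustration:

#include <cstddef>
#include <vector>

// Illustrative only: a "vector of pointers to fixed-size blocks" layout.
template <typename T, std::size_t BLOCK_SIZE = 8>
class toy_deque {
    std::vector<T*> map_;    // pointers to heap-allocated blocks of BLOCK_SIZE
    std::size_t front_ = 0;  // offset of the first element, in element units
    std::size_t size_ = 0;

public:
    // Random access is constant time: one division picks the block,
    // one modulo picks the slot within it.
    T& operator[](std::size_t i) {
        std::size_t pos = front_ + i;
        return map_[pos / BLOCK_SIZE][pos % BLOCK_SIZE];
    }
    // push_front/push_back would allocate a new block only when the edge
    // block is full; map_ itself grows geometrically, as described above.
};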

The (profane) answer lies in containers.requirements.general, 23.2.1/2:
All of the complexity requirements in this Clause are stated solely in terms of the number of operations on the contained objects.
Reallocating the array of pointers is hence not covered by the complexity guarantee of the standard and may take arbitrarily long. As mentioned before, it likely adds an amortized constant overhead to each push_front()/push_back() call (or an equivalent modifier) in any "sane" implementation. I would not recommend using deque in RT-critical code, though. Typically, in an RT scenario, you don't want unbounded queues or stacks (which in C++ by default use deque as the underlying container), nor memory allocations that could fail, so you will most likely be using a preallocated ring buffer (e.g. Boost's circular_buffer) instead.
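To illustrate the ring-buffer alternative, here is a small sketch using boost::circular_buffer; it assumes Boost is installed, and the capacity of 4 is arbitrary:

#include <boost/circular_buffer.hpp>
#include <iostream>

int main() {
    // All storage is allocated up front: no allocations (and hence no
    // allocation failures) after construction, which suits RT code.
    boost::circular_buffer<int> buf(4);
    for (int i = 0; i < 6; ++i)
        buf.push_back(i);        // when full, the oldest element is overwritten
    for (int x : buf)
        std::cout << x << ' ';   // prints: 2 3 4 5
}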

Related

C++ default implementation of stack and queue

In C++ Primer, 5th edition, it says that the default underlying container for stack and queue is deque.
I'm wondering why they don't use list? Stack and queue don't support random access and always operate at the ends, so list seems like the most intuitive way to implement them, and deque, which supports random access (in constant time), seems unnecessary.
Could anyone explain the reason behind this implementation?
With std::list as the underlying container, each std::stack::push does a memory allocation, whereas std::deque allocates memory in chunks and can reuse its spare capacity to avoid the allocation.
With small elements the storage overhead of list nodes can also become significant. E.g. std::list<int> node size is 24 bytes (on a 64-bit system), with only 4 bytes occupied by the element - at least 83% storage overhead.
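To see where the 24 bytes come from, here is a hypothetical node layout; real implementations differ in member names and order, but the arithmetic is the same on a typical 64-bit system:

#include <cstdio>

// Hypothetical doubly-linked list node holding an int.
struct node {
    node* prev;  // 8 bytes
    node* next;  // 8 bytes
    int   value; // 4 bytes, padded to 8 for pointer alignment
};

int main() {
    std::printf("%zu\n", sizeof(node));  // typically prints 24
}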
I think the question should be asked the other way around: Why use a list if you can use an array?
The list is more complicated: more allocations, more resources (for storing pointers), and more work to do (even if it is all in constant time). On the other hand, the main property that would favor lists is not relevant to stacks and queues: constant-time insertion and deletion at arbitrary (known) positions.
The main reason is that deque is on average faster than list for front and back insertions and deletions.
deque is faster because memory is allocated in chunks, whereas list needs an allocation for each element, and allocations are costly operations.
benchmark
Let's compare the sequence containers:
std::array is right out: it doesn't change size.
std::list optimises for iterator non-invalidation, allows insertion at known positions, and lacks random access. It has O(N) space overhead, with a large constant, and it has bad cache locality.
std::forward_list is an even more crippled list, with smaller space overhead.
std::deque optimises for appending or prepending, at the expense of not being contiguous. It has O(N) space overhead, with a smaller constant, and it has mediocre cache locality.
std::vector optimises for access speed, at the expense of insertion / removal anywhere but the end. It has O(1) space overhead, and great cache locality.
So what does this mean for stack and queue?
std::stack only requires operations at one end. std::vector, std::deque, and std::list all provide the necessary operations.
std::queue requires operations at both ends. std::deque and std::list are the only candidates.
The choice of std::deque as the default is then one of consistency, as std::vector is generally better for std::stack, but inapplicable for std::queue.
Note that std::priority_queue, whilst named similarly to std::queue, is actually more akin to std::stack in requiring only modification at one end. It also benefits more from the raw access speed of std::vector while maintaining the heap invariant.
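Since the underlying container is just a template parameter, you can override the defaults yourself. A brief sketch:

#include <deque>
#include <list>
#include <queue>
#include <stack>
#include <vector>

int main() {
    std::stack<int> a;                    // defaults to std::deque<int>
    std::stack<int, std::vector<int>> b;  // often the better choice for a stack
    std::queue<int> c;                    // defaults to std::deque<int>
    std::queue<int, std::list<int>> d;    // legal, but usually slower
    std::priority_queue<int> e;           // defaults to std::vector<int>

    a.push(1); b.push(2); c.push(3); d.push(4); e.push(5);
}

Note that std::queue<int, std::vector<int>> would be ill-formed in use: std::queue::pop calls the underlying container's pop_front, which std::vector does not provide.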

Does the std::vector implementation use an internal array or linked list or other?

I've been told that std::vector has a C-style array in its internal implementation, but would that not negate the entire purpose of having a dynamic container?
So is inserting a value into a vector an O(n) operation? Or is it O(1), like in a linked list?
From the C++11 standard, in the "sequence containers" library section (emphasis mine):
23.3.6.1 Class template vector overview [vector.overview]
A vector is a sequence container that supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency.
This does not defeat the purpose of dynamic size -- part of the point of vector is that not only is it very fast to access a single element, but scanning over the vector has very good memory locality because everything is tightly packed together. In practice, having good memory locality is very important because it greatly reduces cache misses, which has a large impact on runtime. This is a major advantage of vector over list in many situations, particularly those where you need to iterate over the entire container more often than you need to add or remove elements.
The memory in a std::vector is required to be contiguous, so it's typically represented as an array.
Your question about the complexity of the operations on a std::vector is a good one - I remember wondering this myself when I first started programming. If you append an element to a std::vector, then it may have to perform a resize operation and copy over all the existing elements to a new array. This will take time O(n) in the worst case. However, the amortized cost of appending an element is O(1). By this, we mean that the total cost of any sequence of n appends to a std::vector is always O(n). The intuition behind this is that the std::vector usually overallocates space in its array, leaving a lot of free slots for elements to be inserted into without a reallocation. As a result, most of the appends will take time O(1) even though every now and then you'll have one that takes time O(n).
That said, the cost of performing an insertion elsewhere in a std::vector will be O(n), because you may have to shift everything down.
You also asked why this is, if it defeats the purpose of having a dynamic array. Even if the std::vector just acted like a managed array, it's still a win over raw arrays. The std::vector knows its size, can do bounds-checking (with at), is an actual object (unlike an array), and doesn't decay to a pointer. These extra features - coupled with the extra logic to make appends work quickly - are almost always worth it.
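You can watch the geometric growth (and the occasional O(n) reallocation) happen directly; note that the growth factor is implementation-specific, so the printed numbers will vary between standard libraries:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_cap = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);                  // usually O(1): a spare slot exists
        if (v.capacity() != last_cap) {  // a reallocation just happened: O(n),
            last_cap = v.capacity();     // but rare enough to amortize to O(1)
            std::cout << "size " << v.size()
                      << " -> capacity " << last_cap << '\n';
        }
    }
}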

Why does std::vector item deletion not reduce its capacity?

I am aware that when we insert items into a vector, its capacity can increase by a non-linear factor; in gcc, its capacity doubles. But I wonder why, when I erase items from a vector, the capacity does not reduce. I tried to find out a reason for this. It 'seems' the C++ standard does not say a word about this reduction (either to do it or not).
To my understanding, ideally, when the vector's size drops to 1/4 of its capacity on item deletion, the vector could be shrunk to 1/2 of its capacity to achieve amortized constant space allocation/deallocation complexity.
My question is: why does the C++ standard not specify a capacity-reduction policy? What were the language design goals in not specifying anything about this? Does anyone have an idea about this?
It 'seems' the C++ standard does not say a word about this reduction (either to do it or not)
This is not true, because the complexity description for vector::erase specifies exactly what operations will be performed.
From §23.3.6.5/4 [vector.modifiers]
iterator erase(const_iterator position);
iterator erase(const_iterator first, const_iterator last);
Complexity: The destructor of T is called the number of times equal to the number of the elements erased, but the move assignment operator of T is called the number of times equal to the number of elements in the vector after the erased elements.
This precludes implementations from reducing capacity because that would mean reallocation of storage and moving all remaining elements to the new memory.
And if you're asking why the standard itself doesn't specify that implementations are allowed to reduce capacity when you erase elements, then one can only guess the reasons:
It was probably considered not important enough, from a performance point of view, to have the vector spend time reallocating and moving elements when erasing.
Reducing capacity would also add an additional possibility of an exception due to a failed memory allocation.
You can attempt to reduce capacity yourself by calling vector::shrink_to_fit, but be aware that this call is non-binding, and implementations are allowed to ignore it.
Another possibility for reducing the capacity would be to move the elements into a temporary vector and swap it back into the original:
decltype(vec)(std::make_move_iterator(vec.begin()),
              std::make_move_iterator(vec.end())).swap(vec);
But even with the second method, there's nothing stopping an implementation from over-allocating storage.
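For illustration, a short sketch exercising both approaches; either one may leave extra capacity (or, in the case of shrink_to_fit, do nothing at all), depending on the implementation:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec(1000, 42);
    vec.erase(vec.begin() + 10, vec.end());
    std::cout << vec.capacity() << '\n';  // still ~1000: erase never shrinks

    vec.shrink_to_fit();                  // non-binding request
    std::cout << vec.capacity() << '\n';

    // The swap trick from above, spelled out with a copy for simplicity.
    std::vector<int>(vec.begin(), vec.end()).swap(vec);
    std::cout << vec.capacity() << '\n';
}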
Even more than the performance of moving all elements is the effect on existing iterators and pointers to elements. The behavior of erase is:
Invalidates iterators and references at or after the point of the erase.
If reallocation occurred, then all iterators, pointers, and references would become invalid. In general, keeping iterator validity is a desirable thing.
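A small demonstration of that guarantee; because erase never reallocates, addresses of elements before the erase point remain stable:

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    int* first = &v[0];       // refers to an element before the erase point
    v.erase(v.begin() + 3);   // removes the 4; no reallocation occurs
    assert(first == &v[0]);   // still valid: the buffer did not move
    assert(v.size() == 4 && v[3] == 5);
}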
The algorithm for allocating additional space as the vector grows has "constant amortized complexity" due to the notion that the total complexity (which is O(N) when a vector of N elements is created by a series of push_back() operations) can be "amortized" over the N push_back() calls--that is, the total cost is divided by N.
Even more specifically, using the algorithm that allocates twice as much space each time, the worst case is that the algorithm allocates nearly 4 times as much memory as would need to be allocated if you knew the exact size of the vector in advance. The last allocation is just slightly less than two times the size of the vector after the allocation, and the sum of all the previous allocations is slightly less than the size of the last allocation.
The total number of allocations is O(log N), and the number of deallocations (up to that point) is just one less than the number of allocations.
For a large vector, if you know its maximum size in advance, it's more efficient to reserve that space at the beginning (one allocation rather than O(log N) allocations) before inserting any data.
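A quick sketch of the difference; with reserve there is a single up-front allocation and the pushes never trigger a copy:

#include <cstddef>
#include <vector>

int main() {
    const std::size_t n = 1000000;

    std::vector<int> grown;              // O(log N) allocations as it grows
    for (std::size_t i = 0; i < n; ++i)
        grown.push_back(static_cast<int>(i));

    std::vector<int> reserved;
    reserved.reserve(n);                 // one allocation, done
    for (std::size_t i = 0; i < n; ++i)
        reserved.push_back(static_cast<int>(i));  // never reallocates
}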
If you cut the capacity in half each time the size of the vector shrank to 1/4 of the currently-allocated space--that is, if you ran the allocation algorithm in reverse--you would be re-allocating (and then deallocating) nearly as much memory as the maximum capacity of the vector, in addition to deallocating the memory block with the maximum capacity. That's a performance penalty for applications that simply wanted to erase elements of the vector until they were all gone and then delete the vector.
That is, with deallocation as well as allocation, it's better to do it all at once if you can. And with deallocation you (almost) always can.
The only beneficiary of the more complicated deallocation algorithm would be an application that makes a vector, then erases at least 3/4 of it and then keeps the remaining part in memory while proceeding to grow new vectors. And even then there would be no benefit from the complicated algorithm unless the sum of the maximum capacities of the old (but still existing) vectors and the new vectors was so large that the application started to run into limitations of virtual memory.
Why penalize all algorithms that progressively erase their vectors in order to gain this advantage in this special case?

Vector vs Deque insertion in middle

I know that deque is more efficient than vector when insertions are at the front or end, and vector is better if we have to do pointer arithmetic. But which one should we use when we have to perform insertions in the middle? And why?
You might think that a deque would have the advantage, because it stores the data broken up into blocks. However to implement operator[] in constant time requires all those blocks to be the same size. Inserting or deleting an element in the middle still requires shifting all the values on one side or the other, same as a vector. Since the vector is simpler and has better locality for caching, it should come out ahead.
The selection criterion with standard library containers is: you select a container depending upon the type of data you want to store and the type of operations you want to perform on that data.
If you want to perform large number of insertions in the middle you are much better off using a std::list.
If the choice is just between a std::deque and std::vector then there are a number of factors to consider:
Typically, there is one more indirection in the case of a deque to access the elements, so element access and iterator movement of deques are usually a bit slower.
In systems that have size limitations for blocks of memory, a deque might contain more elements because it uses more than one block of memory. Thus, max_size() might be larger for deques.
Deques provide no support for controlling the capacity and the moment of reallocation. In particular, any insertion or deletion of elements other than at the beginning or end invalidates all pointers, references, and iterators that refer to elements of the deque. However, reallocation may perform better than for vectors because, according to their typical internal structure, deques don't have to copy all elements on reallocation.
Blocks of memory might get freed when they are no longer used, so the memory size of a deque might shrink (this is not a condition imposed by the standard, but most implementations do it).
std::deque could perform better for large containers because it is typically implemented as a linked sequence of contiguous data blocks, as opposed to the single block used in a std::vector. So an insertion in the middle would result in less data being copied from one place to another, and potentially fewer reallocations.
Of course, whether that matters or not depends on the size of the containers and the cost of copying the elements stored. With C++11 move semantics, the cost of the latter is less important. But in the end, the only way of knowing is profiling with a realistic application.
Deque would still be more efficient, as it doesn't have to move half of the array every time you insert an element.
Of course, this will only really matter if you consider large numbers of elements, and even then it is advisable to run a benchmark and see which one works better in your particular case. Remember that premature optimization is the root of all evil.
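In that spirit, a minimal benchmark sketch; the element type, count, compiler, and standard library all affect the outcome, so treat the numbers as a starting point rather than a verdict:

#include <chrono>
#include <cstddef>
#include <deque>
#include <iostream>
#include <vector>

// Inserts n elements, one at a time, into the middle of the container.
template <typename Container>
long long time_middle_inserts(int n) {
    Container c;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        c.insert(c.begin() + static_cast<std::ptrdiff_t>(c.size() / 2), i);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    const int n = 100000;
    std::cout << "vector: " << time_middle_inserts<std::vector<int>>(n) << " ms\n";
    std::cout << "deque:  " << time_middle_inserts<std::deque<int>>(n) << " ms\n";
}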

Which STL container should I use when doing few inserts?

I don't know my exact numbers, but I'll try my best. I have a 10000-element deque that's populated right at the start. Then I scan through each element, and let's say every 20 elements I'll need to insert a new element. The insert would happen at the current position, and maybe one element back.
I don't exactly need to remember the position, but I also don't exactly need random access either. I'd like fast inserts. Do deque and vector have a heavy price to pay on insert? Should I use list?
My other option is to have a second deque and, as I go through each element, insert it into the other deque unless I need to do the insert I am talking about. This does need to be fast, as it's a performance-intensive app. But I am using a lot of pointers (each element is a pointer), which is upsetting me, but there isn't a way around that, so should I assume the L1 cache will always miss?
I'd start with std::vector in this case, but use a second std::vector for your mass mutations, reserve() appropriately, then swap() the vectors.
Update
It would take this general form:
#include <vector>

// t_object, aMeaningfulReserveValue, and newElement are the question's
// placeholders; substitute your own type and values.
std::vector<t_object*> source; // << source already holds 10000 elements
std::vector<t_object*> tmp;

// Reserve to minimize reallocations and frees to 1 and 1, if possible.
// If you do not swap, or have to grow more, reserving can really work against you.
tmp.reserve(aMeaningfulReserveValue);

std::size_t readPos = 0;
while (readPos < source.size()) {
    // "i scan through each element and lets every 20 elements"
    for (int i = 0; i < 20 && readPos < source.size(); ++i)
        tmp.push_back(source[readPos++]);
    // "every 20 elements i'll need to insert an new element"
    tmp.push_back(newElement);
}
// approximately 500 iterations later…
source.swap(tmp);
Borealid brought up a good point, which is measure -- execution varies dramatically depending on your std library implementations, data sizes, complexity to copy, and so on.
For raw pointers of a collection this size with my configuration, the vector mass mutation and push_back above was 7 times faster than std::list insertion. push_back was faster than vector's range insertion.
As Emile points out below, std::vector::swap() does not need to move or reallocate elements -- it can just swap out internals (provided the allocators are the same type).
First off, the answer to all performance questions is "benchmark it". Always. Now...
If you don't care about the memory overhead, and you don't need random access, but you do care about having constant-time insertions, list is probably right for you.
std::vector will have constant-time insertions at the end when it has sufficient capacity. When the capacity is exceeded, it needs a linear-time copy. deque is better because it links discrete allocations, avoiding a complete copy and letting you do constant-time insertions at the front as well. Random insertions (every 20 elements) will always be linear time.
As for cache locality, a vector is as good as you can get (contiguous memory), but you said you cared about insertions rather than lookups; in my experience, when that's the case you don't care about how hot the cache gets as you scan through to dump, so list's poor behavior doesn't much matter.
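For completeness, the list pattern being recommended here; once you hold an iterator at the insertion point, each insert is O(1) and invalidates no other iterators:

#include <iterator>
#include <list>

int main() {
    std::list<int> lst{1, 2, 3, 5, 8};
    // Reaching the position is O(n), but that mirrors the scan the question
    // already performs; the insertion itself is constant time.
    auto it = std::next(lst.begin(), 3);
    lst.insert(it, 4);  // lst is now 1 2 3 4 5 8; no iterators invalidated
}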
Lists are useful when either you frequently want to insert elements in the middle of the collection, or frequently remove them. Lists are, however, slow to read.
Vectors are very fast to read and very fast when you only want to add or remove elements at the end of the collection, but they are very slow when you insert elements in the middle. This is because all elements after the desired position must be moved by one place to make room for the new element.
Deques are typically implemented as a sequence of fixed-size blocks, which lets them be used like vectors (with random access) while also supporting fast insertion and removal at both ends.
If you don't need to insert elements in the middle of the collection (you don't care about the order), I suggest you use vector. If you can approximate the number of elements that will be introduced in the vector from the beginning, you should also use std::vector::reserve to allocate memory necessary from the beginning. The value you pass to reserve doesn't need to be exact, just approximate; if it's smaller than needed, the vector will resize automatically, when necessary.
You can go two ways: list is always an option for random-place insertions; however, as you allocate every element separately, this will have some performance implications too. The other option, inserting in place in the deque, is not good either, because you will pay linear time for every insertion. Maybe your idea of inserting into a new deque is the best here: you pay twice as much memory, but on the other hand you always insert either at the end of the second deque or one element before that. This all gives amortized constant time, and you still have the container's good caching.
The number of copies done for std::vector/deque ::insert etc. is proportional to the number of elements between the insert position and the end of the container (the number of elements that need to be shifted to make room). The worst case for a std::vector is O(N) - when you insert at the front of the container. If you're inserting M elements, the worst case is therefore O(M*N), which isn't great.
There could also be a reallocation involved if the container's capacity is exceeded. You could prevent reallocation by ensuring that sufficient space was reserve()'d up front.
Your other suggestion - copying to a second std::vector/deque container - could be better, in that it can always be organised to achieve O(N) complexity, but at the cost of temporarily storing two containers.
Using a std::list would allow you to achieve in-place O(1) inserts, but at the cost of additional memory overhead (storing the list pointers etc.) and reduced memory locality (list nodes are not allocated contiguously). You could improve the memory locality by using a pooled memory allocator (Boost.Pool, maybe?).
Overall you'd have to benchmark to really sort out which is "the fastest" approach.
Hope this helps.
If you need fast inserts in the middle but don't care about random access, vector and deque are definitely not for you: for those, every time you insert something, all elements between that one and the end have to be moved. Of the built-in containers, list is almost certainly your best bet. However, a better data structure for your scenario would probably be a VList, because it provides better cache locality; that's not provided by the C++ standard library, though. The Wikipedia page links to a C++ implementation, but from a quick look at the interface it doesn't seem to be completely STL-compatible; I don't know if this is an issue for you.
Of course, in the end the only way to be sure which is the optimal solution is to measure the performance.