I am trying to figure out the fastest way to keep a constant number of elements in a vector (or maybe there is some ready-made structure that does this automatically).
In my app I am adding multiple elements to the vector, and I need to do it fast. Because of the vector's self-resizing, at some point it significantly decreases overall application speed. What I was thinking of is something like this:
if (my_vector.size() < 300)
    my_vector.push_back(new_element);
else
{
    // std::vector has no pop_front(), so erase the first element instead
    my_vector.erase(my_vector.begin());
    my_vector.push_back(new_element);
}
but after the first few tests I've realized that it might not be the best solution, because I am not sure that erasing from the front and later calling push_back() doesn't still trigger a resize at some point.
Is there any other solution for this?
Use a std::queue. Its underlying container is a std::deque by default. Like std::stack, std::queue is a container adapter, and its interface (push and pop) is specifically for FIFO operations, mapping to push_back and pop_front on the underlying container; that push-at-the-back, pop-at-the-front pattern is exactly what you're doing in your situation. Here's why a deque is better for this situation:
The storage of a deque is automatically expanded and contracted as needed. Expansion of a deque is cheaper than the expansion of a std::vector because it does not involve copying the existing elements to a new memory location.
The complexity (efficiency) of common operations on deques is as follows:
Random access - constant O(1)
Insertion or removal of elements at the end or beginning - constant O(1)
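A minimal sketch of that pattern with std::queue, using the 300-element cap from the question (int stands in for your element type):

#include <iostream>
#include <queue>

int main()
{
    std::queue<int> q; // std::deque<int> underneath
    for (int new_element = 0; new_element < 1000; ++new_element)
    {
        q.push(new_element); // push_back on the underlying deque
        if (q.size() > 300)
            q.pop();         // pop_front on the deque: no shifting of the remaining elements
    }
    std::cout << q.size() << '\n'; // 300
}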
To implement a fixed-size container with push_back and pop_front and minimal memory shuffling, use a std::array of the appropriate size as a circular buffer. To keep track of things you'll need a write index for pushing elements and a read index for popping them. To push, store the element at the location given by the write index, then increment that index and take the remainder modulo the container size. To pop, read the element at the location given by the read index, and adjust that index the same way you did the write index. With that in place, the code in the question will do what you need; a sketch follows.
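A minimal sketch of such a ring buffer, assuming overwrite-on-full semantics (the class name and the usage values are just for illustration):

#include <array>
#include <cstddef>

template <typename T, std::size_t N>
class RingBuffer
{
    std::array<T, N> data_{};
    std::size_t write_ = 0; // where the next push goes
    std::size_t read_ = 0;  // where the next pop comes from
    std::size_t size_ = 0;

public:
    void push_back(const T& value)
    {
        data_[write_] = value;
        write_ = (write_ + 1) % N;
        if (size_ == N)
            read_ = (read_ + 1) % N; // buffer full: drop the oldest element
        else
            ++size_;
    }

    T pop_front() // precondition: size() > 0
    {
        T value = data_[read_];
        read_ = (read_ + 1) % N;
        --size_;
        return value;
    }

    std::size_t size() const { return size_; }
};

// usage: RingBuffer<int, 300> buf; buf.push_back(42); int x = buf.pop_front();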
You just need to reserve the capacity up front, to a reasonable number. The vector will not automatically shrink, so it will only grow and, possibly, stop growing at some point.
You might also be interested in resize policies. For example, Facebook did substantial research and created its own vector implementation, folly::fbvector, which has better performance than std::vector.
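A minimal sketch of reserving up front, reusing the 300-element figure from the question:

#include <vector>

std::vector<int> my_vector;
my_vector.reserve(300); // one allocation up front; push_back won't reallocate until size() exceeds 300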
I've been told that std::vector has a C-style array in its internal implementation, but would that not negate the entire purpose of having a dynamic container?
So is inserting a value in a vector an O(n) operation? Or is it O(1) like in a linked-list?
From the C++11 standard, in the "sequence containers" library section:
23.3.6.1 Class template vector overview [vector.overview]
A vector is a sequence container that supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency.
This does not defeat the purpose of dynamic size -- part of the point of vector is that not only is it very fast to access a single element, but scanning over the vector has very good memory locality because everything is tightly packed together. In practice, having good memory locality is very important because it greatly reduces cache misses, which has a large impact on runtime. This is a major advantage of vector over list in many situations, particularly those where you need to iterate over the entire container more often than you need to add or remove elements.
The memory in a std::vector is required to be contiguous, so it's typically represented as an array.
Your question about the complexity of the operations on a std::vector is a good one - I remember wondering this myself when I first started programming. If you append an element to a std::vector, then it may have to perform a resize operation and copy over all the existing elements to a new array. This will take time O(n) in the worst case. However, the amortized cost of appending an element is O(1). By this, we mean that the total cost of any sequence of n appends to a std::vector is always O(n). The intuition behind this is that the std::vector usually overallocates space in its array, leaving a lot of free slots for elements to be inserted into without a reallocation. As a result, most of the appends will take time O(1) even though every now and then you'll have one that takes time O(n).
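You can watch this overallocation happen; a minimal sketch (the exact growth pattern is implementation-defined):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    std::size_t last_capacity = 0;
    for (int i = 0; i < 1000; ++i)
    {
        v.push_back(i);
        if (v.capacity() != last_capacity) // a reallocation just happened
        {
            last_capacity = v.capacity();
            std::cout << "size " << v.size() << " -> capacity " << last_capacity << '\n';
        }
    }
}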
That said, the cost of performing an insertion elsewhere in a std::vector will be O(n), because you may have to shift everything down.
You also asked why this is, if it defeats the purpose of having a dynamic array. Even if the std::vector just acted like a managed array, it's still a win over raw arrays. The std::vector knows its size, can do bounds-checking (with at), is an actual object (unlike an array), and doesn't decay to a pointer. These extra features - coupled with the extra logic to make appends work quickly - are almost always worth it.
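A small sketch of those extras in action (the values are arbitrary):

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v(5);
    std::cout << v.size() << '\n'; // a raw array doesn't carry its own length
    try {
        v.at(10) = 1; // bounds-checked access, unlike operator[]
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}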
I was looking to implement a C++ container object with the following properties:
Keeps all the elements contiguous in memory, so that it can be iterated over without causing cache misses.
Expandable, not like arrays which are of a fixed size, but much like STL vectors, which can adjust the allocated memory to accommodate as many elements as I add.
Does not reallocate elements to a new place in memory when resizing, as std::vector does. I need to keep pointers to the elements in the container, so pointers must not be invalidated when new elements are added.
Must be compatible with range-based for loops, so that the contents can be efficiently iterated through.
Is there any container out there which meets these requirements, in any external library, or do I have to implement my own in this case?
As some comments have pointed out, not all the desired properties can be implemented together. I have thought this over, and I have an implementation in mind. Since making things entirely contiguous is not possible, some discontinuity can be accommodated. For example, the container allocates space for 10 elements initially, and when that cap is reached, it allocates another chunk of memory double the size of the previous block, but doesn't copy the existing elements to the new block. Instead, it fills the new block with the new elements I put into it. This minimizes the amount of discontinuity.
So, is there a data structure which already implements that?
IMHO, the data structure closest to your need is the deque in the STL. Basically it stores chunks of contiguous memory and provides both random-access iterators and stability with respect to push_back (your elements stay in the same place, although iterators are invalidated).
The only problem with your constraints is that the memory is not contiguous everywhere, but as others commented, your set of needs is incompatible if you want to fully satisfy them all.
By the way, one sweet thing with this container is that you can also push on the front.
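A minimal sketch of that stability guarantee (push_back on a deque invalidates iterators, but pointers and references to existing elements remain valid):

#include <deque>
#include <iostream>

int main()
{
    std::deque<int> d{1, 2, 3};
    int* p = &d[0]; // pointer to an existing element

    for (int i = 0; i < 10000; ++i)
        d.push_back(i); // grows by adding new chunks; existing elements never move

    std::cout << *p << '\n'; // still valid, prints 1
}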
I'm trying to find a fast way to do input, but I've learned that using the STL for such purposes might be slow.
I have a callback that fires whenever I get Keyboard input.
It creates an object with (int _key, int _state, int _life)
Every time I receive this callback, I push_back the object to my std::vector.
Every frame I check the top of this vector and remove the "dead" input.
The vector can be polled for whatever input is valid at that moment which means it will be searched frequently.
Optimizations:
-All the memory should be contiguous, so although linked lists are better for dynamic allocation, should I stick with STL's vector? I'm always adding to the top and removing from the bottom, so what data structure should I use?
-I was thinking of having a buffer (a second vector) that continuously receives new input from the callback, then each frame copying the data from that vector to the top of my active input vector. Since the active vector will be polled, would this increase performance, since the active vector won't be wasting time getting added to during the loop?
Basically I'm trying to squeeze as much performance from this vector as possible and I could use some help.
What you are describing, adding data at one end and removing it at the other, is the archetypical description of a queue. This is implemented in the standard library with the std::queue class.
The queue class is a so-called container adapter, meaning it uses another container for the actual storage. By default it uses std::deque, but that container doesn't keep its data in a single contiguous memory area. You can declare a std::queue over another standard container, but the container must provide front(), back(), push_back() and pop_front(). Note that std::vector, the only container guaranteed to store its data contiguously, has no pop_front(), so it cannot serve as the underlying container here; std::list can. Using the default deque:
std::queue<int> my_queue; // std::queue<int, std::deque<int>>
my_queue.push(1);
my_queue.push(2);
my_queue.push(3);
while (!my_queue.empty())
{
    std::cout << my_queue.front() << '\n'; // the oldest element (std::queue has front(), not top())
    my_queue.pop(); // remove the front element of the queue
}
std::deque makes the best container here. It guarantees O(1) pop_front() and push_back(), has random access, and a good degree (although not perfect) of cache locality.
But if you absolutely must have complete contiguity, then you'll need to look into a custom circular-buffer container (there's one in Boost, IIRC), as pop_front() on a vector is rather expensive.
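The Boost container in question is boost::circular_buffer; a minimal usage sketch:

#include <boost/circular_buffer.hpp>
#include <iostream>

int main()
{
    boost::circular_buffer<int> cb(3); // fixed capacity of 3
    cb.push_back(1);
    cb.push_back(2);
    cb.push_back(3);
    cb.push_back(4); // full: silently overwrites the oldest element (1)

    for (int x : cb)
        std::cout << x << ' '; // prints: 2 3 4
}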
Edit: As another poster has pointed out, keyboard input is so infrequent, even for a very fast typist, that I find it difficult to believe it is a bottleneck in your system.
Sounds like you want a std::deque. Adjacent elements are nearly always allocated contiguously in memory, and it has constant-time insertion and removal at the beginning and the end.
Short answer: It doesn't matter. std::vector and std::list can easily handle millions of insert operations per second, but most typists don't type faster than 10 characters per second on a keyboard.
Long answer: push_back and erase are usually very cheap on a vector if the vector is small (< 100 elements) and the copy/swap operations of the stored objects are cheap. The allocations used to insert into or remove from a std::list are usually more expensive. If it becomes an issue, measure the cost.
A std::deque also performs allocations and, depending on the implementation, is likely more expensive than the vector in your case, if my assumption is correct that your vector rarely, if ever, contains more than 10 items, all of which are cheap to copy.
Let's say I have a vector V which has 10 elements.
If I erase the first element (at index 0) using v.erase(v.begin()), then how does the STL vector handle this?
Does it create a new vector, copy the elements from the old vector to the new one, and deallocate the old one? Or does it shift each element starting from index 1 one position to the left, to index - 1?
If I need a vector of size 100,000 at one point and later I don't use that much space, let's say I only need a vector of size 10, does it automatically reduce the size? (I don't think so.)
I looked online and there are only APIs and tutorials on how to use the STL library.
Are there any good references from which I can get an idea of the implementation or complexity of the STL library?
Actually, the implementation of vector is visible, since it's a template, so you can look into it for the details (the excerpt below is from one particular implementation, Microsoft's):
iterator erase(const_iterator _Where)
{ // erase element at where
if (_Where._Mycont != this
|| _Where._Myptr < _Myfirst || _Mylast <= _Where._Myptr)
_DEBUG_ERROR("vector erase iterator outside range");
_STDEXT unchecked_copy(_Where._Myptr + 1, _Mylast, _Where._Myptr);
_Destroy(_Mylast - 1, _Mylast);
_Orphan_range(_Where._Myptr, _Mylast);
--_Mylast;
return (iterator(_Where._Myptr, this));
}
Basically, the line
unchecked_copy(_Where._Myptr + 1, _Mylast, _Where._Myptr);
does exactly what you thought - copies the following elements over (or moves them in C++11 as bames53 pointed out).
To answer your second question, no, the capacity cannot decrease on its own.
The complexities of the algorithms in std can be found at http://www.cplusplus.com/reference/stl/ and the implementation, as previously stated, is visible.
Does it shift each element starting from index 1 one position to the left, to index - 1?
Yes (though it actually moves them since C++11).
does it automatically reduce the size?
No, reducing the size would typically invalidate iterators to existing elements, and that's only allowed on certain function calls.
I looked online and there are only APIs and tutorials on how to use the STL library. Are there any good references from which I can get an idea of the implementation or complexity of the STL library?
You can read the C++ specification which will tell you exactly what's allowed and what isn't in terms of implementation. You can also go look at your actual implementation.
Vector will copy (move, in C++11) the elements toward the beginning; that's why you should use deque if you would like to insert and erase at the beginning of a collection. If you want to truly resize the vector's internal buffer, you can do this:
vector<Type>(v).swap(v);
This will (hopefully) make a temporary vector with the correct size, then swap its internal buffer with the old one; the temporary then goes out of scope, and the large buffer gets deallocated with it.
As others noted, you may use vector::shrink_to_fit() in C++11.
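A minimal sketch showing both approaches (the exact capacity values are implementation-dependent):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(100000);
    v.resize(10); // the size shrinks, the capacity does not
    std::cout << v.size() << " / " << v.capacity() << '\n'; // 10 / 100000

    std::vector<int>(v).swap(v); // the swap trick (works pre-C++11)
    std::cout << v.capacity() << '\n'; // now typically 10

    v.shrink_to_fit(); // C++11: a non-binding request to reduce capacity
    std::cout << v.capacity() << '\n';
}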
That's one of my (many) objections to C++. Everybody says "use the standard libraries"... but even when you have the STL source (which is freely available from many different places, including, in this case, the header file itself!), it's basically an incomprehensible nightmare to dig into and try to understand.
The (C-only) Linux kernel is a paragon of simplicity and clarity in contrast.
But we digress :)
Here's the 10,000-foot answer to your question:
http://www.cplusplus.com/reference/stl/vector/
Vector containers are implemented as dynamic arrays; just as regular arrays, vector containers have their elements stored in contiguous storage locations, which means that their elements can be accessed not only using iterators but also using offsets on regular pointers to elements.
But unlike regular arrays, storage in vectors is handled automatically, allowing it to be expanded and contracted as needed.
Vectors are good at:
Accessing individual elements by their position index (constant time).
Iterating over the elements in any order (linear time).
Adding and removing elements from its end (constant amortized time).
Compared to arrays, they provide almost the same performance for these tasks, plus they have the ability to be easily resized, although they usually consume more memory than arrays when their capacity is handled automatically (this is in order to accommodate extra storage space for future growth).
Compared to the other base standard sequence containers (deques and lists), vectors are generally the most efficient in time for accessing elements and for adding or removing elements from the end of the sequence. For operations that involve inserting or removing elements at positions other than the end, they perform worse than deques and lists, and they have less consistent iterators and references than lists.
...
Reallocations may be a costly operation in terms of performance, since they generally involve copying the entire storage space used by the vector to a new location. You can use the member function vector::reserve to indicate a capacity for the vector beforehand. This can help optimize storage space and reduce the number of reallocations when many enlargements are planned.
...
I only need a vector of size 10, does it automatically reduce the size?
No, it doesn't automatically shrink.
Traditionally you swap the vector with a new empty one (see "reduce the capacity of an stl vector").
But C++11 includes std::vector::shrink_to_fit(), which does it directly.
I don't know my exact numbers, but I'll try my best. I have a 10,000-element deque that's populated right at the start. Then I scan through each element, and let's say every 20 elements I'll need to insert a new element. The insert would happen at the current position, and maybe one element back.
I don't exactly need to remember the position, but I also don't exactly need random access either. I'd like fast inserts. Do deque and vector have a heavy price to pay on insert? Should I use list?
My other option is to have a second deque, and as I go through each element, insert it into the other deque unless I need to do the insert I am talking about. This does need to be fast, as it's a performance-intensive app. But I am using a lot of pointers (each element is a pointer), which is upsetting me, but there isn't a way around that, so should I assume the L1 cache will always miss?
I'd start with std::vector in this case, but use a second std::vector for your mass mutations, reserve() appropriately, then swap() the vectors.
Update
It would take this general form:
std::vector<t_object*> source; // << source already holds 10000 elements
std::vector<t_object*> tmp;
// to minimize reallocations and frees to 1 and 1, if possible.
// if you do not swap or have to grow more, reserving can really work against you.
tmp.reserve(aMeaningfulReserveValue);
while (performingMassMutation) {
// "i scan through each element and lets every 20 elements"
for (twentyElements)
tmp.push_back(source[readPos++]);
// "every 20 elements i'll need to insert an new element"
tmp.push_back(newElement);
}
// approximately 500 iterations later…
source.swap(tmp);
Borealid brought up a good point, which is to measure: execution varies dramatically depending on your std library implementation, data sizes, complexity to copy, and so on.
For raw pointers of a collection this size with my configuration, the vector mass mutation and push_back above was 7 times faster than std::list insertion. push_back was faster than vector's range insertion.
As Emile points out below, std::vector::swap() does not need to move or reallocate elements -- it can just swap out internals (provided the allocators are the same type).
First off, the answer to all performance questions is "benchmark it". Always. Now...
If you don't care about the memory overhead, and you don't need random access, but you do care about having constant-time insertions, list is probably right for you.
std::vector will have constant-time insertions at the end when it has sufficient capacity. When the capacity is exceeded, it needs a linear-time copy. deque is better because it links discrete allocations, avoiding a complete copy and letting you do constant-time insertions at the front as well. Random insertions (every 20 elements) will always be linear time.
As for cache locality, a vector is as good as you can get (contiguous memory), but you said you cared about insertions rather than lookups; in my experience, when that's the case you don't care about how hot the cache gets as you scan through to dump, so list's poor behavior doesn't much matter.
Lists are useful when either you frequently want to insert elements in the middle of the collection, or frequently remove them. Lists are, however, slow to read.
Vectors are very fast to read and very fast when you only want to add or remove elements at the end of the collection, but they are very slow when you insert elements in the middle. This is because it has to move all elements after the desired position by one place, to make room for the new element.
Deques can be used much like vectors but also allow fast insertion and removal at both ends; they are typically implemented as a sequence of fixed-size memory chunks rather than as doubly linked lists.
If you don't need to insert elements in the middle of the collection (you don't care about the order), I suggest you use vector. If you can approximate the number of elements that will be introduced in the vector from the beginning, you should also use std::vector::reserve to allocate memory necessary from the beginning. The value you pass to reserve doesn't need to be exact, just approximate; if it's smaller than needed, the vector will resize automatically, when necessary.
You can go two ways: a list is always an option for random-place insertions; however, since you allocate every element separately, this will have some performance implications too. The other option, inserting in place in the deque, is not good either, because you will pay linear time for every insertion. Maybe your idea of inserting into a new deque is the best here: you pay twice as much memory, but on the other hand you always insert either at the end of the second deque or one element before that, which gives constant amortized time, and you still keep the container's good caching.
The number of copies done for std::vector/deque insert() etc. is proportional to the number of elements between the insert position and the end of the container (the number of elements that need to be shifted to make room). The worst case for a std::vector is O(N), when you insert at the front of the container. If you're inserting M elements, the worst case is therefore O(M*N), which isn't great.
There could also be a reallocation involved if the container's capacity is exceeded. You could prevent reallocation by ensuring that sufficient space was reserve()'d up front.
Your other suggestion, copying to a second std::vector/deque container, could be better in that it can always be organised to achieve O(N) complexity, but at the cost of temporarily storing two containers.
Using a std::list would allow you to achieve in-place O(1) inserts, but at the cost of additional memory overhead (storing the list pointers etc.) and reduced memory locality (list nodes are not allocated contiguously). You could improve the memory locality by using a pooled memory allocator (Boost pools, maybe?).
Overall you'd have to benchmark to really sort out which is "the fastest" approach.
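For instance, a minimal timing harness with std::chrono (the workload and sizes are placeholders; substitute your own element type and mutation pattern):

#include <chrono>
#include <iostream>
#include <list>
#include <vector>

template <typename F>
long long time_ms(F f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main()
{
    const int n = 100000;

    std::cout << "vector front inserts: " << time_ms([&] {
        std::vector<int> v;
        for (int i = 0; i < n; ++i)
            v.insert(v.begin(), i); // worst case: shifts everything each time
    }) << " ms\n";

    std::cout << "list front inserts:   " << time_ms([&] {
        std::list<int> l;
        for (int i = 0; i < n; ++i)
            l.insert(l.begin(), i); // O(1) per insert
    }) << " ms\n";
}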
Hope this helps.
If you need fast inserts in the middle but don't care about random access, vector and deque are definitely not for you: for those, every time you insert something, all elements between that one and the end have to be moved. Of the built-in containers, list is almost certainly your best bet. However, a better data structure for your scenario would probably be a VList, because it provides better cache locality, but that's not provided by the C++ standard library. The Wikipedia page links to a C++ implementation, but from a quick view of the interface it doesn't seem to be completely STL-compatible; I don't know if this is an issue for you.
Of course, in the end the only way to be sure which is the optimal solution is to measure the performance.