Related
It isn't difficult to find information on the big-O time behavior of STL container operations. However, we operate in a hard real-time environment, and I'm having a lot more trouble finding information on their heap memory usage behavior.
In particular I had a developer come to me asking about std::unordered_map. We're allowed to be non-realtime at startup, so he was hoping to perform a .reserve() at startup time. However, he's finding he gets overruns at runtime. The operations he uses are lookups, insertions, and deletions with .erase().
I'm a little worried about whether that .reserve() actually prevents later runtime memory allocations (I don't really understand the explanation of what it does with respect to heap usage), but for .erase() in particular I don't see any guarantee whatsoever that it won't ask the heap for a dynamic deallocation when called.
So the question is: what are the specified heap interactions (if any) for std::unordered_map::erase, and, if it does perform deallocations, is there some kind of trick that can be used to avoid them?
The standard doesn't specify container allocation patterns per se; they are effectively derived from the iterator/reference invalidation rules. For example, vector::insert only invalidates all references if the number of elements inserted causes the size of the container to exceed its capacity, which means a reallocation happened.
By contrast, the only operations on unordered_map which invalidate references are those which actually remove that particular element. Even a rehash (which likely allocates memory) does not invalidate references (which is why reserve changes nothing here).
This means that each element must be stored separately from the hash table itself. They are individual nodes (which is why it has a node_type extraction interface), and must be able to be allocated and deallocated individually.
So it is reasonable to assume that each insertion or erasure represents at least one allocation/deallocation.
If you're all right with nodes continuing to consume memory, even after they've been removed from the container, you could pretty easily write an Allocator class that basically made deallocation a NOP.
Quite a few real-time systems basically allocate all the memory they're going to use up-front, then once they've finished initialization they neither allocate nor release memory. This would allow you to do pretty much the same thing with an unordered_map.
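For illustration only, here is a rough sketch of what such an allocator could look like, assuming a single fixed-size arena reserved at startup. The names (ArenaAllocator, kArenaBytes) are made up for this example, the sketch is not thread-safe, and, as noted above, erased nodes keep consuming arena space until shutdown:
#include <cstddef>
#include <new>

namespace detail {
    constexpr std::size_t kArenaBytes = 1 << 20;   // size it for your startup worst case

    inline unsigned char* arena() {
        alignas(std::max_align_t) static unsigned char a[kArenaBytes];
        return a;
    }
    inline std::size_t& offset() { static std::size_t o = 0; return o; }
}

template <class T>
struct ArenaAllocator {
    using value_type = T;

    ArenaAllocator() noexcept = default;
    template <class U> ArenaAllocator(const ArenaAllocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        const std::size_t bytes = n * sizeof(T);
        const std::size_t aligned =
            (detail::offset() + alignof(T) - 1) / alignof(T) * alignof(T);
        if (aligned + bytes > detail::kArenaBytes) throw std::bad_alloc();
        detail::offset() = aligned + bytes;
        return reinterpret_cast<T*>(detail::arena() + aligned);
    }

    // The whole point: erase()/rehash never return memory to the heap.
    void deallocate(T*, std::size_t) noexcept {}
};

template <class T, class U>
bool operator==(const ArenaAllocator<T>&, const ArenaAllocator<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const ArenaAllocator<T>&, const ArenaAllocator<U>&) noexcept { return false; }

// Usage sketch:
// std::unordered_map<Key, Value, std::hash<Key>, std::equal_to<Key>,
//                    ArenaAllocator<std::pair<const Key, Value>>> table;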
That said, I'm somewhat skeptical about the benefit in this case. The main strength of unordered_map is supporting insertion and deletion that are usually fast. If you're not going to be doing insertion at runtime, chances are pretty good that it's not a particularly great choice.
If it's a collection that's mostly filled during initialization, then used mostly as-is, with a few items being "removed", but no more being inserted after you finish initialization, you're likely to be better off with a simple sorted array and an interpolating search (or, if the data is distributed extremely unpredictably, maybe a binary search--but an interpolating search is usually better). In this case, I'd handle removal by simply adding a boolean to each item saying whether that item is valid or not. Erase by setting that value to false. If you find such a value during a search, you basically just ignore it.
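A rough sketch of that idea (Entry and FrozenMap are purely illustrative names, and I'm using std::lower_bound, i.e. a binary search, for brevity where an interpolating search could be substituted):
#include <algorithm>
#include <vector>

struct Entry {
    int  key;
    int  value;
    bool valid;     // erase() just flips this to false
};

struct FrozenMap {
    std::vector<Entry> items;   // filled and sorted by key during initialization

    const Entry* find(int key) const {
        auto it = std::lower_bound(items.begin(), items.end(), key,
                                   [](const Entry& e, int k) { return e.key < k; });
        if (it == items.end() || it->key != key || !it->valid)
            return nullptr;     // missing or "erased" entries are simply ignored
        return &*it;
    }

    void erase(int key) {
        // No deallocation and no node unlinking: just mark the slot invalid.
        auto it = std::lower_bound(items.begin(), items.end(), key,
                                   [](const Entry& e, int k) { return e.key < k; });
        if (it != items.end() && it->key == key)
            it->valid = false;
    }
};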
In C++11, shrink_to_fit was introduced to complement certain STL containers (e.g., std::vector, std::deque, std::string).
In short, its main functionality is to request that the container it is associated with reduce its capacity to fit its size. However, this request is non-binding, and the container implementation is free to optimize otherwise and leave the vector with a capacity greater than its size.
Furthermore, in a previous SO question the OP was discouraged from using shrink_to_fit to reduce the capacity of his std::vector to its size. The reasons not to do so are quoted below:
shrink_to_fit does nothing or it gives you cache locality issues and it's O(n) to execute (since you have to copy each item to their new, smaller home). Usually it's cheaper to leave the slack in memory. @Massa
Could someone be so kind as to address the following questions:
Do the arguments in the quotation hold?
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector)?
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after all?
Do the arguments in the quotation hold?
Measure and you will know. Are you constrained in memory? Can you figure out the correct size up front? It will be more efficient to reserve than it will be to shrink after the fact. In general, I am inclined to agree with the premise that most uses are probably fine with the slack.
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector)?
The comment does not apply only to shrink_to_fit, but to any other way of shrinking. Given that you cannot realloc in place, shrinking involves acquiring a different chunk of memory and copying over there, regardless of which mechanism you use for shrinking.
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after all?
The request is non-binding, but the alternatives don't have better guarantees. The question is whether shrinking makes sense: if it does, then it makes sense to provide a shrink_to_fit operation that can take advantage of the fact that the objects are being moved to a new location. I.e., if the type T has a noexcept(true) move constructor, it will allocate the new memory and move the elements.
While you can achieve the same externally, this interface simplifies the operation. The equivalent to shrink_to_fit in C++03 would have been:
std::vector<T>(current).swap(current);
But the problem with this approach is that when the copy is made into the temporary, it does not know that current is going to be replaced; there is nothing that tells the library that it can move the held objects. Note that using std::move(current) would not achieve the desired effect, as it would move the whole buffer, maintaining the same capacity().
Implementing this externally would be a bit more cumbersome:
{
    // requires <iterator>, <utility> and <vector>
    std::vector<T> copy;
    if (noexcept(T(std::move(std::declval<T>())))) {
        copy.assign(std::make_move_iterator(current.begin()),
                    std::make_move_iterator(current.end()));
    } else {
        copy.assign(current.begin(), current.end());
    }
    copy.swap(current);
}
Assuming that I got the if condition right... which is probably not what you want to write every time that you want this operation.
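For what it's worth, here is a sketch of the same operation with the condition spelled via a standard type trait instead of the noexcept operator (shrink is just an illustrative name; as above, both branches must compile, so T has to be copyable):
#include <iterator>
#include <type_traits>
#include <vector>

template <class T>
void shrink(std::vector<T>& current) {
    std::vector<T> copy;
    if (std::is_nothrow_move_constructible<T>::value) {
        copy.assign(std::make_move_iterator(current.begin()),
                    std::make_move_iterator(current.end()));
    } else {
        copy.assign(current.begin(), current.end());
    }
    copy.swap(current);
}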
Will the arguments hold?
As the arguments are originally mine, don't mind if I defend them, one by one:
Either shrink_to_fit does nothing (...)
As it was mentioned, the standard says (many times, but in the case of vector it's section 23.3.7.3...) that the request is non-binding in order to allow an implementation latitude for optimizations. This means that the implementation can define shrink_to_fit as a no-op.
(...) or it gives you cache locality issues
In the case that shrink_to_fit is not implemented as a no-op, you have to allocate a new underlying buffer with capacity size(), copy- (or, in the best case, move-) construct all your N = size() new items from the old ones, destroy all the old ones (in the move case this should be optimized, but it's possible that this involves another loop over the old buffer), and then destroy the old buffer itself. This is done, in libstdc++-4.9, exactly as David Rodriguez has described, by
_Tp(__make_move_if_noexcept_iterator(__c.begin()),
    __make_move_if_noexcept_iterator(__c.end()),
    __c.get_allocator()).swap(__c);
and in libc++-3.5, by a function in __alloc_traits that does approximately the same.
Oh, and an implementation absolutely cannot rely on realloc (even if it uses malloc inside ::operator new for its memory allocations) because realloc, if it cannot shrink in-place, will either leave the memory alone (no-op case) or make a bitwise copy (and miss the opportunity for readjusting pointers, etc. that the proper C++ copying/moving constructors would give).
Sure, one can write a shrinkable memory allocator and use it in the constructors of one's vectors.
In the easy case where the vectors are larger than the cache lines, all that movement puts pressure on the cache.
and it's O(n)
If n = size(), I think it was established above that, at the very least, you have to do one n-sized allocation, n copy or move constructions, n destructions, and one old_capacity-sized deallocation.
usually it's cheaper just to leave the slack in memory
Obviously, unless you are really pressed for free memory (in which case it might be wiser to save your data to the disk and re-load it later on demand...)
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector)?
The proper way is still shrink_to_fit... you just have to either not rely on it or know very well your implementation!
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after all?
There is no better way, but the reason for the existence of shrink_to_fit is, AFAICT, that sometimes your program might feel memory pressure and it's one way of treating it. Not a very good way, but still.
HTH!
If yes, what's the proper way of shrinking an STL container's capacity to its size (at least for std::vector)?
The 'swap trick' will trim a vector to the exact size required (from More Effective STL):
vector<Person>(persons).swap(persons);
Particularly useful when the vector is empty, to release all memory:
vector<Person>().swap(persons);
Vectors were constantly tripping my unit tester's memory leak detection code because of retained allocations of unused space, and this sorted them out perfectly.
This is the kind of example where I really don't care about runtime efficiency (size or speed), but I do care about exact memory usage.
And if there's a better way to shrink a container, what's the reason for the existence of shrink_to_fit after all?
I really don't see the point of providing a function that can legally do absolutely nothing.
I cheered when I saw it had been introduced, then despaired when I found it couldn't be relied upon.
Perhaps we'll see maybe_sort() in the next version.
I need a char array that will dynamically change in size. I do not know how big it can get, so preallocating is not an option. It might never get bigger than 20 bytes one time; the next time it may get up to 5 KB...
I want the allocation to behave like a std::vector's.
I thought of using a std::vector<char>, but all those push_backs seem like they waste time:
strVec.clear();
for(size_t i = 0; i < varLen; ++i)
{
    strVec.push_back(0);
}
Is this the best I can do or is there a way to add a bunch of items to a vector at once? Or maybe a better way to do this.
Thanks
std::vector doesn't allocate memory every time you call push_back, but only when the size would become bigger than the capacity.
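If you want to see this for yourself, a throwaway sketch like the following prints only the handful of push_back calls that actually reallocate (the exact growth pattern is implementation-defined):
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<char> v;
    std::size_t last = 0;
    for (int i = 0; i < 1000; ++i) {
        v.push_back(0);
        if (v.capacity() != last) {          // capacity only changes on reallocation
            last = v.capacity();
            std::printf("size=%zu capacity=%zu\n", v.size(), last);
        }
    }
}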
First, don't optimize until you've profiled your code and determined that there is a bottleneck. Consider the costs to readability, accessibility, and maintainability by doing something clever. Make sure any plan you take won't preclude you from working with Unicode in future. Still here? Alright.
As others have mentioned, vectors reserve more memory than they use initially, and push_back usually is very cheap.
There are cases when using push_back reallocates memory more than is necessary, however. For example, one million calls to myvector.push_back() might trigger 10 or 20 reallocations of myvector. On the other hand, inserting a range into a vector at its end will cause at most one reallocation of myvector (assuming the iterators are at least forward iterators, so the required size can be computed up front). I generally prefer the insertion idiom to the reserve / push_back idiom for both speed and readability reasons.
myvector.insert(myvector.end(), inputBegin, inputEnd)
If you do not know the size of your string in advance and cannot tolerate the hiccups caused by reallocations, perhaps because of hard real-time constraints, then maybe you should use a linked list. A linked list will have consistent per-operation cost, at the price of much worse average performance.
If all of this isn't enough for your purposes, consider other data structures such as a rope or post back with more specifics about your case.
From Scott Meyers' Effective STL, IIRC
You can use the resize member function to add a bunch. However, I would not expect that push_back would be slow, especially if the vector's internal capacity is already non-trivial.
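For example, something along these lines (a sketch reusing the question's strVec / varLen names):
#include <cstddef>
#include <vector>

std::vector<char> make_buffer(std::size_t varLen) {
    std::vector<char> strVec;
    strVec.resize(varLen, 0);      // varLen zero bytes, at most one allocation
    // strVec.assign(varLen, 0);   // equivalent here, for a fresh buffer each time
    return strVec;
}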
Is this the best I can do or is there a way to add a bunch of items to a vector at once? Or maybe a better way to do this.
push_back isn't very slow, it just compares the size to the current capacity and reallocates if necessary. The comparison may work out to essentially zero time because of branch prediction and superscalar execution on the CPU. The reallocation is performed O(log N) times, so the vector uses up to twice as much memory as needed but time spent on reallocation seldom adds up to anything.
To insert several items at once, use insert. There are a few overloads, the only trick is that you need to explicitly pass end.
my_vec.insert( my_vec.end(), num_to_add, initial_value );
my_vec.insert( my_vec.end(), first, last ); // iterators or pointers
For the second form, you could put the values in an array first and then copy the array to the end of the vector. But this might add as much complexity as it removes. That's how it goes with micro-optimization. Only attempt to optimize if you know there's a measurable gain to be had.
Since
they are both contiguous memory containers;
feature-wise, deque has almost everything vector has, plus more, since it is also efficient to insert at the front;
why would anyone prefer std::vector to std::deque?
Elements in a deque are not contiguous in memory; vector elements are guaranteed to be. So if you need to interact with a plain C library that needs contiguous arrays, or if you care (a lot) about spatial locality, then you might prefer vector. In addition, since there is some extra bookkeeping, other ops are probably (slightly) more expensive than their equivalent vector operations. On the other hand, using many/large instances of vector may lead to unnecessary heap fragmentation (slowing down calls to new).
Also, as pointed out elsewhere on StackOverflow, there is more good discussion here: http://www.gotw.ca/gotw/054.htm.
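As a small illustration of the contiguity point (dump is just an example name): a vector's buffer can be handed straight to a C-style (pointer, length) API, which you cannot safely do with a deque:
#include <cstdio>
#include <vector>

void dump(const std::vector<double>& v, std::FILE* f) {
    // Legal because vector elements are guaranteed to be contiguous.
    std::fwrite(v.data(), sizeof(double), v.size(), f);
}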
To know the difference, one should know how deque is generally implemented. Memory is allocated in blocks of equal size, and the blocks are tied together through an index structure (an array, or possibly a vector, of pointers to them).
So to find the nth element, you find the appropriate block and then access the element within it. This is constant time, because it is always exactly two lookups, but that is still more work than a vector's single lookup.
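Purely as an illustration of that two-step lookup (ToyDeque, BlockSize and start are made-up names, not how libstdc++ or libc++ actually spell it):
#include <cstddef>
#include <vector>

template <class T, std::size_t BlockSize = 8>
struct ToyDeque {
    std::vector<T*> blocks;   // the "map" of equally sized blocks
    std::size_t start = 0;    // offset of the first element inside blocks[0]

    T& operator[](std::size_t i) {
        const std::size_t j = start + i;
        return blocks[j / BlockSize][j % BlockSize];   // lookup 1: block, lookup 2: element
    }
};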
vector also works well with APIs that want a contiguous buffer because they are either C APIs or are more versatile in being able to take a pointer and a length. (Thus you can have a vector underneath or a regular array and call the API from your memory block).
Where deque has its biggest advantages are:
When growing or shrinking the collection from either end
When you are dealing with very large collection sizes.
When dealing with bools and you really want bools rather than a bitset.
The second of these is lesser known, but for very large collection sizes:
The cost of reallocation is large
The overhead of having to find a contiguous memory block is restrictive, so you can run out of memory faster.
When I was dealing with large collections in the past and moved from a contiguous model to a block model, we were able to store about 5 times as large a collection before we ran out of memory in a 32-bit system. This is partly because, when re-allocating, it actually needed to store the old block as well as the new one before it copied the elements over.
Having said all this, you can get into trouble with std::deque on systems that use "optimistic" memory allocation. While a vector's attempt to request a large buffer for a reallocation will probably be rejected at some point with a bad_alloc, the optimistic nature of the allocator is likely to always grant the request for the smaller buffer a deque asks for, and that is likely to cause the operating system to kill a process to try to acquire some memory. Whichever process it picks might not be too pleasant.
The workarounds in such a case are either setting system-level flags to override optimistic allocation (not always feasible) or managing the memory somewhat more manually, e.g. using your own allocator that checks for memory usage or similar. Obviously not ideal. (Which may answer your question as to why one might prefer vector...)
I've implemented both vector and deque multiple times. deque is hugely more complicated from an implementation point of view. This complication translates to more code and more complex code. So you'll typically see a code size hit when you choose deque over vector. You may also experience a small speed hit if your code uses only the things the vector excels at (i.e. push_back).
If you need a double ended queue, deque is the clear winner. But if you're doing most of your inserts and erases at the back, vector is going to be the clear winner. When you're unsure, declare your container with a typedef (so it is easy to switch back and forth), and measure.
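Something as simple as this is enough for the switch-and-measure approach (Container is an arbitrary name):
#include <deque>
#include <vector>

// typedef std::deque<int> Container;   // flip between these two and re-run your benchmark
typedef std::vector<int> Container;

Container work_items;   // the rest of the code only ever refers to Container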
std::deque doesn't have guaranteed continuous memory - and it's often somewhat slower for indexed access. A deque is typically implemented as a "list of vector".
According to http://www.cplusplus.com/reference/stl/deque/, "unlike vectors, deques are not guaranteed to have all its elements in contiguous storage locations, eliminating thus the possibility of safe access through pointer arithmetics."
Deques are a bit more complicated, in part because they don't necessarily have a contiguous memory layout. If you need that feature, you should not use a deque.
(Previously, my answer brought up a lack of standardization (from the same source as above, "deques may be implemented by specific libraries in different ways"), but that actually applies to just about any standard library data type.)
A deque is a sequence container which allows random access to its elements, but it is not guaranteed to have contiguous storage.
I think it's a good idea to make a performance test for each case, and make the decision based on those tests.
I'd prefer std::deque to std::vector in most cases.
You wouldn't prefer vector to deque according to these test results (with source).
Of course, you should test in your app/environment, but in summary:
push_back is basically the same for all
insert, erase in deque are much faster than list and marginally faster than vector
Some more musings, and a note to consider circular_buffer.
On the one hand, vector is quite frequently just plain faster than deque. If you don't actually need all of the features of deque, use a vector.
On the other hand, sometimes you do need features which vector does not give you, in which case you must use a deque. For example, I challenge anyone to attempt to rewrite this code, without using a deque, and without enormously altering the algorithm.
Note that vector memory is re-allocated as the array grows. If you have pointers to vector elements, they will become invalid.
Also, if you erase an element, iterators at and after the point of erasure become invalid (this applies inside a range-based for(auto...) loop as well, since it holds iterators under the hood).
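A tiny sketch of that invalidation (whether a particular push_back reallocates is implementation-dependent, but growth will trigger it eventually):
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(1);
    int* p = &v[0];                 // pointer into the current buffer
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);             // growth reallocates the buffer...
    (void)p;                        // ...so p is now dangling; don't dereference it
}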
Edit: changed 'deque' to 'vector'
This might seem daft, for which I'm sorry: I've been writing some code for the Playstation 2 for uni. I am writing a sort of API for the Graphic Synthesizer. I am using a syntax similar to that of OpenGL, which is a state machine.
So the input would be something like
gsBegin(GS_TRIANGLE);
gsColor(...);
gsVertex3f(...);
gsVertex3f(...);
gsVertex3f(...);
gsEnd();
This is great so far for lines/triangles/quads with a fixed number of vertices; however, things like a LINE_STRIP or TRIANGLE_FAN take an undetermined number of points. I have been warned off STL containers several times because of the push_back() method in this situation, given the time-sensitive nature of the code (is this justified?).
If it's not justified, what would be a better way of dealing with the undetermined-count situation? Currently I have an array that can hold 30 vertices at a time; is this the best way of dealing with this kind of situation?
Vector's push_back has amortized constant time complexity because it exponentially increases the capacity of the vector. (I'm assuming you're using vector, because it's ideal for this situation.) However, in practice, rendering code is very performance sensitive, so if the push_back causes a vector reallocation, performance may suffer.
You can prevent reallocations by reserving the capacity before you add to it. If you call myvec.reserve(10);, you are guaranteed to be able to add 10 elements before the vector reallocates.
However, this still requires knowing ahead of time how many elements you need. Also, if you create and destroy lots of different vectors, you're still doing a lot of memory allocation. Instead, just use one vector for all vertices, and re-use it. Calling clear() returns it to empty while keeping its allocated capacity. This way you don't actually need to reserve anything - the first few times you use it it'll reallocate and grow, but once it reaches its peak size, it won't need to reallocate any more. The nice thing about this is the vector finds the approximate size it needs to be, and once it's "warmed up" there's no further allocation so it is high performance.
In short:
Use a single persistently stored std::vector
push_back as much as you like
When you're done, clear().
In practice this will perform as well as a C array, but without a hard limit on size.
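A minimal sketch of that pattern (Vertex and submit are placeholders, not part of the question's gs* API):
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> scratch;                    // one persistent, reused vector

void build_strip(const Vertex* in, std::size_t n) {
    scratch.clear();                            // size -> 0, capacity retained
    for (std::size_t i = 0; i < n; ++i)
        scratch.push_back(in[i]);               // no reallocation once "warmed up"
    // submit(scratch.data(), scratch.size());  // hand the contiguous block to the renderer
}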
University, eh? Just tell them push_back has amortized constant time complexity and they'll be happy.
First, avoid using glBegin / glEnd if you can, and instead use something like glDrawArrays or glDrawElements.
push_back() on a std::vector is a quick operation unless the array needs to grow in size when the operation occurs. Set the vector capacity as high as you think you will need it to be and you should see minimal overhead. 'Raw' arrays will almost always be slightly faster, but then you have to deal with using 'raw' arrays.
There is always the alternative of using a deque.
A deque is very much like a vector, apart from contiguity. Basically, it's often implemented as a vector of arrays.
This means a lower allocation cost, but member access might be slightly slower (though constant) because of the double dereference, so I am unsure if it's profitable in your case.
There is also the LLVM alternative: SmallVector<T,N>, which preallocates space (right inside the vector object) for N elements, and will simply fall back to a traditional vector-like implementation once the size grows too much.
The drawback to using std::vector in this kind of situation is making sure you manage your memory allocation properly. On systems like the PS2 (PS3 seems to be a bit better at this), memory allocation is insanely slow and if you don't reserve the right amount of space in the vector to begin with (and it has to resize several times when adding items), you will slow your game to a creeping crawl. If you know what your max size is going to be and reserve it when you create the vector, you won't have a problem.
That said, if this vector is going to be a temporary/local variable, you will still be reallocating memory every time your function is called. So if this function is called every frame, you will still have the performance problem. You can get around this by using a custom allocator and/or making the vector global (or a member variable to a class that will exist during your game loop).
You can always equip the container you want to use with a proper allocator that takes into account the limitations of the platform and the expected grow/shrink scenarios, etc.