It isn't difficult to find information on the big-O time behavior of STL container operations. However, we operate in a hard real-time environment, and I'm having a lot more trouble finding information on their heap memory usage behavior.
In particular I had a developer come to me asking about std::unordered_map. We're allowed to be non-realtime at startup, so he was hoping to perform a .reserve() at startup time. However, he's finding he gets overruns at runtime. The operations he uses are lookups, insertions, and deletions with .erase().
I'm a little worried about whether that .reserve() actually prevents later runtime memory allocations (I don't really understand the explanation of what it does with respect to heap usage), but for .erase() in particular I don't see any guarantee whatsoever that it won't be asking the heap for a dynamic deallocation when called.
So the question is: what are the specified heap interactions (if any) for std::unordered_map::erase, and if it actually does deallocations, is there some kind of trick that can be used to avoid them?
The standard doesn't specify container allocation patterns per se. These are effectively derived from the iterator/reference invalidation rules. For example, vector::insert only invalidates all references if the number of elements inserted causes the size of the container to exceed its capacity, which means a reallocation happened.
By contrast, the only operations on unordered_map which invalidate references are those which actually remove that particular element. Even a rehash (which likely allocates memory) does not invalidate references (this is why reserve changes nothing).
This means that each element must be stored separately from the hash table itself. They are individual nodes (which is why it has a node_type extraction interface), and must be able to be allocated and deallocated individually.
So it is reasonable to assume that each insertion or erasure represents at least one allocation/deallocation.
If you're all right with nodes continuing to consume memory, even after they've been removed from the container, you could pretty easily write an Allocator class that basically made deallocation a NOP.
Quite a few real-time systems basically allocate all the memory they're going to use up-front, then once they've finished initialization they neither allocate nor release memory. This would allow you to do pretty much the same thing with an unordered_map.
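A minimal sketch of that up-front strategy (all names here are mine, and it assumes C++17): a bump allocator over a fixed arena whose deallocate is a NOP, plugged into std::unordered_map. Note that erased nodes keep consuming arena space, so this only works if the total number of allocations over the program's lifetime fits in the arena.

    #include <cstddef>
    #include <new>
    #include <unordered_map>
    #include <utility>

    alignas(std::max_align_t) static std::byte g_arena[1 << 20];  // reserved up-front
    static std::size_t g_offset = 0;

    template <class T>
    struct NoFreeAllocator {
        using value_type = T;
        NoFreeAllocator() = default;
        template <class U> NoFreeAllocator(const NoFreeAllocator<U>&) {}

        T* allocate(std::size_t n) {
            g_offset = (g_offset + alignof(T) - 1) & ~(alignof(T) - 1);  // align
            if (g_offset + n * sizeof(T) > sizeof(g_arena))
                throw std::bad_alloc{};                  // arena exhausted
            T* p = reinterpret_cast<T*>(g_arena + g_offset);
            g_offset += n * sizeof(T);
            return p;
        }
        void deallocate(T*, std::size_t) {}              // NOP: nothing is freed
    };

    template <class T, class U>
    bool operator==(const NoFreeAllocator<T>&, const NoFreeAllocator<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const NoFreeAllocator<T>&, const NoFreeAllocator<U>&) { return false; }

    using RtMap = std::unordered_map<int, int, std::hash<int>, std::equal_to<int>,
                                     NoFreeAllocator<std::pair<const int, int>>>;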
That said, I'm somewhat skeptical about the benefit in this case. The main strength of unordered_map is supporting insertion and deletion that are usually fast. If you're not going to be doing insertion at runtime, chances are pretty good that it's not a particularly great choice.
If it's a collection that's mostly filled during initialization and then used mostly as-is, with a few items being "removed" but no more being inserted after you finish initialization, you're likely to be better off with a simple sorted array and an interpolating search (or, if the data is distributed extremely unpredictably, maybe a binary search--but an interpolating search is usually better). In this case, I'd handle removal by simply adding a boolean to each item saying whether that item is valid or not: erase by setting that value to false, and if you find such a value during a search, just ignore it.
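Here's a minimal sketch of that scheme (the Entry type and helper names are mine), using a binary search for brevity; an interpolating search would follow the same "skip invalidated entries" pattern:

    #include <algorithm>
    #include <vector>

    struct Entry {
        int key;
        int value;
        bool valid = true;   // "erasing" just clears this flag
    };

    // Binary search over the array (sorted by key); a tombstoned match
    // is treated as absent.
    const Entry* find(const std::vector<Entry>& table, int key) {
        auto it = std::lower_bound(table.begin(), table.end(), key,
                                   [](const Entry& e, int k) { return e.key < k; });
        if (it != table.end() && it->key == key && it->valid)
            return &*it;
        return nullptr;
    }

    // No deallocation and no element movement: just flip the flag.
    void erase(std::vector<Entry>& table, int key) {
        auto it = std::lower_bound(table.begin(), table.end(), key,
                                   [](const Entry& e, int k) { return e.key < k; });
        if (it != table.end() && it->key == key)
            it->valid = false;
    }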
Related
I'm learning about C++ and have a conceptual question. Let's say I have a vector. I know that my vector is stored in contiguous memory, but let's say my vector keeps growing and runs out of room to keep the memory contiguous. How can I predict where in memory the vector will go? I'm excluding the option of using functions that tell the vector where it should be in memory.
If it "runs out of room to keep the memory contiguous", then it simply won't grow. Attempting to add items past the currently allocated size will (typically) result in its throwing an exception (though technically, it's up to the allocator object to decide what to do--it's responsible for memory allocation, and responding when that's not possible.
Note, however, that this could result from running out of address space (especially on a 32-bit machine) rather than running out of actual memory. A typical virtual memory manager can reallocate physical pages (e.g., 4 KB or 8 KB chunks) and write data to the paging file if necessary to free physical memory if needed--but when/if there's not enough contiguous address space, there's not much that can be done.
The answer depends highly on your allocation strategy, but in general, the answer is no. Most allocators do not provide you with information where the next allocation will occur. If you were writing a custom allocator, then you could potentially make this information accessible, but doing so is not necessarily a good idea unless your use case specifically requires this knowledge.
The realloc function is the only C function which will attempt to grow your memory in place, and it makes no guarantees that it will do so.
Neither new nor malloc provide any information for where the "next" allocation will take place. You could potentially guess, if you knew the exact implementation details for your specific compiler, but this would be very unwise to rely on in a real program. Regarding specifically the std::allocator used for std::vector, it also does not provide details about where future allocations will take place.
Even if you could predict it in a particular situation, it would be extremely fragile: all it takes to "break" your prediction is for one function you call to change so that it makes another call to new or malloc [unless you are using a very specific allocation method--one different from the "usual" method].
If you KNOW that you need a certain size, you can use std::vector::resize() to set the size of the vector [or std::vector<int> vec(10000); to create one pre-sized to 10000 elements, for example]. That allocation is of course not guaranteed to succeed, but it does guarantee that you never need "enough space to hold roughly 3x the current content", which is what happens when std::vector grows via push_back [and if you are REALLY unlucky, your vector ends up occupying 2*n slots while holding only n+1 elements, leaving n-1 unused: your size was n, you add ONE more element, the capacity doubles to 2*n, and you only actually required one more slot...].
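A minimal illustration of the pre-sizing options (reserve() is the variant that sets capacity without constructing elements):

    #include <vector>

    int main() {
        std::vector<int> a(10000);   // pre-sized: 10000 value-initialized elements
        std::vector<int> b;
        b.reserve(10000);            // capacity only; size stays 0
        for (int i = 0; i < 10000; ++i)
            b.push_back(i);          // no reallocation up to the reserved size
    }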
The internal workings of STL containers are kept private for good reasons. You should never be accessing any container elements through any mechanism other than the appropriate iterators; and it is not possible to acquire one of those on an element that does not yet exist.
You could, however, supply an allocator and use that to deterministically place future allocations.
Can you predict where in memory a vector might move when growing?
As others like EJP, Jerry and Mats have said, you cannot determine the location of a "grown" vector until after it grows. There are some corner cases, like the allocator providing a block of memory that's larger than required, so that the vector does not actually move after a grow. But it's not something you should depend on.
In general, stacks grow down and heaps grow up. This is an artifact from the old memory days. Your code segment was sandwiched between them, which ensured that a runaway stack or heap would overwrite your program's own code segment and eventually cause an illegal instruction. So you might be able to guess that the new vector is going to be higher in memory than the old vector, because the vector is probably using heap memory. But it's not really useful information.
If you are devising a strategy for locating elements after a grow, then use an index and not an iterator. Iterators are invalidated after inserts and deletes (including the grow).
For example, suppose you are parsing the vector and you are looking for the data that follows -----BEGIN CERTIFICATE-----. Once you know the offset of the data (byte 27 in the vector, just past the 27-character header), you can always relocate it in constant time with v.begin() + 27. If you only have part of the certificate and later append the tail of the data and the -----END CERTIFICATE----- (and the vector grows), the data is still located at v.begin() + 27.
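As a small sketch of that index-based approach (the data here is purely illustrative):

    #include <cstddef>
    #include <string>
    #include <vector>

    int main() {
        const std::string header = "-----BEGIN CERTIFICATE-----";
        std::vector<char> v(header.begin(), header.end());
        std::size_t data_offset = v.size();       // data begins right after the header
        v.insert(v.end(), {'A', 'B', 'C'});       // growth may move the buffer...
        char first = *(v.begin() + data_offset);  // ...but the offset still works
        (void)first;                              // first == 'A'
    }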
No, in practical terms you can't predict where it will go if it has to move due to resizing. However, it isn't so random that you could use it as a random number generator (;
I have an STL container (std::list) that I am constantly reusing. By this I mean I
push a number of elements into the container
remove the elements during processing
clear the container
rinse and repeat a large number of times
When profiling using callgrind I am seeing a large number of calls to new (malloc) and delete (free), which can be quite expensive. I am therefore looking for some way to preferably preallocate a reasonably large number of elements. I would also like my allocation pool to continue to increase until a high water mark is reached, and for the allocation pool to hang onto the memory until the container itself is deleted.
Unfortunately the standard allocator continually resizes the memory pool so I am looking for some allocator that will do the above without me having to write my own.
Does such an allocator exist and where can I find such an allocator?
I am working on both Linux using GCC and Android using STLport.
Edit: Placement new is ok; what I want to minimize is heap walking, which is expensive. I would also like all my objects to be as close to each other as possible to minimize cache misses.
It sounds like you may just be using the wrong kind of container: with a list, each element occupies a separate chunk of memory to allow individual inserts/deletes - so every addition/deletion from the list will require a separate new()/delete().
If you can use a std::vector instead, then you can reserve the required size before adding the items.
Also for deletion, it's usually best not to remove the items individually. Just call clear() on the container to empty it.
Edit: You've now made it clear in the comments that your 'remove the elements during processing' step is removing elements from the middle of the list and must not invalidate iterators, so switching to a vector is not suitable. I'll leave this answer for now (for the sake of the comment thread!)
The allocator boost::fast_pool_allocator is designed for use with std::list.
The documentation claims that "If you are seriously concerned about performance, use boost::fast_pool_allocator when dealing with containers such as std::list, and use boost::pool_allocator when dealing with containers such as std::vector."
Note that boost::fast_pool_allocator is a singleton and by default it never frees allocated memory. However, it is implemented using boost::singleton_pool and you can make it free memory by calling the static functions boost::singleton_pool::release_memory() and boost::singleton_pool::purge_memory().
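For illustration, a minimal usage sketch (assuming Boost is available; the pooling behavior is as described in the Boost.Pool documentation):

    #include <boost/pool/pool_alloc.hpp>
    #include <list>

    int main() {
        // The list rebinds the allocator to its internal node type, so the
        // singleton pool hands out node-sized blocks and reuses them.
        std::list<int, boost::fast_pool_allocator<int>> l;
        for (int i = 0; i < 100000; ++i) l.push_back(i);
        l.clear();                        // nodes return to the pool,
        for (int i = 0; i < 100000; ++i)  // not to the system heap,
            l.push_back(i);               // and are reused here
        // By default the pool keeps its memory for the process lifetime;
        // releasing it early requires the singleton_pool release_memory()/
        // purge_memory() calls mentioned above, keyed on the
        // (implementation-specific) node size.
    }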
You can try benchmarking your app with http://goog-perftools.sourceforge.net/doc/tcmalloc.html; I've seen some good improvements in some of my projects (no numbers at hand though, sorry).
EDIT: Seems the code/download has been moved there: http://code.google.com/p/gperftools/?redir=1
Comment was too short so I will post my thoughts as an answer.
IMO, new/delete can come only from two places in this context.
I believe std::list<T> is implemented with some kind of nodes, as lists normally are, for various reasons. Therefore, each insertion and removal of an element will have to result in new/delete of a node. Moreover, if an object of type T does any allocations and deallocations in its c'tor/d'tor, those will be called as well.
You can avoid recreating all of this standard machinery by iterating over existing nodes and reusing them instead of deleting them. You can use std::vector with std::vector::reserve, or std::array if you want to squeeze it down to C level.
Nonetheless, for every object created, a destructor must eventually be called. The only way I see to avoid repeated constructions and destructions is to use T::operator= when iterating over the container, or maybe some C++11 move semantics if that is suitable in your case.
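As a sketch of that reuse-by-assignment idea (types and names are mine, assuming a vector is acceptable): keep the high-water-mark storage alive and overwrite elements in place each cycle instead of destroying and recreating them.

    #include <cstddef>
    #include <vector>

    struct Item { int value = 0; };

    static std::vector<Item> pool;   // persists across processing cycles

    void process_batch(int n) {
        if (pool.size() < static_cast<std::size_t>(n))
            pool.resize(n);          // grows only past the high-water mark
        for (int i = 0; i < n; ++i)
            pool[i] = Item{i};       // operator= reuses existing storage
        // ... process pool[0..n) ...
    }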
Since
they are both contiguous memory containers;
feature-wise, deque has almost everything vector has, and more, since it is more efficient to insert at the front.
Why would anyone prefer std::vector to std::deque?
Elements in a deque are not contiguous in memory; vector elements are guaranteed to be. So if you need to interact with a plain C library that needs contiguous arrays, or if you care (a lot) about spatial locality, then you might prefer vector. In addition, since there is some extra bookkeeping, other ops are probably (slightly) more expensive than their equivalent vector operations. On the other hand, using many/large instances of vector may lead to unnecessary heap fragmentation (slowing down calls to new).
Also, as pointed out elsewhere on StackOverflow, there is more good discussion here: http://www.gotw.ca/gotw/054.htm .
To know the difference one should know how deque is generally implemented. Memory is allocated in blocks of equal sizes, and they are chained together (as an array or possibly a vector).
So to find the nth element, you find the appropriate block then access the element within it. This is constant time, because it is always exactly 2 lookups, but that is still more than the vector.
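A toy model of that two-step lookup (the block size and block table here are illustrative, not any particular implementation's layout):

    #include <cstddef>
    #include <cstdio>

    constexpr std::size_t kBlockSize = 4;   // tiny, for illustration

    int& deque_index(int** blocks, std::size_t i) {
        return blocks[i / kBlockSize][i % kBlockSize];  // block, then offset
    }

    int main() {
        int b0[kBlockSize] = {0, 1, 2, 3};
        int b1[kBlockSize] = {4, 5, 6, 7};
        int* blocks[] = {b0, b1};
        std::printf("%d\n", deque_index(blocks, 5));    // prints 5
    }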
vector also works well with APIs that want a contiguous buffer because they are either C APIs or are more versatile in being able to take a pointer and a length. (Thus you can have a vector underneath or a regular array and call the API from your memory block).
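For example (consume() here is a stand-in for any C-style pointer-and-length API):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Stand-in for a C API that takes (pointer, length).
    void consume(const int* data, std::size_t len) {
        for (std::size_t i = 0; i < len; ++i) std::printf("%d ", data[i]);
        std::printf("\n");
    }

    int main() {
        std::vector<int> v{1, 2, 3, 4};
        consume(v.data(), v.size());   // vector storage is guaranteed contiguous
        // std::deque offers no such guarantee, so there is no equivalent call.
    }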
Where deque has its biggest advantages are:
When growing or shrinking the collection from either end
When you are dealing with very large collection sizes.
When you are storing bools and you really want bools, rather than the packed bitset that std::vector<bool> gives you.
The second of these is lesser known, but for very large collection sizes:
The cost of reallocation is large
The overhead of having to find a contiguous memory block is restrictive, so you can run out of memory faster.
When I was dealing with large collections in the past and moved from a contiguous model to a block model, we were able to store about 5 times as large a collection before we ran out of memory in a 32-bit system. This is partly because, when re-allocating, it actually needed to store the old block as well as the new one before it copied the elements over.
Having said all this, you can get into trouble with std::deque on systems that use "optimistic" memory allocation. Whilst its attempts to request a large buffer size for a reallocation of a vector will probably get rejected at some point with a bad_alloc, the optimistic nature of the allocator is likely to always grant the request for the smaller buffer requested by a deque and that is likely to cause the operating system to kill a process to try to acquire some memory. Whichever one it picks might not be too pleasant.
The workarounds in such a case are either setting system-level flags to override optimistic allocation (not always feasible) or managing the memory somewhat more manually, e.g. using your own allocator that checks for memory usage or similar. Obviously not ideal. (Which may answer your question as to prefer vector...)
I've implemented both vector and deque multiple times. deque is hugely more complicated from an implementation point of view. This complication translates to more code and more complex code. So you'll typically see a code size hit when you choose deque over vector. You may also experience a small speed hit if your code uses only the things the vector excels at (i.e. push_back).
If you need a double ended queue, deque is the clear winner. But if you're doing most of your inserts and erases at the back, vector is going to be the clear winner. When you're unsure, declare your container with a typedef (so it is easy to switch back and forth), and measure.
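For instance, a minimal version of that typedef trick:

    #include <deque>
    #include <vector>

    using Container = std::vector<int>;   // flip to std::deque<int> and re-measure

    int main() {
        Container c;
        for (int i = 0; i < 1000000; ++i) c.push_back(i);
        while (!c.empty()) c.pop_back();
    }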
std::deque doesn't have guaranteed contiguous memory - and it's often somewhat slower for indexed access. A deque is typically implemented as an array of pointers to fixed-size blocks (roughly, a "vector of arrays").
According to http://www.cplusplus.com/reference/stl/deque/, "unlike vectors, deques are not guaranteed to have all its elements in contiguous storage locations, eliminating thus the possibility of safe access through pointer arithmetics."
Deques are a bit more complicated, in part because they don't necessarily have a contiguous memory layout. If you need that feature, you should not use a deque.
(Previously, my answer brought up a lack of standardization (from the same source as above, "deques may be implemented by specific libraries in different ways"), but that actually applies to just about any standard library data type.)
A deque is a sequence container which allows random access to its elements, but it is not guaranteed to have contiguous storage.
I think it's a good idea to run a performance test of each case and make the decision based on those tests.
I'd prefer std::deque over std::vector in most cases.
You wouldn't prefer vector to deque according to these test results (with source).
Of course, you should test in your app/environment, but in summary:
push_back is basically the same for all
insert, erase in deque are much faster than list and marginally faster than vector
Some more musings, and a note to consider circular_buffer.
On the one hand, vector is quite frequently just plain faster than deque. If you don't actually need all of the features of deque, use a vector.
On the other hand, sometimes you do need features which vector does not give you, in which case you must use a deque. For example, I challenge anyone to attempt to rewrite this code, without using a deque, and without enormously altering the algorithm.
Note that vector memory is re-allocated as the array grows. If you have pointers to vector elements, they will become invalid.
Also, if you erase an element, iterators at and after the point of erasure become invalid (and this applies inside range-based for(auto...) loops as well, since they use iterators under the hood).
Edit: changed 'deque' to 'vector'
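A small sketch of both hazards (the commented-out lines would be undefined behavior):

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int* p = &v[0];
        v.push_back(4);        // may reallocate; p is then dangling
        // int bad = *p;       // undefined behavior if the buffer moved
        int ok = v[0];         // re-indexing after growth is always safe
        (void)ok;

        auto it = v.begin() + 1;
        v.erase(v.begin());    // invalidates iterators at/after the erase point
        // int bad2 = *it;     // undefined behavior
    }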
I have noticed that the std::map implementation of Visual Studio (2010) allocates a new single block of memory for each node in its red-black tree. That is, for each element in the map, a single new block of raw memory will be allocated via operator new ... malloc with the default allocation scheme of the std::map in the Visual Studio STL implementation.
This appears a bit wasteful to me: Wouldn't it make more sense to allocate the nodes in blocks of "(small) n", just as std::vector implementations over-allocate on growth?
So I'd like the following points clarified:
Is my assertion about the default allocation scheme actually correct?
Do "all" STL implementations of std::map work this way?
Is there anything in the std preventing a std::map implementation from putting its nodes into blocks of memory instead of allocating a new block of memory (via its allocator) for each node (complexity guarantees, etc.)?
Note: This is not about premature optimization. If it's about optimization at all, it's about this: if an app has a problem with (std::)map memory fragmentation, are there alternatives to using a custom allocator that uses a memory pool? This question is not about custom allocators but about how the map implementation uses its allocator. (Or so I hope it is.)
Your assertion is correct for most implementations of std::map.
To my knowledge, there is nothing in the standard preventing a map from using an allocation scheme such as you describe. However, you can get what you describe with a custom allocator — but forcing that scheme on all maps could be wasteful. Because map has no a priori knowledge of how it will be used, certain use patterns could prevent deallocations of mostly-unused blocks. For example, say blocks were allocated for 4 nodes at a time, but a particular map is filled with 40 nodes, then 30 nodes erased, leaving a worst case of one node left per block as map cannot invalidate pointers/references/iterators to that last node.
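If you want to verify the per-node behavior on your own implementation, a minimal counting allocator makes it visible (a sketch; the allocator here is mine, pared down to the C++11 minimum):

    #include <cstddef>
    #include <cstdio>
    #include <map>
    #include <new>
    #include <utility>

    static std::size_t g_allocations = 0;

    template <class T>
    struct CountingAllocator {
        using value_type = T;
        CountingAllocator() = default;
        template <class U> CountingAllocator(const CountingAllocator<U>&) {}

        T* allocate(std::size_t n) {
            ++g_allocations;   // one tick per block requested from the heap
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) { ::operator delete(p); }
    };

    template <class T, class U>
    bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

    int main() {
        std::map<int, int, std::less<int>,
                 CountingAllocator<std::pair<const int, int>>> m;
        for (int i = 0; i < 100; ++i)
            m[i] = i;
        // On typical node-based implementations this prints roughly 100:
        // one heap block per element (plus perhaps a sentinel node).
        std::printf("allocations: %zu\n", g_allocations);
    }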
When you insert elements into a map, it's guaranteed that existing iterators won't be invalidated. Therefore, if you insert an element "B" between two nodes A and C that happen to be contiguous and inside the same heap allocated area, you can't shuffle them to make space, and B will have to be put elsewhere. I don't see any particular problem with that, except that managing such complexities will swell the implementation. If you erase elements then iterators can't be invalidated either, which implies any memory allocation has to hang around until all the nodes therein are erased. You'd probably need a freelist within each "swollen node"/vector/whatever-you-want-to-call-it - effectively duplicating at least some of the time-consuming operations that new/delete currently do for you.
I'm quite certain I've never seen an implementation of std::map that attempted to coalesce multiple nodes into a single allocation block. At least right offhand I can't think of a reason it couldn't work, but I think most implementors would see it as unnecessary, and leave optimization of memory allocation to the allocator instead of worrying about it much in the map itself.
Admittedly, most custom allocators are written to deal better with allocation of a large number of small blocks. You could probably render the vast majority of such optimization unnecessary by writing map (and, of course, set, multiset, and multimap) to just use larger allocations instead. OTOH, given that allocators that optimize small-block allocation are common and widely available, there's probably not a lot of motivation to change the map implementation this way either.
I think the only thing you cannot do is to invalidate iterators, which you might have to do if you have to reallocate your storage. Having said that, I've seen implementations using single sorted array of objects wrapped in the std::map interface. That was done for a certain reason, of course.
Actually, what you can do is just instantiate your std::map with your custom allocator, which will find memory for new nodes in a special, non-wasteful way.
This appears a bit wasteful to me. Wouldn't it make more sense to allocate the nodes in blocks of "(small) n", just as std::vector implementations over-allocate on growth
Interestingly, I see it in a completely different way. I find it appropriate, and it doesn't waste any memory - at least with the default STL allocators on Windows (MS VS 2008), HP-UX (gcc with STLport) and Linux (gcc without STLport). What is important is that these allocators do care about memory fragmentation, and it seems they can handle this problem pretty well. For example, look for the Low-fragmentation Heap on Windows or the SBA (Small block allocator) on HP-UX. I mean that frequently allocating and deallocating memory for only one node at a time doesn't have to result in memory fragmentation. I tested std::map myself in one of my programs, and it indeed didn't cause any memory fragmentation with these allocators.
Is my assertion about the default allocation scheme actually correct?
I have MS VisualStudio 2008 and its std::map behaves in the same way. On HP-UX I use gcc with and without STLport, and it seems that their STL maps have the same approach to allocating memory for nodes in the std::map.
Is there anything in the std preventing a std::map implementation from putting its nodes into blocks of memory instead of allocating a new block of memory (via its allocator) for each node?
Start by tuning the default allocator on your platform if that is possible. It is useful here to quote Douglas Lea, the author of dlmalloc:
... first I wrote a number of special-purpose allocators in C++, normally by overloading operator new for various classes. ...

However, I soon realized that building a special allocator for each new class that tended to be dynamically allocated and heavily used was not a good strategy when building kinds of general-purpose programming support classes I was writing at the time. (From 1986 to 1991, I was the primary author of libg++, the GNU C++ library.) A broader solution was needed -- to write an allocator that was good enough under normal C++ and C loads so that programmers would not be tempted to write special-purpose allocators except under very special conditions.
Or, as a slightly more involved idea, you can even try testing your application with the Hoard allocator. I mean, just test your application and see whether there is any benefit in terms of performance or fragmentation.
It says here that
The unbounded array is similar to a std::vector in that it can grow in size beyond any fixed bound. However unbounded_array is aimed at optimal performance. Therefore unbounded_array does not model a Sequence like std::vector does.
What does this mean?
As a Boost developer myself, I can tell you that it's perfectly fine to question the statements in the documentation ;-)
From reading those docs, and from reading the source code (see storage.hpp), I can say that it's somewhat correct, given some assumptions about the implementation of std::vector at the time that code was written. That code dates initially to 2000, and perhaps as late as 2002, which means that at the time many standard library implementations did not do a good job of optimizing destruction and construction of objects in containers. The claim about the non-resizing is easily refuted by using a vector with an initially large capacity. The claim about speed, I think, comes entirely from the fact that unbounded_array has special code for eliding dtors & ctors when the stored objects have trivial implementations of them; hence it can avoid calling them when it has to rearrange things, or when it's copying elements. Compared to really recent standard library implementations it's not going to be faster, as newer implementations tend to take advantage of things like move semantics to do even more optimizations.
It appears to lack insert and erase methods. As these may be "slow" - i.e., their performance depends on size() in the vector implementation - they were omitted to prevent the programmer from shooting himself in the foot.
insert and erase are required by the standard for a container to be called a Sequence, so unlike vector, unbounded_array is not a sequence.
No efficiency is gained by failing to be a sequence, per se.
However, it is more efficient in its memory allocation scheme, by avoiding a concept of vector::capacity and always having the allocated block exactly the size of the content. This makes the unbounded_array object smaller and makes the block on the heap exactly as big as it needs to be.
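To see the size/capacity slack that this exact-size policy avoids, you can watch a std::vector's capacity jump geometrically as it grows (a small illustrative program):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        std::size_t last = 0;
        for (int i = 0; i < 1000; ++i) {
            v.push_back(i);
            if (v.capacity() != last) {   // capacity grows in jumps, not per element
                std::printf("size %zu -> capacity %zu\n", v.size(), v.capacity());
                last = v.capacity();
            }
        }
    }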
As I understood it from the linked documentation, it is all about allocation strategy. std::vector, AFAIK, postpones allocation until necessary and then might allocate some reasonable chunk of memory, whereas unbounded_array seems to allocate more memory early and therefore might allocate less often. But this is only a guess from the statement in the documentation that it allocates more memory than might be needed and that the allocation is more expensive.