Assume that
vector<vector<shared_ptr<Base>>> vec;
vec.reserve(100);
vec[0].reserve(20); // Error: vector subscript out of range
I am trying to reserve memory for both outer vector and inner vector.
I know that vec is empty, so I cannot reserve memory for the inner vectors; I could only resize() or shrink_to_fit() afterwards. However, resize() and shrink_to_fit() are not useful here because they do not do what I want.
My intention in reserving memory for the inner vectors is to lay out the memory well so that searching the inner elements later is fast. I am wondering whether, without reserving, the repeated allocations will be expensive and the memory fragmented.
I would like to ask:
Is there any way to reserve memory for the inner vectors?
Is my concern correct that memory will be allocated badly if I do not reserve it for the vectors?
Sorry for my poor English; I am using VC++ 2010.
You can't reserve memory for both inner and outer vectors... the inner vectors don't get constructed if you've only reserved space in the outer vector. You can resize the outer vector then do a reserve for each element thereof, or you can do the reserving on the inner vectors as they're added.
If you're sure you need to do this at all, I would probably resize the outer vector, then reserve space in each inner vector.
If 100 elements is even close to accurate, the space for your outer vector is almost irrelevant anyway (typically going to be something like 1200 bytes on a 32-bit system or 2400 bytes on a 64-bit system).
That may be a little less convenient (may force you to track how many items are created vs. really in use) but if you want to reserve space in your inner vectors, you don't really have a lot of choices.
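A minimal sketch of the resize-then-reserve approach (assuming the 100/20 counts from the question and a polymorphic Base type):
#include <cstddef>
#include <memory>
#include <vector>

struct Base { virtual ~Base() {} };

int main() {
    std::vector<std::vector<std::shared_ptr<Base>>> vec;
    vec.resize(100);                 // constructs 100 empty inner vectors
    for (std::size_t i = 0; i < vec.size(); ++i)
        vec[i].reserve(20);          // the inner vectors now exist, so reserve is legal
    return 0;
}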
I'd start with how you're going to interface with the final container and what you know about its content in advance. Once you have settled on a convenient interface, you can implement the code behind it. For example, you could make sure that every new inner vector gets created with a capacity of 100 elements. Or, you could use a map from an x/y pair to a shared pointer, which can make sense in a sparsely populated container. Or how about allocating the 100x100 elements statically and just not reallocating at all? The important point is that all these alternatives can be implemented without changing the interface to the final container, so this gives you the freedom to experiment with different approaches.
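For illustration only, a sketch of the map alternative for a sparsely populated grid (the x/y pair key and the Base type here are assumptions):
#include <map>
#include <memory>
#include <utility>

struct Base { virtual ~Base() {} };

// Only cells that actually hold an object consume memory.
typedef std::map<std::pair<int, int>, std::shared_ptr<Base> > Grid;

int main() {
    Grid grid;
    grid[std::make_pair(3, 7)] = std::shared_ptr<Base>(new Base());
    std::shared_ptr<Base> p = grid[std::make_pair(3, 7)];   // empty shared_ptr if the cell was never set
    return 0;
}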
BTW: Check out make_shared, which avoids the separate allocation overhead of shared_ptr, I believe. Alternatively, Boost also has an intrusive_ptr which uses an internal reference counter. These intrusive_ptr instances are also only half the size of a shared_ptr. However, you need benchmarks to actually prove which way is fastest. Anything else is just more or less vague speculation and guesswork.
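For example, a minimal sketch (Derived is a hypothetical subclass used only for illustration):
#include <memory>

struct Base { virtual ~Base() {} };
struct Derived : Base {};

int main() {
    std::shared_ptr<Base> a(new Derived());                  // two allocations: object + control block
    std::shared_ptr<Base> b = std::make_shared<Derived>();   // one allocation holds both
    return 0;
}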
Related
Remove first N elements from a std::vector
This question talks about removing items from a vector and 'compacting memory'. What is 'compacting memory' and why is it important here?
Inside the implementation of the std::vector class is some code that dynamically allocates an array of data-elements. Often not all of the elements in this internal array will be in use -- the array is often allocated to be bigger than what is currently needed, in order to avoid having to reallocate a bigger array too often (array-reallocations are expensive!).
Similarly, when items are removed from the std::vector, the internal data-array is not immediately reallocated to be smaller (because doing so would be expensive); rather, the now-extra slots in the array are left "empty" in the expectation that the calling code might want to re-use them in the near future.
However, those empty slots still take up RAM, so if the calling code has just removed a lot of items from the vector, it might want to force the vector to reallocate a smaller internal array that doesn't contain so many empty slots. That is what they are referring to as compacting in that question.
The OP is talking about shrinking the memory the vector takes up. When you erase elements from a vector, its size decreases but its capacity (the memory it is using) remains the same. When the OP says
(that also compacts memory)
They want the removal of the elements to also shrink the capacity of the vector so it reduces its memory consumption.
It means that the vector shouldn't use more memory than it needs to. In other words, the OP wants:
size() == capacity()
This can be achieved in C++11 and later by calling shrink_to_fit() on a vector. This is only a request, though; it is not binding.
To make sure it actually compacts the memory, you can create a new vector, reserve(oldvector.size()), copy the elements over, and then swap it with the original.
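A sketch of one way to do this (shrink_to_fit needs C++11; the copy-and-swap line works in C++03 too):
#include <cstddef>
#include <vector>

// Remove the first n elements, then compact so capacity() shrinks back towards size().
void remove_front_and_compact(std::vector<int>& v, std::size_t n) {
    v.erase(v.begin(), v.begin() + n);   // size() drops, capacity() stays the same
    std::vector<int>(v).swap(v);         // copy-and-swap releases the unused capacity
    // or, in C++11 and later: v.shrink_to_fit();   // non-binding request
}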
A TBB concurrent_vector can be dynamically resized using grow_by and grow_to_at_least, and an STL vector also has a resize function. So what is the difference?
The differences I came across are:
1. A concurrent_vector never moves an element until the array is cleared, which can be an advantage over the STL std::vector (which can move elements to resize the vector), even for single-threaded code.
2. Use concurrent_vector only if you really need to dynamically resize it while other accesses are (or might be) in flight, or if you require that an element never move.
Can anyone please explain these points? I am confused by them.
I take this to mean that once memory is allocated in a concurrent_vector it stays in use, as opposed to a std::vector, which allocates a larger block (often twice the size) when it runs out of capacity and moves the stored objects to the newly allocated block.
concurrent_vector, I assume, adds new blocks of memory but keeps using the old ones.
Not moving objects is important as it allows other threads to keep accessing the vector even as it is being re-sized. It probably also helps with other optimizations (such as keeping cached copies valid.)
The downside is that access to the elements is slightly slower, as the correct block needs to be found first (one extra dereference).
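A small sketch of the difference (hedged: exactly when std::vector reallocates is implementation-dependent; the TBB header path is the usual one):
#include <vector>
#include <tbb/concurrent_vector.h>

int main() {
    std::vector<int> sv;
    sv.push_back(1);
    int* p = &sv[0];
    sv.resize(100000);              // may reallocate and move elements; p can dangle

    tbb::concurrent_vector<int> cv;
    cv.push_back(1);
    int* q = &cv[0];
    cv.grow_to_at_least(100000);    // appends new blocks; existing elements never move, so q stays valid
    (void)p; (void)q;
    return 0;
}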
Here's an explanation of std::vector memory allocation: How is dynamic memory managed in std::vector?
The standard STL vector container has a "reserve" function to reserve uninitialized memory that can be used later to prevent reallocations.
How come the deque container doesn't have one?
Increasing the size of a std::vector can be costly. When a vector outgrows its reserved space, the entire contents of the vector must be copied (or moved) to a larger reserve.
It is specifically because std::vector resizing can be costly that vector::reserve() exists. reserve() can prepare a std::vector to anticipate reaching a certain size without exceeding its capacity.
Conversely, a deque can always add more memory without needing to relocate the existing elements. If a std::deque could reserve() memory, there would be little to no noticeable benefit.
For vector and string, reserved space prevents later insertions at the end (up to the capacity) from invalidating iterators and references to earlier elements, by ensuring that elements don't need to be copied/moved. This relocation may also be costly.
With deque and list, earlier references are never invalidated by insertions at the end, and elements aren't moved, so the need to reserve capacity does not arise.
You might think that with vector and string, reserving space also guarantees that later insertions will not throw an exception (unless a constructor throws), since there's no need to allocate memory. You might think that the same guarantee would be useful for other sequences, and hence deque::reserve would have a possible use. There is in fact no such guarantee for vector and string, although in most (all?) implementations it's true. So this is not the intended purpose of reserve.
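To illustrate the reference-stability point, a hedged sketch (the element counts are arbitrary):
#include <deque>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000);                // one allocation up front
    v.push_back(42);
    int* pv = &v[0];
    for (int i = 1; i < 1000; ++i)
        v.push_back(i);             // stays within the reserved capacity, so pv remains valid

    std::deque<int> d;
    d.push_back(42);
    int* pd = &d[0];
    for (int i = 1; i < 1000; ++i)
        d.push_back(i);             // deque never moves existing elements, so pd remains valid without any reserve

    (void)pv; (void)pd;
    return 0;
}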
Quoting from C++ Reference
As opposed to std::vector, the elements of a deque are not stored contiguously: typical implementations use a sequence of individually allocated fixed-size arrays.
The storage of a deque is automatically expanded and contracted as needed. Expansion of a deque is cheaper than the expansion of a std::vector because it does not involve copying of the existing elements to a new memory location.
Deque can allocate new memory anywhere it wants and just point to it, unlike vectors which require a continuous block of memory to hold all their elements.
Only vector has it. There is no need for a reserve function in deque, since its elements are not stored contiguously and there is no need to reallocate and move elements when adding or removing elements.
reserve implies allocation of large blocks of contiguous data (like a vector). There is nothing in the deque that implies contiguous storage - it's generally implemented more like a list (which you will notice also doesn't have a 'reserve' function).
Thus, a 'reserve' function would make no sense.
There are two main kinds of containers: those that allocate a single contiguous chunk, like arrays and vectors, and distributed ones whose members can occupy any free location. Deque and linked-list structures belong to the second kind, and they have the practical advantage that deleting a particular element does not cause a mass memory movement, unlike arrays and vectors. Therefore they do not need to reserve any space beforehand; if they need more, they just take it by linking it onto the end.
If you aim for having aligned memory containers you could think about implementing something like this:
std::deque<std::vector<size_t> > dv;  // deque of dynamically sized, contiguously allocated vectors
const size_t N = 16;                  // some compile-time block size
struct Mem { size_t data[N]; };       // wrapping the fixed-size array in a struct keeps it copyable
std::deque<Mem> dvf;                  // deque of fixed-size, contiguously allocated blocks. Here you can store raw bytes, adding a header to loop through and cast using the header information and typeid...
// Templates and polymorphism can help when storing raw bytes, e.g. checking the type a pointer refers to and creating an interface to access the partially aligned memory.
Alternatively you can use a map to access the vectors instead of a deque...
I have a very large multidimensional vector that changes in size all the time.
Is there any point in using the vector.reserve() function when I only know a good approximation of the sizes?
So basically I have a vector
A[256*256][x][y]
where x goes from 0 to 50 for every iteration in the program and then back to 0 again. The y values can differ every time, which means that for each of the
[256*256][x] elements the vector of y values can have a different size, but still smaller than 256.
So to clarify my problem this is what I have:
vector<vector<vector<int>>> A;
for (int i = 0; i < 256*256; i++) {
    A.push_back(vector<vector<int>>());
    A[i].push_back(vector<int>());
    A[i][0].push_back(SOME_VALUE);
}
Add elements to the vector...
A.clear();
And after this I do the same thing again from the top.
When and how should I reserve space for the vectors?
If I have understood this correctly, I would save a lot of time by using reserve, since I change the sizes all the time?
What would be the negative/positive sides of reserving the maximum size my vector can have which would be [256*256][50][256] in some cases.
BTW. I am aware of different Matrix Templates and Boost, but have decided to go with vectors on this one...
EDIT:
I was also wondering how to use the reserve function in multidimensional arrays.
If I only reserve the vector in two dimensions will it then copy the whole thing if I exceed its capacity in the third dimension?
To help with discussion you can consider the following typedefs:
typedef std::vector<int> int_t; // internal vector
typedef std::vector<int_t> mid_t; // intermediate
typedef std::vector<mid_t> ext_t; // external
The cost of growing (increasing the capacity of) an int_t affects only the contents of that particular vector and no other element. The cost of growing a mid_t requires copying all the elements stored in that vector, that is, all of its int_t vectors, which is considerably more costly. The cost of growing ext_t is huge: it requires copying all the elements already stored in the container.
Now, to increase performance, it is most important to get the ext_t size correct (it seems fixed at 256*256 in your question), and then to get the intermediate mid_t sizes correct so that expensive reallocations are rare.
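A sketch using the typedefs above (the 256*256 outer size is from the question; 50 is just the approximate x-range mentioned there):
ext_t A;
A.reserve(256 * 256);               // outer size is known, so ext_t never regrows
for (std::size_t i = 0; i < 256 * 256; ++i) {
    A.push_back(mid_t());
    A.back().reserve(50);           // guessed mid_t capacity; reallocations become rare
}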
The amount of memory you are talking about is huge, so you might want to consider less standard ways to solve your problem. The first thing that comes to mind is adding an extra level of indirection: if instead of holding the actual vectors you hold smart pointers to the vectors, you can reduce the cost of growing the mid_t and ext_t vectors (if the ext_t size is fixed, just use a vector of mid_t). Now, this will imply that code that uses your data structure will be more complex (or better, add a wrapper that takes care of the indirection). Each int_t vector will be allocated once in memory and will never move in either mid_t or ext_t reallocations. The cost of reallocating a mid_t is proportional to the number of allocated int_t vectors, not the actual number of inserted integers.
using std::tr1::shared_ptr; // or boost::shared_ptr
typedef std::vector<int> int_t;
typedef std::vector< shared_ptr<int_t> > mid_t;
typedef std::vector< shared_ptr<mid_t> > ext_t;
Another thing that you should take into account is that std::vector::clear() does not free the allocated internal space in the vector, only destroys the contained objects and sets the size to 0. That is, calling clear() will never release memory. The pattern for actually releasing the allocated memory in a vector is:
typedef std::vector<...> myvector_type;
myvector_type myvector;
...
myvector_type().swap( myvector ); // swap with a default-constructed temporary
Whenever you push a vector into another vector, set the size in the pushed vector's constructor:
A.push_back(vector<vector<int> >( somesize ));
You have a working implementation but are concerned about the performance. If your profiling shows it to be a bottleneck, you can consider using a naked C-style array of integers rather than the vector of vectors of vectors.
See how-do-i-work-with-dynamic-multi-dimensional-arrays-in-c for an example
You can re-use the same allocation each time, reallocing as necessary and eventually keeping it at the high-tide mark of usage.
If indeed the vectors are the bottleneck, performance beyond avoiding the sizing operations on the vectors each loop iteration will likely become dominated by your access pattern into the array. Try to access the highest orders sequentially.
If you know the size of a vector at construction time, pass the size to the c'tor and assign using operator[] instead of push_back. If you're not totally sure about the final size, make a guess (maybe add a little bit more) and use reserve to have the vector reserve enough memory upfront.
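For example (the sizes here are arbitrary):
#include <cstddef>
#include <vector>

int main() {
    // Size known exactly: construct at the final size and assign via operator[].
    std::vector<int> exact(50);
    for (std::size_t i = 0; i < exact.size(); ++i)
        exact[i] = static_cast<int>(i);

    // Size only roughly known: reserve a guess, then push_back.
    std::vector<int> approx;
    approx.reserve(60);             // a little more than the expected ~50
    for (int i = 0; i < 55; ++i)
        approx.push_back(i);
    return 0;
}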
What would be the negative/positive sides of reserving the maximum size my vector can have which would be [256*256][50][256] in some cases.
Negative side: potential waste of memory. Positive side: less CPU time, less heap fragmentation. It's a memory/CPU tradeoff; the optimum choice depends on your application. If you're not memory-bound (on most consumer machines there's more than enough RAM), consider reserving upfront.
To decide how much memory to reserve, look at the average memory consumption, not at the peak (reserving 256*256*50*256 is not a good idea unless such dimensions are needed regularly)
I've got a problem with my terrain engine (using DirectX).
I'm using a vector to hold the vertices of a detail block.
When the block increases in detail, so does the vector.
BUT, when the block decreases its detail, the vector doesn't shrink in size.
So, my question: is there a way to shrink the size of a vector?
I did try this:
vertexvector.reserve(16);
If you pop elements from a vector, it does not free memory (because that would invalidate iterators into the container elements). You can copy the vector to a new vector, and then swap that with the original. That will then make it not waste space. The swap has constant time complexity, because a swap must not invalidate iterators to elements of the vectors swapped: it just exchanges the internal buffer pointers.
vector<vertex>(a).swap(a);
It is known as the "Shrink-to-fit" idiom. Incidentally, the next C++ version includes a "shrink_to_fit()" member function for std::vector.
The usual trick is to swap with an empty vector:
vector<vertex>(vertexvector.begin(), vertexvector.end()).swap(vertexvector);
The reserved memory is not reduced when the vector size is reduced because it is generally better for performance. Shrinking the amount of memory reserved by the vector is as expensive as increasing the size of the vector beyond the reserved size, in that it requires:
Ask the allocator for a new, smaller memory location,
Copy the contents from the old location, and
Tell the allocator to free the old memory location.
In some cases, the allocator can resize an allocation in-place, but it's by no means guaranteed.
If you have had a very large change in the size required, and you know that you won't want that vector to expand again (the principle of locality suggests you will, but of course there are exceptions), then you can use litb's suggested swap operation to explicitly shrink your vector:
vector<vertex>(a).swap(a);
There is a member function for this, shrink_to_fit. It's more efficient than most other methods since it will only allocate new memory and copy if there is a need. The details are discussed here:
Is shrink_to_fit the proper way of reducing the capacity a `std::vector` to its size?
If you don't mind the libc allocation functions, realloc is even more efficient: it won't copy the data on a shrink, it just marks the extra memory as free, and if you grow the allocation and there is free memory after it, it will mark the needed memory as used without copying either. Be careful though: you are moving out of the C++ STL containers into C void pointers, and you need to understand how pointers and memory management work. It is frowned upon by many nowadays as a source of bugs and memory leaks.
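A hedged sketch of the realloc approach, assuming you manage a plain array of POD vertices yourself rather than a std::vector (the Vertex type and sizes are made up):
#include <cstdlib>

struct Vertex { float x, y, z; };   // plain-old-data, so realloc is safe for it

int main() {
    std::size_t count = 100000;
    Vertex* verts = static_cast<Vertex*>(std::malloc(count * sizeof(Vertex)));
    if (!verts) return 1;

    // ... fill and use verts ...

    // Shrink: most allocators just hand back the tail of the block without
    // copying the surviving elements (this behaviour is not guaranteed).
    count = 1000;
    Vertex* smaller = static_cast<Vertex*>(std::realloc(verts, count * sizeof(Vertex)));
    if (smaller) verts = smaller;

    std::free(verts);
    return 0;
}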