std::vector<Foo> vec;
Foo foo(...);
assert(vec.size() == 0);
vec.reserve(100); // I've reserved 100 elems
vec[50] = foo; // but I haven't initialized any of them
// so am I assigning into uninitialized memory?
Is the above code safe?
It's not valid. The vector has no elements, so there is nothing to access. You have only reserved space for 100 elements, which guarantees that no reallocation happens until more than 100 elements have been inserted.
The fact is that you cannot grow the vector's size without also initializing the new elements (at the very least they are value-initialized).
You should use vec.resize(100) if you want to index-in right away.
vec[50] is only safe if 50 < vec.size(). reserve() doesn't change the size of a vector, but resize() does and constructs the contained type.
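For illustration, a minimal sketch of the difference (Foo here stands in for the asker's type and is assumed to be default-constructible):
#include <cassert>
#include <vector>
struct Foo { int x = 0; };
int main() {
    std::vector<Foo> vec;
    vec.reserve(100);    // capacity() >= 100, but size() is still 0
    assert(vec.size() == 0);
    // vec[50] = Foo{}; // undefined behavior: no element exists at index 50 yet
    vec.resize(100);     // size() == 100, all elements value-initialized
    vec[50] = Foo{};     // fine now
    assert(vec.size() == 100);
}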
It won't work. While the container has space reserved for 100 elements, it still contains 0 elements.
You need to insert elements in order to access that portion of memory. Like Jon-Eric said, resize() is the way to go.
std::vector::reserve(100) claims 100*sizeof(Foo) bytes of free memory, so further insertions into the vector will not trigger a memory allocation until more than 100 elements are stored. But accessing an element in that reserved region gives you indeterminate content, because the memory has only been allocated; no objects have been constructed in it.
Before you can use operator[] to access the element at index 50, you should either call resize(), push_back() at least 51 elements, or use an algorithm such as std::fill_n with std::back_inserter.
Related
I've read that the best way to free the memory of a vector is:
vector<int>().swap(my_vector);
And I don't really understand what is happening. The swap function takes two vectors and swaps their contents, so for instance:
vector<int>v1{1,2,3};
vector<int>v2{4,5,6};
v1.swap(v2);
v1 becomes {4,5,6} and v2 becomes {1,2,3}. This looks normal. But how does my first example free the memory? What happens in memory? If my_vector swaps contents with vector<int>() (an empty vector), doesn't the empty vector get my_vector's elements, while my_vector becomes empty?
You're not only swapping with an empty vector, you're swapping with a temporary empty vector. So the memory of the two vectors is swapped, and then the destructor of the temporary vector frees the memory that originally belonged to my_vector.
Note that the standard effectively guarantees that in this case swap really exchanges the allocated memory: otherwise swap could throw an exception, which it is forbidden to do here. Also note that the default constructor of vector is exception-free, so it effectively cannot allocate memory. As Aziuth correctly notes in the comments, an implementation could theoretically have a small non-dynamic initial buffer, and that would be transferred in the swap; in practice, this is negligible.
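A small demonstration of the effect, assuming nothing beyond standard vector behavior (the exact capacity values printed are implementation-defined):
#include <iostream>
#include <vector>
int main() {
    std::vector<int> my_vector(1000, 42);
    std::cout << my_vector.capacity() << '\n'; // at least 1000
    std::vector<int>().swap(my_vector);        // swap with a temporary empty vector
    std::cout << my_vector.capacity() << '\n'; // typically 0: the old buffer died with the temporary
}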
I want to grow a ::std::vector at runtime, like this:
::std::vector<int> i;
i.push_back(100);
i.push_back(10);
At some point, the vector is finished, and I do not need the extra functionality ::std::vector provides any more, so I want to convert it to a C array:
int* v = i.data();
Because I will do that more than once, I want to deallocate all the heap memory ::std::vector reserved, but I want to keep the data, like this (pseudo-code):
free(/*everything from ::std::vector but the list*/);
Can anybody give me a few pointers on that?
Thanks in advance,
Jack
In all the implementations I have seen, the allocated part of a vector is... the array itself. If you know that it will no longer grow, you can try to shrink it, to release possibly unused memory:
i.shrink_to_fit();
int* v = i.data();
Of course, nothing guarantees that the implementation will do anything, but the only foolproof alternative would be to allocate a new array, move the data from the vector into it, and clear the vector.
int *v = new int[i.size()];
memcpy(v, i.data(), i.size() * sizeof(int));
i.clear(); // note: clear() alone need not release the vector's capacity
But I really doubt that you will have a major gain that way...
You can use the contents of a std::vector as a C array without the need to copy or release anything. Just make sure that the std::vector outlives the need for your pointer to be valid (and that you don't perform further modifications that could trigger a reallocation while the pointer is in use).
When you obtain a pointer to internal storage through data() you're not actually allocating anything, just referring to the already allocated memory.
The only additional precaution you could use to save memory is to use shrink_to_fit() to release any excess memory used as spare capacity (though it's not guaranteed).
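As a sketch of that idea, here is a hypothetical C-style function (legacy_sum is made up for this example) consuming the vector's buffer directly:
#include <cstddef>
#include <numeric>
#include <vector>
// hypothetical C-style interface expecting a raw pointer and a length
int legacy_sum(const int* data, std::size_t n) {
    return std::accumulate(data, data + n, 0);
}
int main() {
    std::vector<int> i{100, 10};
    i.shrink_to_fit();                        // non-binding request to drop spare capacity
    int sum = legacy_sum(i.data(), i.size()); // no copy, no extra allocation
    (void)sum;
    // data() stays valid only while the vector is alive and is not reallocated
}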
You have two options:
Keep the data in the vector, but call shrink_to_fit(). The only remaining overhead is an extra pointer (to the vector's end). shrink_to_fit() is available since C++11.
Copy data to an external array and destroy the vector object:
Here is an example:
std::vector<int> vec;
// fill vector
std::unique_ptr<int[]> arr(new int[vec.size()]);
std::copy(vec.begin(), vec.end(), arr.get());
I'm using code such as the following:
const int MY_SIZE = 100000;
std::vector<double> v;
v.reserve(MY_SIZE);
// add no more than MY_SIZE elements to the vector
f(v);
v.clear();
// again, add no more than MY_SIZE elements to the vector
f(v);
v.clear();
//
// etc...
//
The point of my code is to store MY_SIZE doubles and then perform an operation f(std::vector<double>) on those elements. After I fill up the vector and perform the operation, I want to get rid of all the elements (and reset std::vector::size() to 0), and then add more elements. But, the key here is that I do not want the space in memory allocated for the vector to change.
Note that I'm never going to add more than MY_SIZE elements to v, so v should never need to reallocate more memory than was allocated by v.reserve(MY_SIZE).
So, when I call v.clear() in the above code, will it affect in any way the amount of space allocated by v.reserve(MY_SIZE) or the location in memory of v.begin()?
Related question: If I call v.erase(v.begin(),v.begin()+v.size()), will it affect in any way the amount of space allocated by v.reserve(MY_SIZE) or the location in memory of v.begin()?
If I really just wanted to erase all the elements, I would call clear(). But I'm wondering about this related question because there are occasions when I need to erase only the first X elements of v, and on these occasions I want to keep the memory allocated by v.reserve(MY_SIZE) and I don't want the location of v to change.
It seems the C++ standard (2003) implicitly guarantees that the memory is not reallocated if you call the clear() or erase() method of the std::vector.
According to the Sequence requirements (Table 67), a.clear() is equivalent to a.erase(begin(), end()).
Furthermore, the standard states that the erase(...) member function of the std::vector<T> does not throw an exception unless one is thrown by the copy constructor of T (section 23.2.4.3). Hence it is implicitly guaranteed, because a reallocation could cause an exception (sections 3.7.3, 20.4.1.1).
Also v.begin() remains the same, as erase(...) will only invalidate all iterators after the point of the erase (section 23.2.4.3). However, it won't be dereferenceable (since v.begin() == v.end()).
So, if you have a standard compliant implementation you are fine...
CORRECTION
My reasoning is flawed. I managed to show that erase(...) does not reallocate, but an implementation could still deallocate the memory if you erase all elements. However, if a.capacity() reports "you can add N elements without reallocating memory" after the erase/clear you are fine.
The C++11 standard defines a.clear() without referring to a.erase(...). a.clear() is not allowed to throw an exception. Hence it could deallocate, but not reallocate. So you should check the capacity after clearing the vector to make sure that the memory is still there and the next resize won't reallocate.
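A quick check along those lines; note that the assertion of an unchanged buffer reflects the behavior of the common implementations (libstdc++, libc++, MSVC), which in practice keep the capacity across clear(), rather than a hard guarantee from the standard:
#include <cstddef>
#include <iostream>
#include <vector>
int main() {
    const std::size_t MY_SIZE = 100000;
    std::vector<double> v;
    v.reserve(MY_SIZE);
    const double* before = v.data();            // data() is valid even while the vector is empty
    for (std::size_t i = 0; i < MY_SIZE; ++i)
        v.push_back(static_cast<double>(i));
    v.clear();                                  // size() back to 0
    std::cout << "capacity after clear: " << v.capacity() << '\n';
    std::cout << "buffer unchanged: " << (v.data() == before) << '\n';
}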
I'm looking for a way that prevents std::vectors/std::strings from growing in a given range of sizes (say I want to assume that a string will hold around 64 characters, but it can grow if needed). What's the best way to achieve this?
Look at the reserve() member function. The SGI STL documentation says:
[4] Reserve() causes a reallocation manually. The main reason for
using reserve() is efficiency: if you know the capacity to which your
vector must eventually grow, then it is usually more efficient to
allocate that memory all at once rather than relying on the automatic
reallocation scheme. The other reason for using reserve() is so that
you can control the invalidation of iterators. [5]
[5] A vector's iterators are invalidated when its memory is
reallocated. Additionally, inserting or deleting an element in the
middle of a vector invalidates all iterators that point to elements
following the insertion or deletion point. It follows that you can
prevent a vector's iterators from being invalidated if you use
reserve() to preallocate as much memory as the vector will ever use,
and if all insertions and deletions are at the vector's end.
That said, as a general rule unless you really know what is going to happen, it may be best to let the STL container deal with the allocation itself.
You reserve space for a vector or string with its reserve(size_type capacity) member function. But it doesn't prevent anything :). You're just telling it to allocate at least that much raw memory up front (that is, no constructors of your type will be called), and it will still grow beyond that if needed.
std::vector<MyClass> v;
v.reserve(100); // no constructor of MyClass is called
for(int i = 0; i < 100; ++i)
{
    v.push_back(MyClass()); // no reallocation will happen: there is enough space in the vector
}
For vector:
std::vector<char> v;
v.reserve(64);
For string:
std::string s;
s.reserve(64);
Where's your C++ Standard Library reference got to?
Both of them have a member function called reserve() which you can use to reserve space.
c.reserve(100); //where c is vector (or string)
How does std::vector manage its changing number of elements: does it use the realloc() function, or a linked list?
Thanks.
It uses the allocator that was given to it as its second template parameter. Roughly like this: say we are in push_back, and let t be the object to be pushed:
...
if(_size == _capacity) { // size is never greater than capacity
    // reallocate
    T * _begin1 = alloc.allocate(_capacity * 2, 0);
    size_type _capacity1 = _capacity * 2;
    // copy construct items (copy over from old location).
    for(size_type i = 0; i < _size; i++)
        alloc.construct(_begin1 + i, *(_begin + i));
    alloc.construct(_begin1 + _size, t);
    // destruct old ones. dtors are not allowed to throw here.
    // if they do, behavior is undefined (17.4.3.6/2)
    for(size_type i = 0; i < _size; i++)
        alloc.destroy(_begin + i);
    alloc.deallocate(_begin, _capacity);
    // set new stuff, after everything worked out nicely
    _begin = _begin1;
    _capacity = _capacity1;
} else { // size less than capacity
    // tell the allocator to construct an object in the
    // memory previously allocated
    alloc.construct(_begin + _size, t);
}
_size++; // now, we have one more item in us
...
Something like that. The allocator takes care of allocating the memory. It keeps the steps of allocating memory and constructing objects in that memory separate, so it can preallocate memory without calling constructors yet. During reallocation, the vector has to guard against exceptions thrown by copy constructors, which complicates matters somewhat. The above is just a pseudo-code snippet, not real code, and probably contains bugs. If the size would exceed the capacity, the vector asks the allocator for a new, larger block of memory; if not, it just constructs the element in the previously allocated space.
The exact semantics of this depend on the allocator. If it is the standard allocator, construct will do
new ((void*)(_begin + n)) T(t); // known as "placement new"
And allocate will just get the memory from ::operator new. destroy would call the destructor:
(_begin + n)->~T();
All that is abstracted away behind the allocator, and the vector just uses it. A stack or pooling allocator could work completely differently. Some key points about vector that are important:
After a call to reserve(N), up to N items can be inserted into the vector without risking a reallocation. As long as no reallocation happens (that is, as long as insertions stay within the reserved capacity), references and iterators to its elements remain valid.
A vector's storage is contiguous. You can treat &v[0] as a buffer containing as many elements as your vector currently holds. Both points are illustrated in the short sketch below.
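A minimal sketch illustrating both points (assumes C++11 for data()):
#include <cassert>
#include <vector>
int main() {
    std::vector<int> v;
    v.reserve(8);              // capacity() >= 8 from here on
    v.push_back(1);
    int* p = &v[0];            // pointer into the contiguous buffer
    for (int i = 2; i <= 8; ++i)
        v.push_back(i);        // stays within the reserved capacity: no reallocation
    assert(p == v.data());     // the buffer has not moved, p is still valid
    assert(p[4] == 5);         // contiguous storage: plain pointer arithmetic works
}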
One of the hard-and-fast rules of vectors is that the data will be stored in one contiguous block of memory.
That way you know you can theoretically do this:
const Widget* pWidgetArrayBegin = &(vecWidget[0]);
You can then pass pWidgetArrayBegin into functions that want an array as a parameter.
The only exception to this is the std::vector<bool> specialisation. It actually isn't bools at all, but that's another story.
So the std::vector will reallocate the memory, and will not use a linked list.
This means you can shoot yourself in the foot by doing this:
Widget* pInteresting = &(vecWidget.back());
vecWidget.push_back(anotherWidget);
For all you know, the push_back call could have caused the vector to shift its contents to an entirely new block of memory, invalidating pInteresting.
The memory managed by std::vector is guaranteed to be contiguous, so you can treat &vec[0] as a pointer to the beginning of a dynamic array.
Given this, how it actually manages its reallocations is implementation-specific.
std::vector stores its data in a contiguous memory block.
Suppose we declare a vector as
std::vector<int> intvect;
Initially, memory for x elements is allocated, where x is implementation-dependent.
If the user inserts more than x elements, a new memory block is allocated (typically twice the size; the growth factor is implementation-dependent) and the existing elements are copied into it.
That is why, when the final size is known in advance, it is recommended to reserve memory for the vector by calling the reserve() function:
intvect.reserve(100);
so as to avoid repeated reallocation and copying of the vector's data.
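To see the reallocation behavior described above, one can print the capacity while pushing elements; a small sketch (the starting capacity and growth factor are implementation-dependent, so the exact numbers will vary):
#include <cstddef>
#include <iostream>
#include <vector>
int main() {
    std::vector<int> intvect;
    std::size_t last_capacity = intvect.capacity();
    for (int i = 0; i < 100; ++i) {
        intvect.push_back(i);
        if (intvect.capacity() != last_capacity) { // a reallocation just happened
            last_capacity = intvect.capacity();
            std::cout << "size " << intvect.size()
                      << " -> capacity " << last_capacity << '\n';
        }
    }
}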