I wrote a program a few months ago using std::vector. I used the clear() member function to "reset" the vectors, assuming that it would not only destroy the elements and reset the size data member, but that it would also give the memory back to the heap. Well, I stumbled onto a post about vectors saying that this is not the correct way to get memory back from the vector, as clear() will not do it, but that one needed to use the swap method:
vector<MyClass>().swap(myVector);
I'm curious as to why we have to call swap to free the old memory. I assume this is more of a workaround, in that we are using swap but something else is actually happening. Is a destructor being called at all?
One last question: all of the articles that I've now read say that clear() doesn't deallocate memory, but that the objects are "destroyed." Can anyone clarify what is meant by that? I'm unfamiliar with the vernacular. I assumed that if an object was destroyed, it was cleared out and its memory given back to the heap, but apparently this is wrong. So does the word "destroy" refer to just wiping the bits associated with each element? I'm not sure. Any help would be greatly appreciated. Thanks.
To answer the question, you need to separate the memory directly allocated by the vector from memory indirectly "owned" through the member objects. So for example, say MyClass is an object taking 1000 bytes, and then you work with a std::vector<std::unique_ptr<MyClass>>. Then if that vector has 10 elements, the directly allocated memory will typically be close to 10*8=80 bytes on a 64-bit system, whereas the unique_ptr objects indirectly own 10*1000=10000 bytes.
Now, if you call clear() on the vector, the destructor is called on each unique_ptr element, so the 10000 indirectly-owned bytes are freed. However, the underlying array storage isn't freed, so the 80+ bytes directly owned by the vector are still allocated. (At this point, the vector has a capacity() of at least 10, but a size() of 0.) If you subsequently call the vector's destructor, that will cause the storage array to be freed as well.
Now, if you execute
std::vector<std::unique_ptr<MyClass>>().swap(v);
let's break down what that does: first, a temporary vector object is created, which has no allocated array and a capacity() of 0. Now, the swap transfers the underlying array of v to the temporary vector, and swaps the null or empty array from the temporary vector into v. Finally, at the end of the expression, the temporary object goes out of scope so its destructor is called. That causes the destructors of any elements previously belonging to v to be called, followed by freeing the underlying array storage that previously belonged to v. So at the end of this, v.capacity() is 0, and all memory previously belonging to v is freed, whether it was directly allocated by v or indirectly belonged to it through the stored unique_ptr objects.
A vector has an associated quantity called capacity which means that it has allocated enough memory for that many elements, even if it does not actually contain that many elements at the moment.
If elements are added to or removed from the vector without exceeding the capacity, then no memory is allocated or freed; the individual elements' constructors and destructors are run on the space that's already allocated.
The clear() function doesn't change the capacity. However, the usual implementation of vector::swap() also swaps the capacities of the vectors; so swapping with an empty, default-constructed vector leaves the original vector with that temporary's capacity, which will be small or even zero (implementation-dependent), and therefore the memory should be released.
Since C++11 there is a formal way to reduce capacity, called shrink_to_fit().
Note that the C++ Standard does not actually require that memory be released to the OS after reducing the capacity; it would be up to a combination of the author of the library implementation you use, and the operating system.
Related
If a std::vector vec is cleared with vec.clear(), the allocated memory need not be deallocated immediately. The size of the vector will be zero, but the capacity can remain unchanged.
This is very beneficial behaviour, since one can clear a large vector and assign new values to it without an expensive memory deallocation and reallocation. It also reduces memory fragmentation.
One can enforce that with vec.shrink_to_fit().
std::map has a clear function, but no shrink_to_fit. What happens to the needed memory to store the map after a clear?
cppreference.com states that map.clear() erases all elements from the container. After this call, size() returns zero.
One can enforce that with vec.shrink_to_fit().
Actually, shrink_to_fit doesn't enforce deallocation of memory. It simply allows it. The implementation is allowed to not deallocate.
If a std::map is cleared, is it ensured that the memory is deallocated?
No. The only case where standard containers are guaranteed to deallocate their memory is when they are destroyed.
Map doesn't have the concept of capacity that vector has, so it doesn't need shrink_to_fit. A map after clear() is in the same situation as a vector after clear() + shrink_to_fit(): it doesn't need to have any memory allocated... but it is not prohibited from keeping it allocated either.
Let's say I have declared a variable
vector<int>* interList = new vector<int>();
interList->push_back(1);
interList->push_back(2);
interList->push_back(3);
interList->push_back(4);
First question: when I push_back an int, will memory be consumed?
Second question: if I delete interList, will the memory consumed by 1, 2, 3, 4 be released automatically?
EDIT: free --> delete
Yes, though the vector class will usually allocate a larger space than needed in case you want to store more data in it later, so it won't necessarily allocate new space on each push_back().
Yes, but you should use delete interList; instead of free().
std::vector allocates a contiguous block of memory up front for some number of elements. So as long as a new element fits in the reserved block, it is inserted there and no new allocation happens.
If you insert an element beyond the allocated block (the capacity of the vector), then it allocates a bigger block, copies all the previous elements into it, and destroys the old block. So the vector manages memory by itself; not every insertion causes a reallocation of the internal buffer.
Second question - yes, vector will clean up all the memory if you delete vector itself.
delete interList;
push_back copies the element into an array the vector allocates on the heap. The capacity of the vector can be greater than the number of elements it currently holds. Every time a push_back happens, the vector checks whether there is enough space; if there isn't, it moves all the elements to a bigger block and then appends the new element. The vector always keeps its elements in one contiguous memory block, which is why it must move everything once the block is no longer large enough. To avoid this frequent moving, the vector usually allocates a bigger block than strictly needed.
delete interList would destroy the vector and the integers held by the vector. Here the vector is on the heap, and the integers are also on the heap, inside the vector's internal array. Actually, it is better to create the vector on the stack or as a member of another object, like vector<int> interList;. The vector, though on the stack, still stores its int elements on the heap as an array. And since the ints are stored by value, once the vector goes out of scope the memory for the ints is reclaimed.
That works because the vector holds value types: they are copied into the heap array the vector manages, and their lifetime is tied to the vector's lifetime. If you have a vector of pointers, then you do have to worry. With vector<T*> list; list.push_back(new T()); the list stores pointers to objects of type T, and when you destroy such a vector the T objects are not deleted. This is the same situation as a class holding a raw T* member. You have to loop through all the elements and call delete on each pointer, or use a vector of smart pointers; a vector of shared_ptr or unique_ptr is recommended.
You are better off not directly allocating the vector if you can help it. So your code would look like this:
vector<int> interList;
interList.push_back(1);
interList.push_back(2);
interList.push_back(3);
interList.push_back(4);
Now when interList goes out of scope, all memory is freed. In fact this is the basis of all resource management in C++, somewhat prosaically called RAII (resource acquisition is initialization).
Now if you felt that you absolutely had to allocate your vector you should use one of the resource management smart pointers. In this case I'm using shared_ptr
auto interList = std::make_shared<vector<int>>();
interList->push_back(1);
interList->push_back(2);
interList->push_back(3);
interList->push_back(4);
This will now also free all memory, and you never need to call delete. What's more, you can pass your interList around and it will be reference-counted for you. When the last reference is lost, the vector will be freed.
Suppose VectorA and VectorB are two std::vector<SameType>, both initialized (I mean VectorA.size() > 0 and VectorB.size() > 0).
If I do:
VectorA = VectorB;
is the memory previously allocated for VectorA automatically freed?
It is freed in the sense that the destructors of all contained objects are called, and the vector no longer owns the memory.1
But really, it's just returned to the allocator, which may or may not actually return it to the OS.
So long as there isn't a bug in the allocator being used, you haven't created a memory leak, if that's what your concern is.
1. As #David points out in the comment below, the memory isn't necessarily deallocated, depending on whether the size needs to change or not.
In general, not necessarily. When you assign one vector to the other, the post condition is that both arrays will contain equivalent objects at the end of the operation.
If the capacity of the destination vector is enough, the operation can be achieved by calling the assignment operator on the set of min( v1.size(), v2.size() ) elements, and then either destructing the rest of the elements if the destination vector held more elements, or else copy-constructing the extra elements at the end. In this case no memory release or allocation will be done.
If the destination vector does not have enough capacity, then it will create a new buffer with enough capacity and copy-construct the elements in the new buffer from the source vector. It will then swap the old and new buffers, destroy all old objects and release the old buffer. In this case, the old objects are destroyed and the old memory released, but this is just one case.
My question is regarding the effect of vector::push_back, I know it adds an element in the end of the vector but what happens underneath the hood?
IIRC memory objects are allocated in a sequential manner, so my question is whether vector::push_back simply allocates more memory immediately after the vector, and if so what happens if there is not enough free memory in that location? Or perhaps a pointer is added in the "end" to cause the vector to "hop" to the location it continues? Or is it simply reallocated through copying it to another location that has enough space and the old copy gets discarded? Or maybe something else?
If there is enough space already allocated, the object is copy-constructed from the argument in place. When there is not enough memory, the vector grows its internal data buffer following some kind of geometric progression (each time the new size will be k*old_size with k > 1 [1]), and all objects present in the original buffer are then moved to the new buffer. After the operation completes, the old buffer is released to the system.
In the previous sentence move is not used in the technical move-constructor/ move-assignment sense, they could be moved or copied or any equivalent operation.
[1] Growing by a factor k > 1 ensures that the amortized cost of push_back is constant. The actual constant varies from one implementation to another (Dinkumware uses 1.5, gcc uses 2). Amortized cost means that even if every so often one push_back is highly expensive (O(N) in the size of the vector at the time), those cases happen rarely enough that the cost of all operations over the whole set of insertions is linear in the number of insertions, and thus each insertion averages a constant cost.
When vector is out of space, it will use it's allocator to reserve more space.
It is up to the allocator to decide how this is implemented.
However, the vector decides how much space to reserve: to meet the standard's requirement that push_back take amortized constant time, implementations grow the capacity geometrically (typically by a factor of 1.5 or 2)1, thus preventing horrible performance due to repeated 'small' allocations.
On the physical move/copy of elements:
C++11-conforming implementations will move elements if their type supports move construction and assignment
most implementations I know of (g++ notably) will just use std::copy for POD types; the algorithm specialisation for POD types ensures that this compiles into (essentially) a memcpy operation. This in turn gets compiled in whatever CPU instruction is fastest on your system (e.g. SSE2 instructions)
1 The n3242 standard draft does not actually mandate a specific growth factor; what it requires is amortized constant complexity for push_back, which geometric growth provides.
A vector guarantees that all elements are contiguous in memory.
Internally you can think of it as defined as three pointers (or what act like pointers):
start: Points at the beginning of the allocated block.
final: Points one past the last element in the vector.
If the vector is empty then start == final
capacity: Points one past the end of allocated memory.
If final == capacity there is no room left.
When you push back.
If final is smaller than capacity:
the new element is copied into the location pointed at by final
final is incremented to the next location.
If final is the same as capacity then the vector is full
new memory must be allocated.
The vector will then allocate X*(capacity - start)*sizeof(T) bytes.
where X is usually a value between 1.5 and 2.
It then copies all the values from the old memory buffer to the new memory buffer.
the new value is added to the buffer.
Transfers start/final/capacity pointers.
Frees up the old buffer.
When vector runs out of space, it is reallocated and all the elements are copied over to the new array. The old array is then destroyed.
To avoid an excessive number of allocations and to keep the average push_back() time at O(1), a reallocation requires that the size be increased by at least a constant factor. (1.5 and 2 are common)
When you call vector::push_back the end pointer is compared to the capacity pointer. If there is enough room for the new object placement new is called to construct the object in the available space and the end pointer is incremented.
If there isn't enough room, the vector calls its allocator to allocate enough contiguous space for at least the existing elements plus the new element (different implementations may grow the allocated memory by different multipliers). Then all existing elements plus the new one are copied to the newly allocated space.
std::vector overallocates: it will usually allocate more memory than is immediately necessary. size is not affected by this, but you can observe the allocation through capacity.
std::vector will copy everything if the additional capacity is not sufficient.
The memory allocated by std::vector is raw; constructors are called on demand, using placement new.
So, push_back does:
if capacity is not sufficient for the new element, it will
allocate a new block
copy all existing elements (usually using the copy constructor)
copy the new element into place at the end
increase size by one
If you have some idea of the final size of your array, try to vector::reserve the memory first. Note that reserve is different from vector::resize: with reserve, the vector::size() of your array is not changed.
Since container data types have dynamic size I'm assuming they allocate memory on the heap. But when/how do they free this allocated memory?
They get freed either when they go out of scope (if the container was created on the stack), or when you explicitly call delete on the container (in the case of a heap-based container). When this happens, the destructor of the container automatically gets called, and the heap memory allocated for the container's data is freed.
Simply removing an element from the container won't necessarily free its memory right away, since STL containers generally hold on to allocated storage to speed up later insertions. Remember, allocation and deallocation are relatively costly operations.
They free the memory in their destructors when they are destroyed. (And they are destroyed by having delete or delete [] called if the container itself is heap allocated, or by going out of scope if it is stack allocated)
Short answer: not necessarily when you remove elements.
Removing elements runs their destructors, but what happens to the container's own storage is an implementation detail. A vector, for example, keeps its internal array when you erase or clear elements: size() shrinks, capacity() does not. The buffer is released only when you request it (with shrink_to_fit(), or by swapping with an empty vector) or when the vector itself is destructed. Node-based containers such as map and list return each erased node's storage to the allocator, which may or may not hand it back to the OS right away.
In all cases, the memory directly owned by the container is guaranteed to be released when the container object is destructed.