What happens when an object allocated in the heap gets larger? - c++

Say I have a member variable
Result m_result;
where Result is
class Result {
    QList<Group *> m_groups;
};
and Group is
class Group {
    QList<Data> m_data;
};
As the program continues, each group's QList of Data keeps growing. All groups are allocated on the heap (hence the pointer). In essence, each group gets allocated ONCE on the heap. Does a particular group get re-allocated each time its QList grows? Also, does m_result get recopied over and over again because of the growth of the Data members?

Inside QList, there will be pointers to some other objects or arrays which actually hold the data. As the list grows, new objects for this backing data will be allocated and the existing ones deleted. The code you've shown doesn't have to worry about this, but if you implement your own collections then you would. The objects themselves never grow once they are allocated.
The details for QList can be found in the Qt documentation - it uses an array of pointers to <T>, so it isn't a linked list. Because it stores pointers to the elements, it does not have to copy the elements when the array is resized, just the pointers (or possibly not even those, if it's implemented in a similar fashion to a VList - I didn't see anything in the documentation to indicate which strategy it uses), so a particular group will not get re-allocated each time its QList grows.
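To make that strategy concrete, here is a minimal sketch of a pointer-array container in the spirit of what's described above (hypothetical code, not Qt's actual implementation; PointerArray and its members are made-up names):

struct Group;   // only pointers are stored, so a forward declaration suffices

struct PointerArray {
    Group **slots = nullptr;
    int size = 0, capacity = 0;

    void append(Group *g) {
        if (size == capacity) {
            int newCapacity = capacity ? capacity * 2 : 4;
            Group **bigger = new Group*[newCapacity];
            for (int i = 0; i < size; ++i)
                bigger[i] = slots[i];   // copies pointers only, never *slots[i]
            delete[] slots;             // frees the old pointer array...
            slots = bigger;             // ...while every Group stays put
            capacity = newCapacity;
        }
        slots[size++] = g;
    }
};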

I don't know about QList in particular, but, in general, I would expect the following:
Upon adding a new element, the QList allocates new memory for it and links the last element to the new element.
Upon removing an existing element, the element's predecessor is linked to its successor, and the memory holding the element is deleted.
This is the general principle by which any linked list works, such as std::list or std::forward_list.
Objects never grow, but memory can be claimed and released repeatedly.
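For illustration, a minimal singly linked list along the lines this answer describes (a sketch of the general principle, not QList's actual implementation):

struct Node {
    int value;
    Node *next;
};

// Adding allocates one new node and links it in; no existing node moves.
Node *push_front(Node *head, int v) {
    return new Node{v, head};
}

// Removing relinks past the node, then frees only that node's memory.
Node *pop_front(Node *head) {
    Node *rest = head->next;
    delete head;
    return rest;
}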

Related

What does 'compacting memory' mean when removing items from the front of a std::vector?

Remove first N elements from a std::vector
This question talks about removing items from a vector and 'compacting memory'. What is 'compacting memory' and why is it important here?
Inside the implementation of the std::vector class is some code that dynamically allocates an array of data-elements. Often not all of the elements in this internal array will be in use -- the array is often allocated to be bigger than what is currently needed, in order to avoid having to reallocate a bigger array too often (array-reallocations are expensive!).
Similarly, when items are removed from the std::vector, the internal data-array is not immediately reallocated to be smaller (because doing so would be expensive); rather, the now-extra slots in the array are left "empty" in the expectation that the calling code might want to re-use them in the near future.
However, those empty slots still take up RAM, so if the calling code has just removed a lot of items from the vector, it might want to force the vector to reallocate a smaller internal array that doesn't contain so many empty slots. That is what they are referring to as compacting in that question.
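A small demonstration (the exact capacity value is implementation-defined, but erase() never reallocates):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000, 42);
    v.erase(v.begin(), v.begin() + 900);   // remove 900 elements
    std::cout << v.size() << '\n';         // 100
    std::cout << v.capacity() << '\n';     // still 1000: the slots are kept
}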
The OP is talking about shrinking the memory the vector takes up. When you erase elements from a vector its size decreases, but its capacity (the memory it is using) remains the same. When the OP says
(that also compacts memory)
They want the removal of the elements to also shrink the capacity of the vector so it reduces its memory consumption.
It means that the vector shouldn't use more memory than it needs to. In other words, the OP wants:
size() == capacity()
This can be achieved in C++11 and later by calling shrink_to_fit() on a vector. This is only a request, though; it is not binding.
To make sure the memory is compacted, you can instead create a new vector, call reserve(oldvector.size()), copy the elements across, and swap it with the old one.
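A sketch of both approaches; copy-constructing the temporary allocates only size() elements, which has the same effect as reserving the old size and copying:

#include <vector>

void compact_request(std::vector<int> &v) {
    v.shrink_to_fit();             // C++11: a non-binding request
}

void compact_guaranteed(std::vector<int> &v) {
    // The temporary copy allocates a right-sized buffer; swap hands it to v,
    // and the temporary destroys the old oversized buffer at end of statement.
    std::vector<int>(v).swap(v);
}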

Questions About Vectors and Deleting Memory Associated With Them

I wrote a program a few months ago using vectors. I used the clear() member function to "reset" the vectors, assuming that it would not only destroy the elements and reset the size data member, but also give the memory that was previously being used back to the heap. Well, I stumbled onto a post about vectors saying that this is not the correct way to get memory back from a vector - clear() will not do it - and that one needs to use the swap method:
vector<MyClass>().swap(myVector);
I'm curious as to why we have to call swap to delete the old memory? I assume this is more of a workaround, in that we are using swap but something else is happening. Is a destructor being called at all?
One last question: all of the articles that I've now read say that clear() doesn't deallocate the memory and that the objects are "destroyed." Can anyone clarify what is meant by that? I'm unfamiliar with the vernacular. I assumed that if an object was destroyed, it was cleared out and the memory was given back to the heap, but this is wrong, so is the word "destroy" referring to just wiping the bits associated with each element? I'm not sure. Any help would be greatly appreciated. Thanks.
To answer the question, you need to separate the memory directly allocated by the vector from memory indirectly "owned" through the member objects. So for example, say MyClass is an object taking 1000 bytes, and then you work with a std::vector<std::unique_ptr<MyClass>>. Then if that vector has 10 elements, the directly allocated memory will typically be close to 10*8=80 bytes on a 64-bit system, whereas the unique_ptr objects indirectly own 10*1000=10000 bytes.
Now, if you call clear() on the vector, the destructor is called on each unique_ptr element, so the 10000 indirectly-owned bytes are freed. However, the underlying array storage isn't freed, so the 80+ bytes directly owned by the vector are still allocated. (At this point, the vector has a capacity() of at least 10, but a size() of 0.) If you subsequently call the vector's destructor, that will cause the storage array to be freed as well.
Now, if you execute
std::vector<std::unique_ptr<MyClass>>().swap(v);
let's break down what that does: first, a temporary vector object is created, which has no allocated array and a capacity() of 0. Now, the swap transfers the underlying array of v to the temporary vector, and swaps the null or empty array from the temporary vector into v. Finally, at the end of the expression, the temporary object goes out of scope so its destructor is called. That causes the destructors of any elements previously belonging to v to be called, followed by freeing the underlying array storage that previously belonged to v. So at the end of this, v.capacity() is 0, and all memory previously belonging to v is freed, whether it was directly allocated by v or indirectly belonged to it through the stored unique_ptr objects.
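Putting the whole sequence together (the MyClass definition here is just a 1000-byte stand-in, and the capacities in the comments are typical rather than guaranteed):

#include <iostream>
#include <memory>
#include <vector>

struct MyClass { char payload[1000]; };

int main() {
    std::vector<std::unique_ptr<MyClass>> v;
    for (int i = 0; i < 10; ++i)
        v.push_back(std::make_unique<MyClass>());   // C++14

    v.clear();                              // the 10 MyClass objects are freed...
    std::cout << v.capacity() << '\n';      // ...but the array is kept (>= 10)

    std::vector<std::unique_ptr<MyClass>>().swap(v);
    std::cout << v.capacity() << '\n';      // 0: the array is freed too
}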
A vector has an associated quantity called capacity which means that it has allocated enough memory for that many elements, even if it does not actually contain that many elements at the moment.
If elements are added to or removed from the vector without exceeding the capacity, then no memory is allocated or freed; the individual elements' constructors and destructors are run on the space that's already allocated.
The clear() function doesn't change the capacity. However, the usual implementation of vector::swap() also swaps the capacities of the vectors; so swapping with an empty vector will cause the original vector to have the default capacity, which will be small or even zero (implementation-dependent) and therefore memory should be released.
Since C++11 there is a formal way to reduce capacity, called shrink_to_fit().
Note that the C++ Standard does not actually require that memory be released to the OS after reducing the capacity; it would be up to a combination of the author of the library implementation you use, and the operating system.

C++: Vector of pointers vs Fixed-size array performance

Performance-wise, which is faster?
A vector of object pointers allocated by the new operator?
std::vector<Object *> array;
Or an array allocated with new in the constructor?
Object[] objects;
objects = new objects[64];
The idea is that every frame, the program loops through the container, reading and writing values for each element.
Edit:
The second snippet was pulled from an XNA book. I am not using XNA to write my framework, and I'm trying to figure out the best method for using containers in an application that requires speed.
Definitely the second one.
With a vector of pointers, each individual element of that vector can be allocated anywhere on the heap.
With an array of objects, all elements are stored sequentially. This means the processor can cache chunks of memory more effectively as you iterate through the array.
The concept is called cache locality, referring to how well organised your data is with respect to memory access patterns and caching.
As pointed out in the comments, neither of your examples is correct. I assume you meant something like this:
std::vector<Object*> vector_of_pointers(size);
Object *array_of_objects = new Object[size];
However, I fear you may not have phrased your question the way you intended. You're not comparing two similar things. A vector is basically just an array that can grow if necessary. It makes all the same guarantees as an array, and so if it's storing the same data type, you shouldn't notice any difference between the two.
// Bad cache locality:
Object **A = new Object*[size];
std::vector<Object*> B(size);
// Good cache locality:
Object *C = new Object[size];
std::vector<Object> D(size);
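For instance, the per-frame loop from the question touches memory very differently in the two layouts (a sketch; Object and update() are stand-ins for whatever your frame actually does):

#include <vector>

struct Object { float x, y; void update() { x += y; } };

void frame_contiguous(std::vector<Object> &objects) {
    for (Object &o : objects)   // sequential memory: the prefetcher loves this
        o.update();
}

void frame_indirect(std::vector<Object *> &objects) {
    for (Object *o : objects)   // each o can live anywhere on the heap,
        o->update();            // so every element risks a cache miss
}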

c++ memory issue about pointer + non pointer

Let's say I have declared a variable
vector<int>* interList = new vector<int>();
interList->push_back(1);
interList->push_back(2);
interList->push_back(3);
interList->push_back(4);
First question: when I push_back an int, is memory space consumed?
Second question: if I delete interList, will the memory consumed by 1, 2, 3, 4 be released automatically?
EDIT: free --> delete
Yes, though the vector class will typically allocate more space than is immediately needed, in case you want to store more data in it later, so it won't allocate new space on every push_back().
Yes, but you should use delete interList; instead of free().
std::vector allocates a contiguous block of memory up front for some number of elements. So every time you insert a new element it goes into the reserved block; the memory space remains the same and no new allocation happens.
If you insert an element beyond the allocated block (the capacity of the vector), it allocates a bigger block, copies all the previous elements into it and destroys the old block. So the vector manages memory by itself; not every inserted element causes a reallocation of the internal buffer.
Second question - yes, the vector will clean up all the memory if you delete the vector itself.
delete interList;
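You can watch this behaviour by printing capacity() as you push elements; the growth factor is implementation-defined (commonly 1.5x or 2x):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 16; ++i) {
        v.push_back(i);
        // capacity() only jumps when the reserved block is exhausted
        std::cout << "size " << v.size()
                  << " capacity " << v.capacity() << '\n';
    }
}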
push_back copies the element into a heap-allocated array where the vector stores its elements. The capacity of the vector can be greater than the number of elements it actually holds. Every time a push_back happens, the vector checks whether there is enough space; if there isn't, it moves all the elements to a bigger block and then appends the new element. The vector always keeps its elements in one contiguous memory block, so if the block is not large enough to hold all the elements together it must move everything to a larger block. To avoid this frequent moving, the vector usually allocates a bigger block than it immediately needs.
delete interList would destroy the vector and the integers held by the vector. Here the vector is on the heap, and the integers it stores are on the heap as well. Actually it is better to create the vector on the stack, or as a member of another object, like vector<int> interList; The vector, though on the stack, stores its ints in a heap-allocated array. And since the ints are stored as value types, once the vector goes out of scope the memory for the ints is reclaimed.
That is because the vector holds value types: they are copied into the heap array that the vector manages, and their lifetime is tied to the vector's lifetime. If you have a vector of pointers then you do have to worry. With vector<T*> list; list.push_back(new T()); the list stores pointers to objects of type T, and when you destroy such a vector the T objects are not deleted. This is the same as a class holding a raw T* member. You have to loop through all the elements and call delete on the pointers, or use a vector of smart pointers; a vector of shared or unique pointers is recommended.
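A sketch of the two options just mentioned, with T as a placeholder type:

#include <memory>
#include <vector>

struct T {};

void raw_pointers() {
    std::vector<T *> list;
    list.push_back(new T());
    for (T *p : list)   // the vector will not do this for you
        delete p;
}

void smart_pointers() {
    std::vector<std::unique_ptr<T>> list;
    list.push_back(std::make_unique<T>());   // C++14
}   // the unique_ptr destructors delete the T objects automatically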
You are better off not directly allocating the vector if you can help it. So your code would look like this:
vector<int> interList;
interList.push_back(1);
interList.push_back(2);
interList.push_back(3);
interList.push_back(4);
Now when interList goes out of scope all memory is freed. In fact this is the basis of all resource management in C++, somewhat prosaically called RAII (resource acquisition is initialization).
Now if you felt that you absolutely had to allocate your vector on the heap, you should use one of the resource-management smart pointers. In this case I'm using shared_ptr:
auto interList = std::make_shared<vector<int>>();
interList->push_back(1);
interList->push_back(2);
interList->push_back(3);
interList->push_back(4);
This will now also free all memory, and you never need to call delete. What's more, you can pass your interList around and it will be reference counted for you. When the last reference is lost, the vector will be freed.

When/How do container data types (string, vector, etc.) in C++ free their dynamically allocated memory?

Since container data types have dynamic size I'm assuming they allocate memory on the heap. But when/how do they free this allocated memory?
They get freed either when they go out of scope (if the container was created on the stack), or when you explicitly call delete on the container (in the case of a heap-allocated container). When that happens, the destructor of the container automatically gets called, and the heap memory the container allocated for its data is freed.
Simply removing an element from the container won't necessarily free memory right away, since STL containers generally hold on to spare capacity to speed things up. Remember, new/delete operations are relatively costly.
They free the memory in their destructors when they are destroyed. (And they are destroyed by having delete or delete [] called if the container itself is heap allocated, or by going out of scope if it is stack allocated)
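For example, with a stack-allocated container the release point is simply the end of the enclosing scope:

#include <string>
#include <vector>

void f() {
    std::vector<std::string> names;
    names.push_back("a string long enough to force a heap allocation");
}   // names' destructor runs here: each string is destroyed first,
    // then the vector's own array is returned to the heap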
Short answer: partly when you remove elements, and fully when the container itself is destroyed.
Removing an element destroys it (its destructor runs), which releases whatever memory that element itself owned. For node-based containers such as std::list or std::map, erasing an element also frees that element's node right away.
For std::vector it is an implementation detail, but usually the vector creates an internal array of N T's and, when you insert more than N, reallocates and grows to some multiple of N (again - implementation detail) to store the new elements. Removal does not shrink that array: the spare capacity is kept until you ask for it back with shrink_to_fit(), or clear the vector and shrink it with the swap idiom.
The remaining heap memory (the vector's array, or a node-based container's nodes) is deleted when the container object is destructed.