Free memory from vector of objects inside a function - c++

I have memory leaks in my code, but I can't figure out how to free the memory allocated inside the function where the objects are created and pushed into a vector.
The main function is the following:
void foo(vector<vector<BCC>> &features) {
    vector<MinutiaPair*> matchingMtiae;
    for (int i = 0; i < features.size(); i++) {
        Match(features[0], features[i], matchingMtiae);
        ms += s;
        // Free memory
        for (int j = 0; j < matchingMtiae.size(); j++)
            delete matchingMtiae[j];
        matchingMtiae.clear();
    }
}
At each step of the loop, a comparison is executed between feature values, and the vector matchingMtiae is filled with newly allocated objects. Then, before the next iteration, I want to completely free this vector and deallocate its contents from memory. The Match function, where the objects are created and pushed into matchingMtiae, is presented below:
void Match(vector<BCC> &qt, vector<BCC> &tt, vector<MinutiaPair*> &reducedMatchingPairs) {
    vector<MinutiaPair*> localMatching;
    for (int i = 0; i < qt.size(); i++)
        for (int j = 0; j < tt.size(); j++)
        {
            double currSim = qt[i].Match(tt[j], true);
            if (currSim > 0)
            {
                qt[i].minutia.Flag = false;
                tt[j].minutia.Flag = false;
                MinutiaPair *pair = new MinutiaPair(qt[i].minutia, tt[j].minutia, currSim);
                localMatching.push_back(pair);
            }
        }
    sort(localMatching.begin(), localMatching.end(), MtiaPairComparer::ComparePointers);
    for (int k = 0; k < localMatching.size(); k++)
    {
        if (!localMatching[k]->QueryMtia->Flag || !localMatching[k]->TemplateMtia->Flag)
        {
            reducedMatchingPairs.push_back(localMatching[k]);
            localMatching[k]->QueryMtia->Flag = true;
            localMatching[k]->TemplateMtia->Flag = true;
        }
        else
        {
            delete localMatching[k];
        }
    }
}
Debugging my code, I realized that after the delete and clear of the vector matchingMtiae, the created objects still appeared to be allocated in memory. I cannot understand why this is happening, since the pointers are not lost but kept inside the vector until they are deleted.
I would like to deallocate the created objects from memory and completely clear the vector of pointers. Both are my aims.
Thanks in advance.

You can submit a non-binding request to std::vector to release its allocated memory by calling shrink_to_fit after clear or resize.
Note that this is non-binding: in practice, every sane implementation actually releases the memory, but strictly speaking you cannot portably rely on that.
I would also strongly suggest replacing the raw pointers in your vector with std::unique_ptr (or even just the objects themselves, if there is no concern of inheritance/slicing). It will ease the visual load of your function and prevent memory leaks in the future.

Related

Why is my program leaking memory? (working with trees in C++)

I'm creating a tree of dynamic objects. The Node class has a vector to store the child nodes, among the other class variables:
std::vector<Node*> child;
The class destructor deletes all the dynamically allocated class variables, and then deletes the child nodes:
~Node() {
    //Deleting the other variables
    .
    .
    .
    //Deleting the child nodes
    for(int i = 0; i < child.size(); i++) {
        delete child[i];
    }
}
My class has a method that creates a tree of a given height, in which the current node is the root node:
void createTree(int height) {
    if(height == 0) {
        return;
    }
    for(int i = 0; i < numberOfChildNodes; i++) {
        child.push_back(new Node());
        child[i]->createTree(height - 1);
    }
}
This class has another method where I create a tree with height = 3, then I delete the entire tree and create another one with height = 4, then I delete the entire tree and create one with height = 5, and so on, until a memory limit is reached:
void highestTreePossible() {
    int i, height = 3;
    struct sysinfo memInfo;
    while(true) {
        createTree(height);
        sysinfo(&memInfo);
        if(memInfo.freeram > limit) {
            std::cout << "Highest tree possible: height = " << height;
            break;
        }
        for(i = 0; i < child.size(); i++) {
            delete child[i];
        }
        child.clear();
        height++;
    }
    for(i = 0; i < child.size(); i++) {
        delete child[i];
    }
    child.clear();
}
The problem is, when I check the memory after running the method highestTreePossible(), there's a lot of memory allocated, which isn't supposed to happen, because I deleted everything. Why is my code leaking memory?
It's not leaking; you're not using a valid test for this kind of thing. Modern operating systems have complex memory management, and you may observe a process "holding on" to more memory than you think it needs at any given time. Rest assured, it is available to the rest of the system when required.
If you're concerned about memory, you need to observe a constant, consistent rise in consumption for your process over a significant period of time, or hook into the actual allocators used by your program itself. A great way to do this is using a diagnostic tool like massif, which ships with Valgrind (if you're on a compatible system). There are some great ways to visualise that.
If you're on a Linux system, you can try to use valgrind to reliably check if your code has any memory leaks.
e.g. $valgrind --leak-check=full ./node_test
So to answer the question "Why is my code leaking memory?"
There is no memory leak in your code: I checked it with valgrind, and the report says "All heap blocks were freed -- no leaks are possible".
However, there seems to be a problem in the exit condition if(memInfo.freeram > limit): memInfo.freeram is expected to decrease as the tree grows, since your goal is to keep the tree growing "until a memory limit is reached" as you put it, i.e. until freeram falls below a certain limit.
So the correct exit condition should presumably be the opposite: if(memInfo.freeram < limit).
Hope this helps.
PS. It's also a good practice to use smart pointers (e.g. std::unique_ptr, std::shared_ptr) to avoid memory leaks.

c++ How to deallocate and delete a 2D array of pointers to objects

In the SO question [How to allocate a 2D array of pointers in C++][1], the accepted answer also notes the correct procedure for de-allocating and deleting said array, namely: "Be careful to delete the contained pointers, the row arrays, and the column array all separately and in the correct order." I've been successfully using this 2D array in a cellular automaton simulation program; I cannot, however, get its memory management correct, and I do not see an SO answer for how to do this other than the reference above.
I allocate the 2D array as follows:
Object*** matrix_0 = new Object**[rows];
for (int i = 0; i < rows; i++) {
    matrix_0[i] = new Object*[cols];
}
My futile attempt(s) (according to Valgrind) to properly de-allocate the above array are as follows:
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        matrix_0[i][j] = NULL;
    }
}
delete [] matrix_0;
matrix_0 = NULL;
Clearly, I'm missing the rows and cols part as reference [1] suggests. Can you show me what I'm missing? Thanks in advance.
[1]: (20 Nov 2009) How to allocate a 2D array of pointers in C++
You have a tonne of deleting to do in this:
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        delete matrix_0[i][j]; // delete stored pointer
    }
    delete[] matrix_0[i];      // delete sub array
}
delete [] matrix_0;            // delete outer array
matrix_0 = NULL;
There is no need to NULL anything except matrix_0, because everything else is gone after the deletes.
This is horrible and unnecessary. Use a std::vector and seriously reconsider the pointer to the contained object.
std::vector<std::vector<Object*>> matrix_0(rows, std::vector<Object*>(cols));
Gets what you want and reduces the delete work to
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        delete matrix_0[i][j]; // delete stored pointer
    }
}
But SergeyA's suggestion of storing unique_ptr, std::vector<std::vector<std::unique_ptr<Object>>> matrix_0; reduces the deletions required to 0.
Since speed is one of OP's goals, there is one more improvement:
std::vector<std::unique_ptr<Object>> matrix_0(rows * cols);
Access is
matrix_0[row * cols + col];
This trades a bit of visible math for the invisible math and pointer dereferences currently going on behind the scenes. The important part is the vector is now stored as a nice contiguous block of memory increasing spacial locality and reducing the number of cache misses. It can't help with the misses that will result from the pointers to Objects being scattered throughout memory, but you can't always win.
A note on vector vs array. Once a vector has been built, and in this case it's all done in one shot here:
std::vector<std::unique_ptr<Object>> matrix_0(rows * cols);
all a vector is is a pointer to an array, plus a couple of other pointers to mark the end and the location of the last element used. Access to the data array is no different from access to a dynamic array made with new. Using the index operator [] compiles down to data_pointer + index, exactly the same as using [] on an array. There is no synchronizing or the like as in Java's Vector. It is just plain raw math.
Compared to a dynamic array all a preallocated vector costs you is two pointers worth of memory and in return you get as close to no memory management woes as you are likely to ever see.
Before setting the pointers to NULL, you should delete them first. After every pointer in a row is deleted, you can delete[] the row and set it to NULL, since every element it held is gone.

resize an array of pointers without memory leak

I have a pointer to an array of pointers-to-objects and need to resize the array. I realize this is a perfect time to use vectors, but I'm not permitted to do so. My code works, but I don't completely follow what I've written and am concerned I may have created memory leaks:
void foo (DataSet &add_data)
{
    if (sets == NULL)
    {
        sets = new DataSet*[1];
        sets[0] = &add_data;
    }
    else
    {
        DataSet **transfer;
        transfer = new DataSet*[num_of_sets];
        for (int i = 0; i < num_of_sets; i++) // Copy addresses?
            transfer[i] = sets[i];
        delete [] sets;                       // Free the array.
        sets = new DataSet*[num_of_sets + 1]; // Create new sets
        for (int i = 0; i < num_of_sets; i++) // Copy back
            sets[i] = transfer[i];
        sets[num_of_sets] = &add_data;        // Add the new set
        delete [] transfer;
        transfer = NULL;
    }
    num_of_sets++;
}
Why does Visual Studio throw an exception for:
for (int i = 0; i < num_of_sets; i++) // Copy addresses?
*transfer[i] = *sets[i];
but not:
for (int i = 0; i < num_of_sets; i++) // Copy addresses?
transfer[i] = sets[i];
But both code segments compile and run without fault on Linux. This code should copy the pointers-to-objects. Is that what is happening with:
for (int i = 0; i < num_of_sets; i++) // Copy addresses?
transfer[i] = sets[i];
And do I need to be concerned if I want to free these objects with say a remove function later?
You do not need to allocate twice, just allocate once the final size:
transfer = new DataSet*[num_of_sets + 1]; // Create new sets - final size
for (int i = 0; i < num_of_sets; i++)     // Copy addresses
    transfer[i] = sets[i];
delete [] sets;                           // Free the old array.
sets = transfer;
sets[num_of_sets] = &add_data;            // Add the new set
// now no need to delete[] transfer
That way you also get improved exception safety. In your original code, you deleted sets before allocating the new array for it. If that allocation threw std::bad_alloc, not only would your object become inconsistent (holding a dangling sets pointer, since you do not assign null after the delete), but the memory allocated to transfer would also leak. If instead you allocate transfer at its final size before delete[] sets, then an allocation failure leaves sets intact, and transfer cannot leak (it threw during allocation, i.e. nothing was allocated).
Of course, make sure that you delete[] sets in the destructor (and maybe the pointed-to objects as well, in case your set owns them).
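Put together as a self-contained sketch (DataSet reduced to a stub, and sets/num_of_sets wrapped in a hypothetical Registry class so the example compiles on its own):

```cpp
#include <cstddef>

struct DataSet { int id = 0; };  // hypothetical stub

struct Registry {                // hypothetical holder for sets/num_of_sets
    DataSet **sets = nullptr;
    int num_of_sets = 0;

    void foo(DataSet &add_data) {
        // Allocate the final size once, copy, then swap in.
        DataSet **transfer = new DataSet*[num_of_sets + 1];
        for (int i = 0; i < num_of_sets; ++i)
            transfer[i] = sets[i];
        transfer[num_of_sets] = &add_data;
        delete[] sets;           // safe: transfer already holds everything
        sets = transfer;         // (delete[] on a null pointer is a no-op)
        ++num_of_sets;
    }

    ~Registry() { delete[] sets; }  // the array only; the DataSets aren't owned
};
```

Note the sets == NULL branch from the question disappears: deleting a null array pointer is well-defined, so one code path handles both cases.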
*transfer[i] = *sets[i];
This does not copy addresses the way the other sample (without asterisks) does; it tries to dereference the uninitialized pointer elements of transfer and call operator= on DataSet objects at those addresses.
That is undefined behavior, which is why it appears to work under some circumstances and fails under others.

Freeing memory between loop executions

Hi, I'm coding a C++ program containing a loop that consumes so much unnecessary memory that the computer freezes before reaching the end...
Here is how this loop looks like:
float t = 0.20;
while (t < 0.35) {
    CustomClass a(t);
    a.runCalculations();
    a.writeResultsInFile("results_" + std::to_string(t));
    t += 0.001;
}
If relevant, the program is a physics simulation from which I want results for several values of an external parameter t (temperature). The excess memory use seems to come from not "freeing" the space taken by the instance of my class between one iteration of the loop and the next, which I thought would be automatic since it is created without pointers or the new instruction. I tried adding a destructor to the class, but it didn't help. Could it be because the main memory use of my class is a 2D array allocated with new?
To be precise, it seems that the code above is not the problem (thanks to those pointing this out), so here is how I initialize my array (the largest object in my CustomClass) in its constructor:
tab = new int*[h];
for(int i=0; i<h; i++) {
    tab[i] = new int[v];
    for(int j=0; j<v; j++) {
        tab[i][j] = bitd(gen)*2-1; //initializing randomly the lattice
    }
}
bitd(gen) is a random number generator outputting 1 or 0.
And also, another method of my CustomClass object doubles the size of the array in the following way:
int ** temp = new int*[h];
for(int i=0; i<h; i++) {
    temp[i] = new int[v];
    for(int j=0; j<v; j++) {
        temp[i][j] = tab[i/2][j/2];
    }
}
delete[] tab;
tab = temp;
Could it be that I should free the pointer temp?
You're leaking memory.
Could it be that I should free the pointer temp?
No. After you allocate the memory for the new array of double size and copy the contents, you should free the memory that tab is pointing to. Right now, you're only deleting the array of pointers with delete [] tab; but the memory that each of those pointers points to is lost. Run a loop and delete each one. Only then do tab = temp.
Better still, use standard containers that handle memory management for you so you can forget messing with raw pointers and focus on your real work instead.

Deleting double pointer causes heap corruption

I am using a double pointer, but when I try to delete it, it causes heap corruption: CRT detected that the application wrote to memory after end of heap. It "crashes" inside the destructor of the object:
Map::~Map()
{
    for(int i = 0; i < mTilesLength; i++)
        delete mTiles[i];
    delete[] mTiles;
}
mTiles is declared something like this:
Tile **mTiles = NULL;
mTiles = new Tile *[mTilesLength];
for(int i = 0; i < mTilesLength; i++)
    mTiles[i] = new Tile(...);
If it matters, mTiles holds objects of type Tile, which inherits from Sprite; all three destructors (Map, Tile, Sprite) are declared virtual. Not sure if that makes any difference, but it seemed to work until now.
The code you posted does not seem to have any problems in it. I created a simple, self-contained, compiling (and correct) example from it:
struct Tile {int x; Tile():x(7) {}};
struct Map {
    Tile **mTiles;
    int mTilesLength;
    Map(int TilesLength_);
    ~Map();
};
Map::~Map()
{
    for(int i = 0; i < mTilesLength; i++) {
        delete mTiles[i];
    }
    delete[] mTiles;
}
Map::Map(int TilesLength_):
    mTiles(),
    mTilesLength(TilesLength_)
{
    mTiles = new Tile *[mTilesLength];
    for(int i = 0; i < mTilesLength; i++) {
        mTiles[i] = new Tile();
    }
}
int main() {
    Map* m = new Map(1000);
    delete m;
}
I compiled and ran it, and nothing bad was noticed.
Your problem lies in code you have not shared with us. In order to find the code that is causing the problem and ask the right question, go here: http://sscce.org/
Then take your code and start trimming parts off it until the code is simple, yet still demonstrates your heap corruption. Keep copies of each version as you trim away irrelevant code so you don't skip over the part where the problem occurs (this is one of the many reasons you want a version control system even on your personal projects).