So I have a few vector collections below that live in different files, and I'm having trouble figuring out how to deallocate them. I've put all my attempts into one destructor listing to save space, but when I run valgrind to check for memory leaks, a BUNCH of errors and leaks are detected. Could someone confirm whether I'm freeing the memory for each of the collections below correctly? I'm used to working with arrays, and this is my first time using vectors, so I'm not 100% comfortable with how they work yet.
Vectors declared in header file:
Control class:
vector<docCreator*>* docs;
vector<string>* menuOptions;
CreateReport class:
static vector<Document*> elements;
static vector<Property<int>*> allNames;
Destructors:
Control.cc:
Control::~Control(){
for (vector<docCreator *>::iterator i = docs->begin(); i != docs->end(); ++i) {
delete *i;
}
docs->clear();
delete docs;
for(int i = 0; i < menuOptions.size(); ++i){
delete menuOptions[i];
}
}
CreateReport.cc:
CreateReport::~CreateReport(){
for(int i = 0; i < elements.size(); ++i){
delete elements[i];
}
for(int j = 0; j < allNames.size(); ++i){
delete allNames[j];
}
}
CreateReport is an abstract class which is why its members are static. Property is a class template.
I see at least two errors in your code:
for(int j = 0; j < allNames.size(); ++i)
You iterate using j but increment i.
menuOptions is a pointer to a vector of strings: you don't need to delete each element, and that would not even compile, since the elements are not pointers. What you need to delete is menuOptions itself:
delete menuOptions;
Anyway, this code is error-prone. Why not use smart pointers? Or, even better, avoid using pointers at all?
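For example, here is a minimal sketch of Control without raw owning pointers (docCreator is taken from your header; the rest of the layout is just an illustration, not your actual class):
#include <memory>
#include <string>
#include <vector>
class Control {
    std::vector<std::unique_ptr<docCreator>> docs; // owns the docCreators
    std::vector<std::string> menuOptions;          // plain values, nothing to delete
public:
    ~Control() = default; // both vectors clean up after themselves
};
With value members like these, the destructor body you were writing disappears entirely.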
Just curious as to how I would delete this once it is done being used.
TicTacNode *t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i] = new TicTacNode();
}
Would any pointers that are assigned values from t need to be deleted as well? For example,
TicTacNode * m = (t[i + 1]);
Like this:
TicTacNode *t[nodenum] = {};
for (int i = 0; i < nodenum; ++i)
{
t[i] = new TicTacNode();
}
...
for (int i = 0; i < nodenum; ++i)
{
delete t[i];
}
Though you really should use smart pointers instead; then you don't need to worry about calling delete manually at all:
#include <memory>
std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i].reset(new TicTacNode);
// or, in C++14 and later:
// t[i] = std::make_unique<TicTacNode>();
}
Or, you could simply not use dynamic allocation at all:
TicTacNode t[nodenum];
Would any pointers that are assigned with values in t need to be deleted as well?
No. However, you have to make sure that you don't use those pointers any more after the memory has been deallocated.
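To illustrate (a sketch reusing the names from the question; update() is a hypothetical member function):
TicTacNode *m = t[i + 1]; // m is a second pointer to an object t already owns
delete t[i + 1];          // the object is destroyed; t[i + 1] and m now dangle
// delete m;              // wrong: double delete, undefined behavior
// m->update();           // wrong: use after free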
Just curious as to how I would delete this once it is done being used.
As simple as this:
std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i] = std::make_unique<TicTacNode>();
}
// as soon as scope for t ends all data will be cleaned properly
Or even simpler, since there seems to be no reason to allocate them dynamically at all:
TicTacNode t[nodenum]; // default ctor is called for each object, and all are destroyed when t is destroyed
Actually, you don't have to explicitly allocate and deallocate memory. All you need is the right data structure for the job.
In your case, either std::vector or std::list would do the job very well.
Using std::vector, the whole code might be replaced by
auto t = std::vector<TicTacNode>(nodenum);
or, using std::list,
auto t = std::list<TicTacNode>(nodenum);
Benefits:
Less and clearer code.
No explicit new needed: both containers allocate and initialise nodenum objects themselves.
No explicit delete needed: both containers free their memory automatically when they go out of scope.
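A quick usage sketch (assuming nodenum and TicTacNode from the question):
#include <vector>
std::vector<TicTacNode> t(nodenum); // constructs nodenum nodes up front
TicTacNode &first = t[0];           // element access works like an array
t.push_back(TicTacNode());          // and it can grow, unlike a C-array
// no cleanup needed: everything is freed when t goes out of scope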
I have a function using a 2D array, and I want to copy data from one array to another using a tmp array, but valgrind keeps reporting a memory leak and I can't figure out why. The following is part of the function.
// valgrind flags the following line, reporting the error at operator new[] (unsigned long)
T** temp_pointer = new T*[rows];
for (int i=0; i < rows; i++) {
temp_pointer[i] = new T[columns];
}
for (int i =0; i< rows; i++) {
for (int j =0; j < (columns-3); j++) {
temp_pointer[i][j] = Arry[i][j];
}
temp_pointer[i][columns -3 ] = myvalue1;
temp_pointer[i][columns-2] = myvalue2;
temp_pointer[i][columns-1] = myvalue3;
}
for ( int i =0; i< rows; i++)
delete [] Arry[i];
delete [] Arry;
Arry= temp_pointer;
I also have a destructor which deletes the Arry pointers row by row. Arry is a private member of a template class.
I just could not figure out why there is a memory leak. Am I supposed to delete temp_pointer the same way? (I tried, and it didn't work.)
Where exactly does it leak?
It is not entirely clear why valgrind claims that the memory is being leaked, but you clearly have an out-of-bounds access in your loop:
temp_pointer[i][columns] = myvalue3;
The index of the last element of an array is not its size, it is (size - 1). Writing to a location outside of the array's bounds can clobber the memory allocator's housekeeping information and cause valgrind to complain.
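With the sizes allocated in the question, the three extra writes have to shift down one slot so the highest index used is (columns - 1) (a sketch using the names from the question):
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < columns - 3; j++) {
        temp_pointer[i][j] = Arry[i][j];     // copy the old data
    }
    temp_pointer[i][columns - 3] = myvalue1; // the last three slots of a row
    temp_pointer[i][columns - 2] = myvalue2; // of size columns are columns-3,
    temp_pointer[i][columns - 1] = myvalue3; // columns-2 and columns-1
}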
I've searched through many topics here, but they didn't seem to answer my question exactly.
I'm trying to do some dynamic reallocation of arrays in C++. I can't use anything from the STL, as this is for homework where the STL (vector, ...) is explicitly forbidden.
So far, I've tried to elaborate with code like this:
int * items = new int[3]; //my original array I'm about to resize
int * temp = new int[10];
for (int i = 0; i < 3; i++) temp[i] = items[i];
delete [] items; //is this necessary to delete?
items = new int [10];
for (int i = 0; i < 10; i++) items[i] = temp[i];
delete [] temp;
This seems to work, but what bothers me is the excessive number of iterations. Can't this be done in a smarter way? Obviously, I'm working with much larger arrays than this. Unfortunately, I have to work with plain arrays.
edit: When I do items = temp; instead of
for (int i = 0; i < 10; i++) items[i] = temp[i]; and then try to std::cout all my elements, I lose the first two elements, but valgrind prints them correctly.
Yes, the first delete[] is necessary. Without it, you'd be leaking memory.
As to the code that comes after that first delete[], all of it can be replaced with:
items = temp;
This would make items point to the ten-element array you've just populated:
int * items = new int[3]; //my original array I'm about to resize
int * temp = new int[10];
for (int i = 0; i < 3; i++) temp[i] = items[i];
delete [] items; //delete the original array before overwriting the pointer
items = temp;
Finally, don't forget to delete[] items; when you are done with the array.
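The pattern also generalizes into a small helper (a sketch; the function name and signature are mine, not part of the question):
int* resizeArray(int* old, int oldSize, int newSize)
{
    int* bigger = new int[newSize];                       // assumes newSize >= oldSize
    for (int i = 0; i < oldSize; i++) bigger[i] = old[i]; // copy existing data
    delete [] old;                                        // release the old block
    return bigger;                                        // caller keeps using this pointer
}
// usage: items = resizeArray(items, 3, 10);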
The containers of the STL were made to ease exactly this kind of work. It is tedious, but there is not much of a choice when you need to use C-arrays.
The deletion
delete [] items;
is necessary: once you abandon the last pointer to the array, which is what assigning a new address in
items = new int [10];
does, the memory is leaked.
When I allocate multidimensional arrays using new, I do it this way:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
typedef Fred* FredPtr;
FredPtr* matrix = new FredPtr[nrows];
for (unsigned i = 0; i < nrows; ++i)
matrix[i] = new Fred[ ncols[i] ];
}
where ncols[] contains the length of each row of matrix, and nrows the number of rows in matrix.
If I want to populate matrix, I then have
for (unsigned i = 0; i < nrows; ++i) {
for (unsigned j = 0; j < ncols[i]; ++j) {
someFunction( matrix[i][j] );
But I am reading the C++ FAQ, which tells me to be very careful: I should initialize each row with NULL first, and I should wrap the row allocations in a try/catch. I really do not understand why all this is needed. I have always (though I am a beginner) initialized in C style with the code above.
The FAQ wants me to do this:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
typedef Fred* FredPtr;
FredPtr* matrix = new FredPtr[nrows];
for (unsigned i = 0; i < nrows; ++i)
matrix[i] = NULL;
try {
for (unsigned i = 0; i < nrows; ++i)
matrix[i] = new Fred[ ncols[i] ];
for (unsigned i = 0; i < nrows; ++i) {
for (unsigned j = 0; j < ncols[i]; ++j) {
someFunction( matrix[i][j] );
}
}
}
catch (...) {
for (unsigned i = nrows; i > 0; --i)
delete[] matrix[i-1];
delete[] matrix;
throw; // Re-throw the current exception
}
}
1/ Is it far-fetched or entirely proper to always initialize this cautiously?
2/ Are they proceeding this way because they are dealing with non-built-in types? Would the code be the same (with the same level of caution) with double* matrix = new double[nrows];?
Thanks
EDIT
Part of the answer is in the next item of the FAQ:
The reason for being this careful is that you'll have memory leaks if any of those allocations fails, or if the Fred constructor throws. If you were to catch the exception higher up the call stack, you would have no handles to the memory you had already allocated, which is a leak.
1) It's correct, but generally if you're going to this much trouble to protect against memory leaks, you'd prefer to use std::vector and std::shared_ptr (and so on) to manage memory for you.
2) It's the same for built-in types, though at least then the only exception that will be thrown is std::bad_alloc if the allocation fails.
I would think that it depends on the target platform and the requirements of your system. If safety is a high priority and/or you can run out of memory, then no, this is not far-fetched. However, if you are not too concerned with safety and you know that the users of your system will have ample free memory, then I would not do this either.
It does not depend on whether builtin-types are used or not. The FAQ solution is nulling the pointers to the rows so that in the event of an exception, only those rows which have already been created are deleted (and not some random memory location).
That said, I can only second R. Martinho Fernandes' comment that you should use STL containers for this. Managing your own memory is tedious and dangerous.
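For comparison, here is roughly what the whole function shrinks to with nested vectors (a sketch assuming Fred and someFunction from the question; Fred must be default-constructible, just as with new Fred[n]):
#include <vector>
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    std::vector<std::vector<Fred>> matrix(nrows);
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i].resize(ncols[i]);     // each row sized independently
    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols[i]; ++j)
            someFunction(matrix[i][j]);
    // no try/catch needed: if anything throws, the vectors free themselves
}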
I had some code thrown at me to 'productionize'. I ran a memory leak checker, and it calls out the following line within the for loop below as a memory leak.
someStruct->arrayMap = new std::list<BasisIndex>*[someStruct->mapSizeX];
for(int i=0; i<someStruct->mapSizeX; i++){
someStruct->arrayMap[i] = new std::list<BasisIndex>[someStruct->mapSizeY];
}
Here is how the array map is declared:
struct SomeStruct{
int mapSizeX;
int mapSizeY;
std::list<BasisIndex>** arrayMap;
};
Here are a couple usages of it:
someStruct->arrayMap[xVal][yVal].push_back(tempIndex);
for(it = someStruct->arrayMap[xVal][yVal].begin(); it != someStruct->arrayMap[xVal][yVal].end(); it++){
...
}
The memory leak checker dumped output for 5 minutes before I killed it. Then I added the following bit of code to a cleanup routine, but it still dumps out 150 warnings, all pointing to the line of code within the for loop at the top.
for(int x=0; x<someStruct->mapSizeX; x++){
for(int y=0; y<someStruct->mapSizeY; y++){
someStruct->arrayMap[x][y].clear();
someStruct->arrayMap[x][y].~list();
}
}
std::list<BasisIndex> ** temp = someStruct->arrayMap;
delete temp;
How do I completely delete the memory associated with this array map?
Deallocate the objects in the reverse order that you allocated them.
Allocation:
someStruct->arrayMap = new std::list<BasisIndex>*[someStruct->mapSizeX];
for(int i=0; i<someStruct->mapSizeX; i++){
someStruct->arrayMap[i] = new std::list<BasisIndex>[someStruct->mapSizeY];
}
Deallocation:
for (int i=0; i<someStruct->mapSizeX; i++){
delete[] someStruct->arrayMap[i];
}
delete[] someStruct->arrayMap;
someStruct->arrayMap[x][y].~list(); <-- You should not call the destructor manually. (I didn't even know it was valid to do that when placement new wasn't used first...) You need to use delete[] instead.
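Putting that together, the cleanup routine from the question reduces to the deallocation loop above (a sketch using the struct from the question); delete[] already runs each list's destructor, so neither clear() nor an explicit ~list() call is needed:
for (int i = 0; i < someStruct->mapSizeX; i++) {
    delete [] someStruct->arrayMap[i]; // destroys every list in the row, then frees it
}
delete [] someStruct->arrayMap;        // frees the array of row pointers
someStruct->arrayMap = NULL;           // optional: avoid a dangling pointer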