Confused about deleting dynamic memory allocated to an array of structs - C++

I'm having a memory leak issue related to an array of structs inside a class (not sure if it matters that they're in a class). When I call delete on the array of structs, the memory is not freed. When I use the exact same process with int and double arrays, it works fine and frees the memory as it should.
I've created very simple examples and they work correctly, so it's related to something else in the code, but I'm not sure what that could be. I never get any errors and the code executes correctly. However, the allocation/deallocation occurs in a loop, so the memory usage continually rises.
In other words, here's a summary of the problem:
struct myBogusStruct {
    int bogusInt1, bogusInt2;
};

class myBogusClass {
public:
    myBogusStruct *bogusStruct;
};

int main() {
    int i, arraySize;
    double *bogusDbl;
    myBogusClass bogusClass;

    // arraySize is read in from an input file

    for (i = 0; i < 100; i++) {
        bogusDbl = new double[arraySize];
        bogusClass.bogusStruct = new myBogusStruct[arraySize];

        // bunch of other code

        delete [] bogusDbl;               // this frees memory
        delete [] bogusClass.bogusStruct; // this does not free memory
    }
}
When I remove the bunch of other code, both delete lines work correctly. When it's there, though, the second delete line does nothing. Again, I never get any errors from the code, just memory leaks. Also, if I replace arraySize with a fixed number like 5000, then both delete lines work correctly.
I'm not really sure where to start looking - what could possibly cause the delete line not to work?

There is no reason at all for you to either allocate or delete bogusDbl inside the for loop, because arraySize never changes inside the loop.
Same goes for bogusClass.bogusStruct. There is no reason to allocate/delete it inside the loop at all:
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
for (i = 0; i < 100; i++) {
    // bunch of other code
}
delete[] bogusDbl;
delete[] bogusClass.bogusStruct;
You should also consider using std::vector instead of using raw memory allocation.
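For instance, here is a minimal sketch of the same loop using std::vector (reusing the question's names; arraySize is still assumed to be read in beforehand):

#include <vector>

std::vector<double> bogusDbl(arraySize);
std::vector<myBogusStruct> bogusStructs(arraySize);
for (int i = 0; i < 100; i++) {
    // bunch of other code; the vectors release their memory
    // automatically when they go out of scope, so no leak is possible
}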
Now to the possible reason why the second delete in the original code doesn't do anything: deleting a NULL pointer does, by definition, nothing. It's a no-op. So for debugging purposes, try introducing a test before the delete to see if the pointer is NULL, and if so call abort(). (I'd use a debugger instead, though, as it's much quicker to set up a watch expression there than to write debug code.)
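Something along these lines (a debugging sketch, not production code):

// If the pointer was somehow zeroed out by the "bunch of other code",
// deleting it is a silent no-op; trap that case explicitly.
// abort() comes from <cstdlib>.
if (bogusClass.bogusStruct == NULL) {
    abort(); // the pointer was clobbered before the delete
}
delete [] bogusClass.bogusStruct;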
In general though, we need to see that "bunch of other code".

Related

new and delete in different scopes

Consider the following code:
int *expand_array(int *old_arr, int array_length)
{
    int *new_arr = new int[array_length + 3];
    for (int counter = 0; counter < array_length; counter++)
        new_arr[counter] = old_arr[counter];
    delete[] old_arr;
    return new_arr;
}

int main()
{
    int *my_first_arr = new int[4];
    int *my_expanded_arr = expand_array(my_first_arr, 4);
    delete[] my_expanded_arr;
}
Will there be any memory leak here?
And to generalize the question: if the pointer returned from a new expression is copied, passed to a function, or assigned to a different pointer, will delete copied_pointer release the memory?
Your code is perfectly valid C++ and has no memory leaks. You can copy a pointer as often as you want to, and deleting any one of those copies in any scope has the same effect.
It is still bad practice, however, and you shouldn't write code like this. Use of raw new and delete is too error prone and will make for poorly maintainable code. Instead, use RAII wrapper types like std::unique_ptr, std::shared_ptr or, in this case, std::vector.
The code in your question is basically equivalent to this:

#include <vector>

int main()
{
    auto numbers = std::vector<int>(4);
    numbers.resize(7);
}

Much simpler, no?
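If you do want to keep the grow-and-transfer-ownership shape of the original function, here is a sketch using std::unique_ptr<int[]> (C++11 and later) that makes the ownership hand-off explicit:

#include <memory>

std::unique_ptr<int[]> expand_array(std::unique_ptr<int[]> old_arr, int array_length)
{
    std::unique_ptr<int[]> new_arr(new int[array_length + 3]);
    for (int i = 0; i < array_length; ++i)
        new_arr[i] = old_arr[i];
    return new_arr; // old_arr's memory is freed automatically here
}

The caller transfers ownership in with std::move and never writes delete at all.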
Why do you believe that there would be a memory leak? Of course there wouldn't be.
But there is a different bug in this code. If array_length is larger than the actual size of old_arr, the loop that copies the old array into the newly allocated one will run past the end of the old array, resulting in undefined behavior and possibly a crash (if the old array holds 2 ints and array_length is 10, the loop will attempt to copy 10 values from an array that only has 2).
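A safer signature (a sketch; the extra parameter is my addition, not part of the question's code) passes the old length explicitly so the copy can never overrun:

int *expand_array(int *old_arr, int old_length, int new_length)
{
    int *new_arr = new int[new_length];
    int limit = (old_length < new_length) ? old_length : new_length;
    for (int i = 0; i < limit; ++i) // copy only what actually exists
        new_arr[i] = old_arr[i];
    delete[] old_arr;
    return new_arr;
}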

C++ Struct with pointers: constructor and destructor

I am having a problem with a simulation program that calls a DLL to perform an optimization task. After having studied this issue for a certain time, I think my problem lies in the destructor I use to free memory after the DLL has returned the desired information. The simulation program was developed on Borland C++ Builder v6 and the DLL was developed on MS Visual C++ 2005.
For the simulation program (P) and the DLL to exchange data, I created two structures InputCPLEX and OutputCPLEX and a function optimize that takes two arguments: one object of type InputCPLEX and one object of type OutputCPLEX. Both structures are declared in a header file structures.h which belongs to the P project and the DLL project.
Both InputCPLEX and OutputCPLEX structures have int and int* members, so basically the file structures.h looks like :
//structures.h
struct InputCPLEX{
public:
    int i;
    int* inputData;
};

struct OutputCPLEX{
public:
    int j;
    int* outputData;
};
The idea is that along the simulation process (the execution of P), I periodically call the DLL to solve an optimization problem, so inputData is the array corresponding to the variables in my optimization problem and outputData is the array of optimal values for my variables. I know that it would have been easier to use the STL containers, such as vector<int>; however (correct me if I am wrong), it seems difficult to exchange STL objects between two different compilers.
Here is how things look in my main file (in P):
//main.h
InputCPLEX* input;
OutputCPLEX* output;
int* var;
int* sol;
//main.cpp
[...] //lots of code
input = new InputCPLEX;
output = new OutputCPLEX;
int n = X; //where X is an integer
var = new int[n];
[...] //some code to fill var
input->i = n;
input->inputData = var;
optimize(input,output); //calls the DLL
int m = output->j;
sol = new int[n];
sol = output->outputData;
[...] //some code to use the optimized data
delete[] var;
delete[] sol;
delete input;
delete output;
[...] //lots of code
For more than one year I have been using this code without any constructor or destructor in the file structures.h, so no initialization of the structures' members was performed. As you may have guessed, I am no expert in C++; in fact it's quite the opposite. I also want to underline that I did not write most of the simulation program, just some functions; the program has been developed over more than 10 years by several developers, and the result is quite messy.
However, everything was working just fine until recently. I decided to provide more information to the DLL (for optimization purposes), and consequently the simulation program has been crashing systematically when running large simulations (involving large data sets). The extra information takes the form of additional pointers in both structures. My guess is that the program was leaking memory, so I tried to write a constructor and a destructor so that the memory allocated to the structures input and output could be properly managed. I tried the following code, which I found searching the internet:
//structures.h
struct InputCPLEX{
public:
    int i;
    int* inputData;
    int* inputData2; // extra info
    int* inputData3; // extra info
    InputCPLEX(): i(0), inputData(0), inputData2(0), inputData3(0) {}
    ~InputCPLEX(){
        if (inputData) delete inputData;
        if (inputData2) delete inputData2;
        if (inputData3) delete inputData3;
    }
};

struct OutputCPLEX{
public:
    int j;
    int* outputData;
    int* outputData2;
    int* outputData3;
    OutputCPLEX(): j(0), outputData(0), outputData2(0), outputData3(0) {}
    ~OutputCPLEX(){
        if (outputData) delete outputData;
        if (outputData2) delete outputData2;
        if (outputData3) delete outputData3;
    }
};
But it does not seem to work: the program crashes even faster, after only a short amount of time. Can someone help me identify the issues in my code? I know that there may be other factors affecting the execution of my program, but if I remove both constructors and destructors from the structures.h file, the simulation program is still able to execute small simulations involving small data sets.
Thank you very much for your assistance,
David.
You have to pair new and delete consistently: if something was acquired with new[], you should release it with delete[]; if it was acquired with new, release it with delete. In your code the arrays (inputData and friends) are created with new[] but released in the destructors via plain delete.
BTW, you do not have to check a pointer for zero before deletion. delete handles zero pointers with no problems.
I see several problems in your code:
1) Memory leak/double deletion:
sol = new int[n];
sol = output->outputData;
Here you overwrite the sol pointer right after initialization, so the data allocated by new int[n] is leaked. You also delete the pointer stored in sol twice: once via the explicit delete[] and a second time in the destructor of output. The same problem exists with var: you delete it twice, once via the explicit delete[] and again in the destructor of input.
The double-deletion problem only appeared once you added destructors that delete; before that, it was not an issue.
Also, as @Riga mentioned, you use new[] to allocate the arrays but plain delete instead of delete[] in the destructors. This is not correct and is undefined behavior, though it doesn't look like the cause of the crash: in the real world, most compilers implement delete and delete[] identically for built-in and POD types. Serious problems arise only when you delete an array of objects with non-trivial destructors.
2) Where is output->outputData allocated? If it is allocated inside the DLL, that is another problem: you usually cannot safely deallocate memory in your main program if it was allocated in a DLL built with another compiler. The reason is the different new/delete implementations and the different heaps used by the runtimes of the main program and the DLL.
You should always allocate and deallocate memory on the same side, or use some common lower-level API, e.g. VirtualAlloc()/VirtualFree() or HeapAlloc()/HeapFree() with the same heap handle.
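One common way to keep allocation and deallocation on the same side is for the DLL to export a matching free function. A sketch with hypothetical names (none of these functions appear in the question's code):

// In the DLL: everything allocated here is also freed here,
// by the same runtime, from the same heap.
extern "C" __declspec(dllexport) int* allocate_output(int n) {
    return new int[n];
}

extern "C" __declspec(dllexport) void free_output(int* p) {
    delete[] p;
}

The simulation program would then call free_output(output->outputData) instead of using delete[] itself.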
This looks odd:
int m = output->j;
sol = new int[n];
sol = output->outputData;
As far as I understand it, you receive the size in m but allocate with n,
then you overwrite the array you just allocated by pointing sol at outputData instead.
I think you meant something like:
int m = output->j;
sol = new int[m];
memcpy(sol,output->outputData,sizeof(int)*m);

What can cause a segmentation fault using delete command in C++?

I've written a program that allocates a new object of the class T like this:
T* obj = new T(tid);
where tid is an int
Somewhere else in my code, I'm trying to release the object I've allocated, which is inside a vector, using:
delete(myVec[i]);
and then:
myVec[i] = NULL;
Sometimes it passes without any errors, and in some cases it causes a crash: a segmentation fault.
I've checked before calling delete, and the object is there; I haven't deleted it elsewhere beforehand.
What can cause this crash?
This is my code for inserting objects of the type T to the vector:
_myVec is global
int add() {
    int tid = _myVec.size();
    T* newT = new T(tid);
    if (newT == NULL) {
        return ERR_CODE;
    }
    _myVec.push_back(newT);
    // _myVec.push_back(new T(tid));
    return tid;
}
As it is, the program sometimes crashes.
When I replace the push_back line with the commented line and leave the rest as it is, it works.
But when I replace this code with:
int add() {
    int tid = _myVec.size();
    if (newT == NULL) {
        return ERR_CODE;
    }
    _myVec.push_back(new T(tid));
    return tid;
}
it crashes at a different stage...
The newT in the second version is unused, and still it changes the whole behavior... what is going on here?
Segfaulting means trying to access a memory location that shouldn't be accessible to the application.
That means that your problem can come from three cases:
Trying to do something with a pointer that points to NULL;
Trying to do something with an uninitialized pointer;
Trying to do something with a pointer that pointed to a now-deleted object.
1) is easy to check, and I assume you already do it since you nullify the pointers in the vector. If you don't, add a check before the delete call; that will reveal the case where you are trying to delete an object twice.
3) can't happen if you set the pointer in the vector to NULL.
2) might happen too. In your case you're using a std::vector, right? Make sure that implicit manipulations of the vector (like reallocation of the internal buffer when it isn't big enough anymore) don't corrupt your list.
So, first check whether you ever reach a delete with a NULL pointer (note that delete(NULL) will not throw; it's standard, valid behaviour!): in your case you shouldn't even get to the point of calling delete(NULL).
Then, if that never happens, check that your vector isn't filled with pointers pointing to garbage. For example, make sure you're familiar with the erase-remove idiom.
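For reference, a minimal sketch of that idiom applied to a vector of pointers (reusing the question's names; this assumes the nulled-out entries are meant to be dropped from the vector):

#include <algorithm>

// Compact the vector by removing all NULL entries in one pass.
myVec.erase(std::remove(myVec.begin(), myVec.end(), static_cast<T*>(NULL)),
            myVec.end());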
Now that you added some code, I think I can see the problem:
int tid = _myVec.size();
You're using indices as ids.
Now, it all depends on the way you delete your objects (please show that code for a more complete answer). Either:
1. You just set the pointer to NULL.
2. You remove the pointer from the vector.
If you only do 1, it should be safe (provided you don't mind having a vector that grows and is never released, and that ids aren't re-used).
If you do 2, then this is all wrong: each time you remove an object from a vector, every element stored after the removed position shifts down by one, making any stored id/index invalid.
Make sure you're consistent on this point; it is certainly a source of errors.
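A minimal standalone illustration of that shift (not the asker's code):

#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(10);
    v.push_back(20);
    v.push_back(30);
    // Hand out index 2 as the "id" of the element 30...
    v.erase(v.begin() + 1); // ...then remove 20; later elements shift down
    // 30 now lives at index 1, so the stored id 2 no longer matches
    // its element (and v[2] is out of bounds).
    std::printf("%d\n", v[1]); // prints 30
}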
That segmentation fault is most probably a memory access violation. Some reasons:
1) The object was already deallocated: be sure you set that array position to NULL after the delete.
2) You are outside the array bounds.
3) If you access that array from multiple threads, make sure you are synchronizing correctly.
If you're completely certain that pointer points to a valid object, and that the act of deleting it causes the crash, then you have heap corruption.
You should try using a ptr_vector; unlike your code, it's guaranteed to be exception-safe.
Hint: if you write delete, you're doing it wrong
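A sketch of the add function using Boost's ptr_vector (assuming Boost is available; this is my rewrite, not the asker's code):

#include <boost/ptr_container/ptr_vector.hpp>

boost::ptr_vector<T> _myVec; // the container owns its elements

int add() {
    int tid = _myVec.size();
    _myVec.push_back(new T(tid)); // ownership passes to the container
    return tid;                   // no manual delete anywhere
}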
You can't be sure that the object is still valid: the memory that was occupied by the object is not necessarily cleaned, and therefore you may be seeing something that appears to be your object but isn't anymore.
You can use a mark in order to check that the object is still alive, and clear that mark in the destructor.
class A {
public:
    static const unsigned int Inactive;
    static const unsigned int Active;

    A();
    ~A();

    /* more things ...*/

private:
    unsigned int mark;
};

const unsigned int A::Inactive = 0xDEADBEEF;
const unsigned int A::Active   = 0x11BEBEEF;

A::A() : mark( Active )
{}

A::~A()
{
    mark = Inactive;
}
This way, by checking the first 4 bytes of your object, you can easily verify whether it has reached the end of its life or not.
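Checking the mark could look like this (a sketch; the accessor is my addition to the class above, and this is only a debugging heuristic, since reading freed memory is itself undefined behavior):

// Hypothetical accessor added to class A:
//     bool isAlive() const { return mark == Active; }
if (!a->isAlive()) {
    // The object was already destroyed (or its memory reused);
    // deleting it again would be the bug to investigate.
}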

Is delete p where p is a pointer to array always a memory leak?

Following a discussion in a software meeting, I've set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with Visual Studio 2008 running on Windows XP:
#include "stdafx.h"
#include "Windows.h"
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*1000; i++)
{
int* p = new int[1024*100000];
for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2;
Sleep(1000);
delete p;
}
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as I had expected.
I then modified my test program to allocate an array of a non-primitive type:
#include "stdafx.h"
#include "Windows.h"
struct aStruct
{
aStruct() : i(1), j(0) {}
int i;
char j;
} NonePrimitive;
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*100000; i++)
{
aStruct* p = new aStruct[1024*100000];
Sleep(1000);
delete p;
}
}
After running it for 10 minutes, there was no meaningful increase in memory.
I compiled the project with warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
delete p, where p points to an array, is undefined behaviour.
Specifically, when you allocate an array of raw data types (ints), the compiler doesn't have a lot of work to do, so it turns the allocation into a simple malloc(); delete p will therefore probably work.
delete p is going to fail, typically, when:
p was a complex data type: delete p; won't know to call the individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap from the regular heap.
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand, but delete []p; will.
No, it's undefined behavior. Don't do it; use delete[].
In VC++ 7 to 9 it happens to work when the type in question has a trivial destructor, but it might stop working in newer versions; that's the usual story with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of how you allocated the objects, as arrays or plain objects, and magically adds [] to your delete. It probably compiles delete p; to the same code as if you allocated with p = new int, and, as I said, for some reason it works. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway; it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it may possibly work as intended, it is undefined, as there are differences in exactly how delete and delete[] work. So you shouldn't use it, even for built-in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to free. However, the C++ part is unlikely to be cleaned up correctly; I bet that only the destructor of the first object is called.
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete [] can cause memory leaks if it is used on an array of objects that themselves use dynamic memory allocation, as follows:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};

int main()
{
    AClass * p = new AClass[1000];
    delete p; // wrong
    return 0;
}

Heap corruption

Why is it a problem if we have a huge piece of code between the new and delete of a char array?
Example
void this_is_bad() /* You wouldn't believe how often this kind of code can be found */
{
    char *p = new char[5]; /* spend some cycles in the memory manager */
    /* do some stuff with p */
    delete[] p; /* spend some more cycles, and create an opportunity for a leak */
}
Because somebody may throw an exception.
Because somebody may add a return.
If you have lots of code between the new and the delete, you may not spot that you need to deallocate the memory before the throw/return.
Why do you have a raw pointer in your code? Use a std::vector.
The article you reference is making the point that
char p[5];
would be just as effective in this case and have no danger of a leak.
In general, you avoid leaks by making the life cycle of allocated memory very clear; the new and the delete can be seen to be related.
Large separation between the two is harder to check, and requires carefully considering whether there is any way out of the code that might dodge the delete.
The link (and source) of that code is lamenting the unnecessary use of the heap in that code. For a constant and small amount of memory, there's no reason not to allocate it on the stack.
Instead:
void this_is_good()
{
    /* Avoid allocation of small temporary objects on the heap */
    char p[5]; /* Use the stack instead */
    /* do some stuff */
}
There's nothing inherently wrong with the original code though; it's just less than optimal.
In addition to all the interesting answers about the heap and about keeping new and delete close together, I might add that having a huge amount of code in one function is to be avoided in itself. If that huge amount of code separates two related lines of code, it's even worse.
I would differentiate between 'amount of work' and 'amount of code':
void do_stuff( char* const p );

void formerly_huge_function() {
    char* p = new char[5];
    CoInitialize( NULL );
    do_stuff( p );
    CoUninitialize();
    delete[] p;
}
Now do_stuff can do a lot of things without interfering with the allocation problem. But also other symmetrical stuff stays together this way.
It's all about the guy who's going to maintain your code. It might be you, in a month.
That particular example isn't stating that having a bunch of code between a new and a delete is necessarily bad; it's stating that if there are ways to write code that don't use the heap, you might want to prefer them to avoid heap corruption.
It's decent enough advice; if you reduce the amount you use the heap, you know where to look when the heap is corrupted.
I would argue that it's not a problem to have huge amounts of code between a new and delete of any variable. The idea of using new is to place a value on the heap and hence keep it alive for long periods of time. Having code execute on these values is an expected operation.
What can get you into trouble when you have huge amounts of code between a new and delete within the same method is the chance of accidentally leaking the variable due to:
An exception being thrown.
Methods that get so long you can't see the beginning or end, so people start arbitrarily returning from the middle without realizing they skipped a delete call.
Both of these can be fixed by using an RAII type such as std::vector<char> instead of a new / delete pair.
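A sketch of what that looks like in practice, with std::vector so that cleanup happens on every exit path:

#include <vector>

void this_is_safer()
{
    std::vector<char> p(5); // heap-backed, but freed automatically
    /* do some stuff with &p[0], even throw or return early */
} // p's destructor releases the memory on every path out of the function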
It isn't, assuming that the "huge piece of code":
Always runs "delete [] p;" before calling "p = new char[size];" or "throw exception;"
Always runs "p = 0;" after calling "delete [] p;"
Failure to meet the first condition will cause the contents of p to be leaked. Failure to meet the second condition may result in a double delete. In general, it is best to use std::vector, so as to avoid any problems.
Are you asking if this would be better?
void this_is_great()
{
    char* p = new char[5];
    delete[] p;
    return;
}
It's not.