C++ Struct with pointers: constructor and destructor

I am having a problem with a simulation program that calls a DLL to perform an optimization task. After having studied this issue for a certain time, I think my problem lies in the destructor I use to free memory after the DLL has returned the desired information. The simulation program was developed on Borland C++ Builder v6 and the DLL was developed on MS Visual C++ 2005.
For the simulation program (P) and the DLL to exchange data, I created two structures, InputCPLEX and OutputCPLEX, and a function optimize that takes two arguments: a pointer to an InputCPLEX object and a pointer to an OutputCPLEX object. Both structures are declared in a header file structures.h which belongs to both the P project and the DLL project.
Both InputCPLEX and OutputCPLEX structures have int and int* members, so basically the file structures.h looks like:
//structures.h
struct InputCPLEX{
public:
    int i;
    int* inputData;
};
struct OutputCPLEX{
public:
    int j;
    int* outputData;
};
The idea is that along the simulation process (the execution of P), I periodically call the DLL to solve an optimization problem, so inputData is the array corresponding to the variables in my optimization problem and outputData is the array of optimal values for my variables. I know that it would have been easier to use the STL containers, such as vector<int>, however - correct me if I am wrong - it seems it is difficult to exchange STL objects between two different compilers.
Here is how things look in my main file (in P):
//main.h
InputCPLEX* input;
OutputCPLEX* output;
int* var;
int* sol;
//main.cpp
[...] //lots of code
input = new InputCPLEX;
output = new OutputCPLEX;
int n = X; //where X is an integer
var = new int[n];
[...] //some code to fill var
input->i = n;
input->inputData = var;
optimize(input,output); //calls the DLL
int m = output->j;
sol = new int[n];
sol = output->outputData;
[...] //some code to use the optimized data
delete[] var;
delete[] sol;
delete input;
delete output;
[...] //lots of code
For more than one year I have been using this code without any constructor or destructor in the file structures.h, so no initialization of the structure members was performed. As you may have guessed, I am no expert in C++; in fact it's quite the opposite. I also want to underline that I did not write most of the simulation program, just some functions: the program has been developed over more than 10 years by several developers, and the result is quite messy.
However, everything was working just fine until recently. I decided to provide more information to the DLL (for optimization purposes), and consequently the simulation program has been crashing systematically when running large simulations (involving large data sets). The extra information consists of additional pointer members in both structures. My guess is that the program was leaking memory, so I tried to write a constructor and a destructor so that the memory allocated to the structures input and output could be properly managed. I tried the following code, which I found searching the internet:
//structures.h
struct InputCPLEX{
public:
    int i;
    int* inputData;
    int* inputData2; // extra info
    int* inputData3; // extra info
    InputCPLEX(): i(0), inputData(0), inputData2(0), inputData3(0) {}
    ~InputCPLEX(){
        if (inputData) delete inputData;
        if (inputData2) delete inputData2;
        if (inputData3) delete inputData3;
    }
};
struct OutputCPLEX{
public:
    int j;
    int* outputData;
    int* outputData2;
    int* outputData3;
    OutputCPLEX(): j(0), outputData(0), outputData2(0), outputData3(0) {}
    ~OutputCPLEX(){
        if (outputData) delete outputData;
        if (outputData2) delete outputData2;
        if (outputData3) delete outputData3;
    }
};
But it does not seem to work: the program crashes even faster, after only a short amount of time. Can someone help me identify the issues in my code? I know that there may be other factors affecting the execution of my program, but if I remove both constructors and destructors from the structures.h file, the simulation program is still able to execute small simulations involving small data sets.
Thank you very much for your assistance,
David.

You have to pair new and delete consistently: anything acquired with new[] must be released with delete[], and anything acquired with plain new must be released with plain delete. In your code the arrays are allocated with new[] (var = new int[n]), but the destructors release them with plain delete.
BTW, you do not have to check a pointer for zero before deletion. delete handles zero pointers with no problems.
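For illustration, a minimal sketch of the pairing rule (my example, not the asker's code):
int main()
{
    int* single = new int(42);   // single object: release with delete
    int* array  = new int[10];   // array: release with delete[]

    delete single;               // correct: matches new
    delete[] array;              // correct: matches new[]
    // delete array;             // WRONG: releasing a new[] block with plain delete is undefined behavior
    return 0;
}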

I see several problems in your code:
1) Memory leak/double deletion:
sol = new int[n];
sol = output->outputData;
Here you overwrite the sol pointer right after the allocation, so the memory allocated by new int[n] is leaked. You also delete the pointer stored in sol twice: the second time in the destructor of output. The same problem exists with var - you delete it twice, once by the explicit delete[] and once in the destructor of input.
The double-deletion problem appeared only after you added destructors that call delete; before that, it was not an issue.
Also, as @Riga mentioned, you use new[] to allocate the arrays but plain delete instead of delete[] in the destructors. This is incorrect and is undefined behavior, although it does not look like the cause of the crash: in the real world most compilers implement delete and delete[] identically for built-in and POD types. Serious problems arise only when you delete an array of objects with non-trivial destructors.
2) Where is output->outputData allocated? If it is allocated inside the DLL, that is another problem: you generally cannot safely deallocate memory in your main program if it was allocated in a DLL built with a different compiler. The reason is that the runtimes of the main program and the DLL have different new/delete implementations and use different heaps.
You should always allocate and deallocate memory on the same side. Alternatively, use a common lower-level API - e.g. VirtualAlloc()/VirtualFree(), or HeapAlloc()/HeapFree() with the same heap handle.
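For illustration, a common pattern is to let the DLL export a matching deallocation function, so each side frees only what it allocated. A sketch; freeOutput is a hypothetical name, not part of the question's API:
// exported by the DLL, next to optimize()
extern "C" __declspec(dllexport) void freeOutput(OutputCPLEX* out); // frees out->outputData etc. using the DLL's own runtime

// in the main program (P):
// optimize(input, output);
// ... copy what you need out of output->outputData ...
// freeOutput(output);  // hand the arrays back to the DLL's heap instead of deleting them here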

This looks odd:
int m = output->j;
sol = new int[n];
sol = output->outputData;
As far as I understand it, you return the size in m but allocate with n,
and then you leak the freshly allocated array by pointing sol at outputData instead.
I think you meant something like:
int m = output->j;
sol = new int[m];
memcpy(sol, output->outputData, sizeof(int)*m); // memcpy is declared in <cstring>
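Putting the answers together, a corrected version of the calling code could look like the sketch below. It assumes the struct destructors are changed to use delete[], that input takes ownership of var (so the explicit delete[] var goes away), and - per the caveat above - that output's arrays are legitimately freeable on this side of the DLL boundary:
input = new InputCPLEX;
output = new OutputCPLEX;
int n = X;
var = new int[n];
// ... fill var ...
input->i = n;
input->inputData = var;        // ownership passes to input
optimize(input, output);
int m = output->j;
sol = new int[m];              // size taken from the output, not n
memcpy(sol, output->outputData, sizeof(int) * m); // copy instead of aliasing the DLL's array
// ... use sol ...
delete[] sol;                  // ours: allocated with new[] above
delete input;                  // its destructor now does delete[] inputData exactly once
delete output;                 // its destructor now does delete[] outputData exactly once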


new and delete on different scope

Consider the following code:
int *expand_array(int *old_arr, int array_length)
{
    int *new_arr = new int[array_length + 3];
    for (int counter = 0; counter < array_length; counter++)
        new_arr[counter] = old_arr[counter];
    delete[] old_arr;
    return new_arr;
}
int main()
{
    int *my_first_arr = new int[4];
    int *my_expanded_arr = expand_array(my_first_arr, 4);
    delete[] my_expanded_arr;
}
Will there be any memory leak here?
And to generalize the question:
if the pointer returned from a new expression is copied, passed to a function, or assigned to a different pointer, will delete copied_pointer release the memory?
Your code is perfectly valid C++ and has no memory leaks. You can copy a pointer as often as you want, and deleting any of those copies in any scope has the same effect.
It is still bad practice, however, and you shouldn't write code like this. Use of raw new and delete is too error prone and will make for poorly maintainable code. Instead, use RAII wrapper types like std::unique_ptr, std::shared_ptr or, in this case, std::vector.
The code in your question is basically equivalent to this.
#include <vector>

int main()
{
    auto numbers = std::vector<int>(4);
    numbers.resize(7);
}
Much simpler, no?
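If you really want the raw-array flavour rather than a vector, std::unique_ptr<int[]> gives the same safety. A sketch of my own, assuming C++14 for std::make_unique:
#include <memory>

int main()
{
    auto first = std::make_unique<int[]>(4);    // zero-initialized; delete[] is called automatically
    auto expanded = std::make_unique<int[]>(7); // "expanding" = new buffer + copy, as in the question
    for (int i = 0; i < 4; ++i)
        expanded[i] = first[i];
    // no delete anywhere: both buffers are released when the smart pointers go out of scope
}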
Why do you believe that there would be a memory leak? Of course there wouldn't be.
But there is a different bug in this code. If array_length is larger than the actual size of old_arr, the copy loop runs off past the end of the old array, resulting in undefined behavior and possibly a crash (if the old array holds 2 ints and array_length is 10, the for loop will attempt to copy 10 values from an array that only has 2).
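For what it's worth, a sketch of expand_array with the sizes made explicit (my variant, not the asker's code) avoids that overrun:
// Copy only as many elements as the old array actually holds.
int* expand_array(int* old_arr, int old_length, int new_length)
{
    int* new_arr = new int[new_length];
    for (int i = 0; i < old_length && i < new_length; ++i)
        new_arr[i] = old_arr[i];
    delete[] old_arr;
    return new_arr;
}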

What if I delete an array once in C++, but allocate it multiple times?

Suppose I have the following snippet.
int main()
{
    int num;
    int* cost;
    while (cin >> num)
    {
        int sum = 0;
        if (num == 0)
            break;
        // Dynamically allocate the array and set to all zeros
        cost = new int[num];
        memset(cost, 0, num);
        for (int i = 0; i < num; i++)
        {
            cin >> cost[i];
            sum += cost[i];
        }
        cout << sum/num;
    }
    delete[] cost;
    return 0;
}
Although I can move the delete statement inside the while loop for my code, for understanding purposes I want to know what happens with the code as it's written. Does C++ allocate different memory spaces each time I use operator new?
Does operator delete only delete the last allocated cost array?
Does C++ allocate different memory spaces each time I use operator new?
Yes.
Does operator delete only delete the last allocated cost array?
Yes.
You've lost the only pointers to the others, so they are irrevocably leaked. To avoid this problem, don't juggle pointers, but use RAII to manage dynamic resources automatically. std::vector would be perfect here (if you actually needed an array at all; your example could just keep reading and re-using a single int).
I strongly advise you not to use "C idioms" in a C++ program. Let the std library work for you: that's why it's there. If you want "an array (vector) of n integers," then that's what std::vector is all about, and it "comes with batteries included." You don't have to monkey-around with things such as "setting a maximum size" or "setting it to zero." You simply work with "this thing," whose inner workings you do not [have to ...] care about, knowing that it has already been thoroughly designed and tested.
Furthermore, when you do this, you're working within C++'s existing framework for memory-management. In particular, you're not doing anything "out-of-band" within your own application that the standard library doesn't know about, and which might well mess things up.
C++ gives you a very comprehensive library of fast, efficient, robust, well-tested functionality. Leverage it.
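For illustration, here is the question's loop rewritten with std::vector - a sketch of the RAII approach described above:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int num;
    while (cin >> num && num != 0)
    {
        vector<int> cost(num);      // zero-initialized; freed automatically each iteration
        int sum = 0;
        for (int i = 0; i < num; i++)
        {
            cin >> cost[i];
            sum += cost[i];
        }
        cout << sum / num;
    }   // no delete[] needed: the vector releases its memory here
    return 0;
}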
There is no cost array in your code. In your code cost is a pointer, not an array.
The actual arrays in your code are created by the repeated new int [num] calls. Each call to new creates a new, independent, nameless array object that lives somewhere in dynamic memory. The new array, once created by new[], is accessible through the cost pointer. Since the array is nameless, that cost pointer is the only link you have to that nameless array created by new[]; you have no other means to access it.
And every time you execute cost = new int [num] in your cycle, you are creating a completely new, different array, breaking the link from cost to the previous array and making cost point to the new one.
Since cost was your only link to the old array, that old array becomes inaccessible. Access to it is lost forever; it becomes a memory leak.
As you correctly stated it yourself, your delete[] expression only deallocates the last array - the one cost ends up pointing to in the end. Of course, this is only true if your code ever executes the cost = new int [num] line. Note that your cycle might terminate without doing a single allocation, in which case you will apply delete[] to an uninitialized (garbage) pointer.
Yes. So you get a memory leak for each iteration of the loop except the last one.
When you use new, you allocate a new chunk of memory. Assigning the result of the new to a pointer just changes what this pointer points at. It doesn't automatically release the memory this pointer was referencing before (if there was any).
First off this line is wrong:
memset(cost, 0, num);
It assumes an int is only one char long. More typically it's four. You should use something like this if you want to use memset to initialise the array:
memset(cost, 0, num*sizeof(*cost));
Or better yet dump the memset and use this when you allocate the memory:
cost = new int[num]();
As others have pointed out the delete is incorrectly placed and will leak all memory allocated by its corresponding new except for the last. Move it into the loop.
Every time you allocate new memory for the array, the memory that has been previously allocated is leaked. As a rule of thumb you need to free memory as many times as you have allocated.
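For completeness, a sketch of the minimal raw-pointer fix: pair every new[] with a delete[] inside the loop, and use value-initialization instead of the broken memset:
while (cin >> num)
{
    if (num == 0)
        break;
    int* cost = new int[num]();  // () value-initializes, replacing the memset
    int sum = 0;
    for (int i = 0; i < num; i++)
    {
        cin >> cost[i];
        sum += cost[i];
    }
    cout << sum / num;
    delete[] cost;               // freed before the next allocation, so nothing leaks
}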

Confused about deleting dynamic memory allocated to array of struct

I'm having a memory leak issue and it's related to an array of structs inside a class (not sure if it matters that they're in a class). When I call delete on the struct, the memory is not cleared. When I use the exact same process with int and dbl it works fine and frees the memory as it should.
I've created very simple examples and they work correctly so it's related to something else in the code but I'm not sure what that could be. I never get any errors and the code executes correctly. However, the allocation / deallocation occurs in a loop so the memory usage continually rises.
In other words, here's a summary of the problem:
struct myBogusStruct {
    int bogusInt1, bogusInt2;
};
class myBogusClass {
public:
    myBogusStruct *bogusStruct;
};
int main(void) {
    int i, arraySize;
    double *bogusDbl;
    myBogusClass bogusClass;
    // arraySize is read in from an input file
    for(i = 0; i < 100; i++) {
        bogusDbl = new double[arraySize];
        bogusClass.bogusStruct = new myBogusStruct[arraySize];
        // bunch of other code
        delete [] bogusDbl;                 // this frees memory
        delete [] bogusClass.bogusStruct;   // this does not free memory
    }
}
When I remove the bunch of other code, both delete lines work correctly. When it's there, though, the second delete line does nothing. Again, I never get any errors from the code, just memory leaks. Also, if I replace arraySize with a fixed number like 5000 then both delete lines works correctly.
I'm not really sure where to start looking - what could possibly cause the delete line not to work?
There is no reason at all for you to either allocate or delete bogusDbl inside the for loop, because arraySize never changes inside the loop.
Same goes for bogusClass.bogusStruct. No reason to allocate/delete it at all inside the loop:
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
for (i = 0; i < 100; i++) {
    // bunch of other code
}
delete[] bogusDbl;
delete[] bogusClass.bogusStruct;
You should also consider using std::vector instead of using raw memory allocation.
Now to the possible reason why the second delete in the original code doesn't do anything: deleting a NULL pointer does, by definition, nothing - it's a no-op. So for debugging purposes, try introducing a test before the delete to see whether the pointer is NULL, and if so, abort(). (I'd use a debugger instead though, as it's much quicker to set up a watch expression there compared to writing debug code.)
In general though, we need to see that "bunch of other code".
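For instance, a throwaway check along the lines suggested above (a debugging sketch, not a fix):
#include <cstdlib>

// just before the delete that appears to do nothing:
if (bogusClass.bogusStruct == NULL)
    abort();   // if this fires, the "bunch of other code" nulled or reassigned the pointer
delete [] bogusClass.bogusStruct;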

deleting an array the wrong way [duplicate]

Possible Duplicate:
How could pairing new[] with delete possibly lead to memory leak only?
I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};
int _tmain(int argc, _TCHAR* argv[])
{
    for(;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile, because at compile time the pointer carries no information about whether it points to a single object or to an array. For example:
int* p;
int x;
cin >> x;
if (x == 0)
    p = new int;
else
    p = new int[10];
delete p; //correct or not? :)
Now, about running OK: this is called undefined behavior in C++. There is no guarantee what will happen - everything can run fine, you can get a segfault, you can get just plain wrong behavior, or your computer may decide to call 911. UB <=> no guarantee.
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when trying to delete an array. (See there, for instance.) You should get a warning after compiling, though.
The reason the Debug Memory Manager does not pick up on this error is probably that it is not implemented at the level of new/delete, but at the level of the memory manager that new/delete invoke to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing. That does not mean necessarily there was no leak or heap corruption. Also if you got away with it this time, it doesn't necessarily make it a safe thing to do.
Sometimes even a buffer overwrite is something you will "get away with" because the bytes you have written to were not used (maybe they are padding for alignment). Still you should not go around doing it.
Incidentally new T[1] is a form of new[] and still requires a delete[] even though in this instance there is only one element.
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want? It works well!" and proved that there was no memory leak.
However, that worked only with POD types. It looks like MS tries to fix up the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>
class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};
int CBlah::instCnt = 0;
int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.

Is delete p where p is a pointer to array always a memory leak?

Following a discussion in a software meeting, I've set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with Visual Studio 2008 running on Windows XP:
#include "stdafx.h"
#include "Windows.h"
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*1000; i++)
{
int* p = new int[1024*100000];
for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2;
Sleep(1000);
delete p;
}
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; the allocated memory did not steadily increase as expected.
I've modified my test program to allocate an array of a non-primitive type:
#include "stdafx.h"
#include "Windows.h"
struct aStruct
{
aStruct() : i(1), j(0) {}
int i;
char j;
} NonePrimitive;
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*100000; i++)
{
aStruct* p = new aStruct[1024*100000];
Sleep(1000);
delete p;
}
}
After running for 10 minutes there was no meaningful increase in memory.
I compiled the project with warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so that there is no difference between delete and delete[] in that environment?
delete p, where p points to an array, is undefined behaviour.
Specifically, when you allocate an array of a raw data type (ints), the compiler doesn't have a lot of extra work to do, so the allocation turns into a simple malloc(); that is why delete p will probably appear to work.
delete p is going to fail, typically, when:
p was a complex data type - delete p; won't know to call the individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap from the regular heap.
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand but delete[] p; will.
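To illustrate the second and third bullets, here is a sketch of a class-level overload (the names are my own illustration): once operator new[] does any custom bookkeeping, a plain delete bypasses the matching cleanup entirely.
#include <cstdio>
#include <cstdlib>
#include <cstddef>

struct Tracked
{
    // class-scoped array allocator with its own bookkeeping
    void* operator new[](std::size_t size)
    {
        std::printf("new[] of %u bytes\n", (unsigned)size);
        return std::malloc(size);
    }
    void operator delete[](void* p)
    {
        std::printf("delete[] releasing block\n");
        std::free(p);
    }
    int value;
};

// delete[] p goes through Tracked::operator delete[] above;
// a plain delete p would call the global scalar operator delete instead,
// skipping the class's bookkeeping (and invoking undefined behavior).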
No, it's undefined behavior. Don't do it - use delete[].
In VC++ 7 to 9 it happens to work when the type in question has trivial destructor, but it might stop working on newer versions - usual stuff with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of how you allocated the objects, as arrays or plain objects, and magically adds [] to your delete. It probably compiles delete p; to the same code as if you allocated with p = new int, and, as I said, for some reason it works. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway - it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it will possibly work as intended, it is undefined, as there are differences in exactly how the two forms work. So you shouldn't use it, even for built-in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to free. However, the C++ side is unlikely to be cleaned up correctly; I bet only the destructor of the first object is called.
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete[] can cause memory leaks if it is used on an array of objects that themselves use dynamic memory allocation, like the following:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};
int main()
{
    AClass * p = new AClass[1000];
    delete p; // wrong
    return 0;
}