Program crashes when using delete - C++

I've got a weird issue when using Valgrind that confuses me a lot.
I'm running on Ubuntu 18.04, C++17, Valgrind version 3.13.0.
Consider this struct:
struct KelvinLine {
    float m;
    float b;
    float epsilon;
    KelvinLine() {
        m = 0;
        b = 0;
        epsilon = 0;
    }
};
My test main looks something like this:
int main() {
    auto arrLen = 10;
    KelvinLine* kelvinArray = new KelvinLine[arrLen];
    delete kelvinArray;
    return 0;
}
When running it without Valgrind everything is OK, but when running under Valgrind it crashes with an unhandled instruction error. If I use malloc instead of new (with the proper size for the struct array I wanted), it runs normally.
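For reference, the malloc variant mentioned above would look roughly like this (a sketch of what the question describes, not the asker's actual code; note that malloc returns raw memory, so the KelvinLine constructor never runs):

#include <cstdlib>

int main() {
    auto arrLen = 10;
    KelvinLine* kelvinArray = (KelvinLine*) std::malloc(arrLen * sizeof(KelvinLine));
    // members are uninitialized here: no constructor was called
    std::free(kelvinArray);
    return 0;
}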
Has anyone dealt with this issue before?
Thanks.
Edit:
Fixed this issue by disabling AVX512F in my CMake configuration.

Whilst mixing delete and delete[] is UB, in cases like this (an almost-POD struct) it is likely to be harmless and never cause a crash.
The main difference between these two operators is that the array form will call the destructor on each element, whilst the scalar form will call it only once (on the first element, if it is an array). That can potentially result in a resource leak. After the destructor calls, the memory for the instance/array gets freed. At least for the two implementations that I'm somewhat familiar with (LLVM and GCC), that just means a call to free().
Since this struct has no user-defined destructor, the two forms of operator delete are essentially equivalent.
That said, this isn't safe to assume in general, and there is no reason not to call the correct operator, delete[].
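To make the difference concrete, here is a minimal sketch (mine, not from the question) of why the array form matters once the destructor does real work:

#include <cstdio>

struct Noisy {
    ~Noisy() { std::puts("destructor ran"); }
};

int main() {
    Noisy* a = new Noisy[3];
    delete[] a;   // prints "destructor ran" three times, then frees the block
    // delete a;  // UB: would run at most one destructor before freeing
    return 0;
}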

The problem is that you're deallocating the memory using the wrong delete expression, leading to undefined behavior. Since you used the array form of the new expression, you must use the corresponding array form of the delete expression as well:
//-----vv-----------> use the array form of the delete expression
delete []kelvinArray;


Segmentation fault with struct array using calloc

I have a struct:
typedef struct {
    int *issueTypeCount;
} issueTypeTracker;
I've declared a variable of type issueTypeTracker:
issueTypeTracker *typeTracker;
I've allocated the necessary memory:
typeTracker = (issueTypeTracker*) malloc(sizeof(issueTypeTracker) * issueTypeList.count());
typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int));
And then when I try to do something with it, I get a segmentation fault
while(qry.next()) {                                       // while there are records in the query
    for(j = 0; j < locationList.count(); j++) {           // no problem
        if(qry.value(1) == locationList[j]) {             // no problem
            for(i = 0; i < issueTypeList.count(); i++) {  // no problem
                typeTracker[j].issueTypeCount[i]++;       // seg fault as soon as we hit this line
            }
        }
    }
}
I figured it would be a problem with the way I've allocated memory, but as far as I'm aware I've done it correctly. I've tried the solutions proposed in this question, but it still did not work.
I've tried replacing typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int)); with:
for(j = 0; j < issueTypeList.count(); j++) {
    typeTracker[j].issueTypeCount = (int*) calloc(65536, sizeof(int));
}
But I still get the same issue. This happens with any value of j or i, even zero.
This is a lot more trouble than it's worth and a poor implementation of what I'm trying to do anyway, so I'm probably going to scrap this entire thing and just use a multidimensional array. Even so, I'd like to know why this doesn't work, so that in the future I don't have trouble when I'm faced with a similar scenario.
You have several issues. Firstly, you're not checking your allocations for success, so any of your pointers could be NULL/nullptr.
Secondly,
typeTracker->issueTypeCount = (int*) calloc(65536,sizeof(int));
is equivalent to
typeTracker[0].issueTypeCount = (int*) calloc(65536,sizeof(int));
so you initialized the issueTypeCount member only for the first issueTypeTracker in your array. For the other issueTypeList.count() - 1 elements of the array, the pointer is uninitialized.
Therefore this line:
typeTracker[j].issueTypeCount[i]++; //seg fault as soon as we hit this line
will invoke UB for any j > 0. Obviously, if your allocation failed, you have UB for j == 0 as well.
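A corrected allocation sequence, as a sketch using the names from the question (the failure handling shown is just one possible policy):

typeTracker = (issueTypeTracker*) malloc(sizeof(issueTypeTracker) * issueTypeList.count());
if (typeTracker == NULL) {
    /* handle allocation failure, e.g. abort or return an error */
}
for (j = 0; j < issueTypeList.count(); j++) {
    typeTracker[j].issueTypeCount = (int*) calloc(65536, sizeof(int)); // one counter array per tracker
    if (typeTracker[j].issueTypeCount == NULL) {
        /* handle allocation failure */
    }
}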

Confused about deleting dynamic memory allocated to array of struct

I'm having a memory leak issue related to an array of structs inside a class (not sure if it matters that they're in a class). When I call delete on the struct array, the memory is not freed. When I use the exact same process with int and double arrays, it works fine and frees the memory as it should.
I've created very simple examples and they work correctly, so it's related to something else in the code, but I'm not sure what that could be. I never get any errors and the code executes correctly. However, the allocation/deallocation occurs in a loop, so the memory usage continually rises.
In other words, here's a summary of the problem:
struct myBogusStruct {
    int bogusInt1, bogusInt2;
};

class myBogusClass {
public:
    myBogusStruct *bogusStruct;
};

int main(void) {
    int i, arraySize;
    double *bogusDbl;
    myBogusClass bogusClass;

    // arraySize is read in from an input file

    for(i = 0; i < 100; i++) {
        bogusDbl = new double[arraySize];
        bogusClass.bogusStruct = new myBogusStruct[arraySize];

        // bunch of other code

        delete [] bogusDbl;                // this frees memory
        delete [] bogusClass.bogusStruct;  // this does not free memory
    }
}
When I remove the bunch of other code, both delete lines work correctly. When it's there, though, the second delete line does nothing. Again, I never get any errors from the code, just memory leaks. Also, if I replace arraySize with a fixed number like 5000, then both delete lines work correctly.
I'm not really sure where to start looking - what could possibly cause the delete line not to work?
There is no reason at all for you to either allocate or delete bogusDbl inside the for loop, because arraySize never changes inside the loop.
The same goes for bogusClass.bogusStruct - there is no reason to allocate/delete it inside the loop at all:
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
for (i = 0; i < 100; i++) {
    // bunch of other code
}
delete[] bogusDbl;
delete[] bogusClass.bogusStruct;
You should also consider using std::vector instead of using raw memory allocation.
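As a sketch of that suggestion (the runIteration wrapper is just for illustration, and arraySize is assumed to have been read from the input file as in the question), std::vector removes the need for delete entirely:

#include <vector>

void runIteration(int arraySize) {
    std::vector<double> bogusDbl(arraySize);
    std::vector<myBogusStruct> bogusStructs(arraySize);
    // bunch of other code
}   // storage is released automatically when the vectors go out of scope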
Now to the possible reason why the second delete in the original code doesn't do anything: deleting a NULL pointer does, by definition, nothing - it's a no-op. So for debugging purposes, try introducing a test before the delete to see if the pointer is NULL, and if so, abort(). (I'd use a debugger instead, though, as it's much quicker to set up a watch expression there than to write debug code.)
In general though, we need to see that "bunch of other code".

C++ Struct with pointers : constructor and destructor

I am having a problem with a simulation program that calls a DLL to perform an optimization task. After studying this issue for some time, I think my problem lies in the destructor I use to free memory after the DLL has returned the desired information. The simulation program was developed with Borland C++ Builder v6 and the DLL with MS Visual C++ 2005.
For the simulation program (P) and the DLL to exchange data, I created two structures, InputCPLEX and OutputCPLEX, and a function optimize that takes two arguments: one object of type InputCPLEX and one of type OutputCPLEX. Both structures are declared in a header file structures.h which belongs to both the P project and the DLL project.
Both the InputCPLEX and OutputCPLEX structures have int and int* members, so the file structures.h basically looks like:
//structures.h
struct InputCPLEX {
public:
    int i;
    int* inputData;
};

struct OutputCPLEX {
public:
    int j;
    int* outputData;
};
The idea is that during the simulation process (the execution of P), I periodically call the DLL to solve an optimization problem: inputData is the array corresponding to the variables of my optimization problem and outputData is the array of optimal values for those variables. I know that it would have been easier to use STL containers such as vector<int>; however - correct me if I am wrong - it seems difficult to exchange STL objects between two different compilers.
Here is how things look in my main file (in P):
//main.h
InputCPLEX* input;
OutputCPLEX* output;
int* var;
int* sol;
//main.cpp
[...] //lots of code
input = new InputCPLEX;
output = new OutputCPLEX;
int n = X; //where X is an integer
var = new int[n];
[...] //some code to fill var
input->i = n;
input->inputData = var;
optimize(input,output); //calls the DLL
int m = output->j;
sol = new int[n];
sol = output->outputData;
[...] //some code to use the optimized data
delete[] var;
delete[] sol;
delete input;
delete output;
[...] //lots of code
For more than a year I had been using this code without any constructor or destructor in the file structures.h, so no initialization of the structure members was performed. As you may have guessed, I am no expert in C++ - in fact, it's quite the opposite. I also want to underline that I did not write most of the simulation program, just some functions; the program was developed over more than 10 years by several developers, and the result is quite messy.
However, everything was working just fine until recently. I decided to provide more information to the DLL (for optimization purposes), and consequently the simulation program has been crashing systematically when running large simulations (involving large data sets). The extra information is passed through additional pointers in both structures; my guess was that the program was leaking memory, so I tried to write a constructor and a destructor so that the memory allocated to the structures input and output could be properly managed. I tried the following code, which I found searching the internet:
//structures.h
struct InputCPLEX {
public:
    int i;
    int* inputData;
    int* inputData2; // extra info
    int* inputData3; // extra info
    InputCPLEX(): i(0), inputData(0), inputData2(0), inputData3(0) {}
    ~InputCPLEX() {
        if (inputData) delete inputData;
        if (inputData2) delete inputData2;
        if (inputData3) delete inputData3;
    }
};

struct OutputCPLEX {
public:
    int j;
    int* outputData;
    int* outputData2;
    int* outputData3;
    OutputCPLEX(): j(0), outputData(0), outputData2(0), outputData3(0) {}
    ~OutputCPLEX() {
        if (outputData) delete outputData;
        if (outputData2) delete outputData2;
        if (outputData3) delete outputData3;
    }
};
But it does not seem to work: the program crashes even faster, after only a short time. Can someone help me identify the issues in my code? I know there may be other factors affecting the execution of my program, but if I remove both constructors and destructors from the structures.h file, the simulation program is still able to execute small simulations involving small data sets.
Thank you very much for your assistance,
David.
You have to use new and delete consistently: if something was acquired by new[], you should release it with delete[]; if by new, then with delete. In your code the member arrays are acquired with new[], but the destructors release them with scalar delete.
BTW, you do not have to check a pointer for null before deletion - delete handles null pointers with no problems.
I see several problems in your code:
1) Memory leak/double deletion:
sol = new int[n];
sol = output->outputData;
Here you overwrite the sol pointer right after initialization, so the data allocated by new int[n] is leaked. You also delete the pointer stored in sol twice - the second time in the destructor of output. The same problem exists with var: you delete it twice, once by the explicit delete[] and once in the destructor of input.
The double-deletion problem appeared only after you added destructors with delete; it looks like it was not a problem before.
Also, as #Riga mentioned, you use new[] to allocate the arrays but delete instead of delete[] in the destructors. This is not correct and is undefined behavior, although it doesn't look like the cause of the crash: in the real world, most compilers implement delete and delete[] identically for built-in and POD types. Serious problems arise only when you delete an array of objects with non-trivial destructors.
2) Where is output->outputData allocated? If in the DLL, that is another problem: you usually cannot safely deallocate memory in your main program if it was allocated in a DLL built with another compiler. The reason is the different new/delete implementations and the different heaps used by the runtimes of the main program and the DLL.
You should always allocate and deallocate memory on the same side, or use some common lower-level API - e.g., VirtualAlloc()/VirtualFree(), or HeapAlloc()/HeapFree() with the same heap handle.
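One common way to follow the same-side rule is for the DLL to export a matching free function next to optimize(). This is only a sketch of the pattern, not the project's actual API - free_output_data() is a hypothetical name:

// In the DLL, exported alongside optimize():
extern "C" __declspec(dllexport)
void free_output_data(OutputCPLEX* out) {
    delete[] out->outputData;  // freed by the same runtime/heap that allocated it
    out->outputData = 0;
}

// In the main program, instead of delete[] sol:
// free_output_data(output);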
This looks odd:
int m = output->j;
sol = new int[n];
sol = output->outputData;
As far as I understand it, the size is returned in m, but you allocate with n.
Then you overwrite the array by setting the pointer (sol) to outputData.
I think you meant something like:
int m = output->j;
sol = new int[m];
memcpy(sol,output->outputData,sizeof(int)*m);

deleting an array the wrong way [duplicate]

Possible Duplicate:
How could pairing new[] with delete possibly lead to memory leak only?
I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for(;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile OK, because a pointer carries no compile-time information about whether it points to an array or a single object. For example:
int x;
int* p;
std::cin >> x;
if (x == 0)
    p = new int;
else
    p = new int[10];
delete p; // correct or not? :)
Now, about running OK. This is called undefined behavior in C++: there is no guarantee about what will happen - everything can run OK, you can get a segfault, you can get just plain wrong behavior, or your computer may decide to call 911. UB <=> no guarantee.
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when it detects an array being deleted (see there, for instance), though you should get a warning after compiling.
The reason the debug memory manager does not pick up on this error is probably that it is not implemented at the level of new/delete, but at the level of the memory manager that new/delete invoke to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
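You can observe that boundary yourself by replacing the global allocation functions. This is my own illustrative sketch, not part of the answer (error handling omitted; the exact extra byte count is implementation-specific) - at this level only a raw byte count arrives, and for element types with non-trivial destructors the array form typically requests a few extra bookkeeping bytes:

#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new(std::size_t n) {
    std::printf("scalar new: %zu bytes\n", n);
    return std::malloc(n);
}
void* operator new[](std::size_t n) {
    std::printf("array new:  %zu bytes\n", n); // may include a hidden element count
    return std::malloc(n);
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete[](void* p) noexcept { std::free(p); }

struct WithDtor { int v; ~WithDtor() {} };

int main() {
    WithDtor* a = new WithDtor[4]; // typically prints more than 4 * sizeof(WithDtor)
    delete[] a;
}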
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing; that does not necessarily mean there was no leak or heap corruption. And even if you got away with it this time, that doesn't make it a safe thing to do.
Sometimes you will "get away with" even a buffer overwrite, because the bytes you wrote to were not used (maybe they were padding for alignment). Still, you should not go around doing it.
Incidentally, new T[1] is a form of new[] and still requires a delete[], even though in this instance there is only one element.
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want? It works well!" and proved that there was no memory leak.
However, it worked only with POD types. It looks like MS tries to fix the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>

class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt = 0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed, you can never really be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.

Is delete p where p is a pointer to array always a memory leak?

Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with visual studio 2008 running on windows XP:
#include "stdafx.h"
#include "Windows.h"
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*1000; i++)
{
int* p = new int[1024*100000];
for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2;
Sleep(1000);
delete p;
}
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected.
I modified my test program to allocate a non-primitive-type array:
#include "stdafx.h"
#include "Windows.h"
struct aStruct
{
aStruct() : i(1), j(0) {}
int i;
char j;
} NonePrimitive;
const unsigned long BLOCK_SIZE = 1024*100000;
int _tmain()
{
for (unsigned int i =0; i < 1024*100000; i++)
{
aStruct* p = new aStruct[1024*100000];
Sleep(1000);
delete p;
}
}
After running for 10 minutes there was no meaningful increase in memory.
I compiled the project at warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
delete p, where p points to an array, is undefined behaviour.
Specifically, when you allocate an array of raw data types (ints), the compiler doesn't have a lot of work to do, so it turns the allocation into a simple malloc(), and delete p will probably work.
delete p is typically going to fail when:
p was a complex data type - delete p; won't know to call the individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap from the regular heap.
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand, but delete [] p; will.
No, it's undefined behavior. Don't do it - use delete[].
In VC++ 7 to 9 it happens to work when the type in question has a trivial destructor, but it might stop working in newer versions - the usual stuff with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of how you allocated the objects, as arrays or plain objects, and magically adds [] to your delete. It probably compiles delete p; to the same code as if you had allocated with p = new int, and, as I said, for some reason it works. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway - it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it will possibly work as intended, it is undefined, as there are some differences in exactly how the two forms work. So you shouldn't use it, even for built-in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to free. However, the C++ part is unlikely to be cleaned up correctly. I bet that only the destructor of the first object is called.
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete[] can cause memory leaks if it is used on an array of objects that themselves use dynamic memory allocation, as follows:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};

int main()
{
    AClass * p = new AClass[1000];
    delete p; // wrong: only the first element's destructor runs - should be delete [] p
    return 0;
}