I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}

private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for(;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile OK, because a pointer carries no compile-time information about whether it points to a single object or to an array. For example:
int x;
int* p;
cin >> x;
if (x == 0)
    p = new int;
else
    p = new int[10];
delete p; // correct or not? :)
Now, about running OK. This is called undefined behavior in C++; that is, there is no guarantee what will happen. Everything can run OK, you can get a segfault, you can get just plain wrong behavior, or your computer may decide to call 911. UB <=> no guarantee.
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when it detects that an array is being deleted (see the documentation there, for instance). Though you should get a warning after compiling.
The reason the Debug Memory Manager does not pick up on this error is probably that it is not implemented at the level of new/delete, but at the level of the memory manager that gets invoked by new/delete to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
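For the curious, here is a minimal sketch (my own illustration, not how the VS debug heap is actually implemented) that makes this observable: replacing the global allocation functions with logging wrappers shows that, on many implementations, both the scalar and the array forms simply forward to the same underlying malloc/free heap, so the heap itself has no way to tell the two apart.

#include <cstdio>
#include <cstdlib>
#include <new>

// Thin logging wrappers around malloc/free for the global allocation functions.
// (The default sized variants forward to these.)
void* operator new(std::size_t n)
{
    std::printf("operator new(%zu)\n", n);
    if (void* p = std::malloc(n ? n : 1)) return p;
    throw std::bad_alloc();
}
void* operator new[](std::size_t n)
{
    std::printf("operator new[](%zu)\n", n);
    if (void* p = std::malloc(n ? n : 1)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept
{
    std::printf("operator delete\n");
    std::free(p);
}
void operator delete[](void* p) noexcept
{
    std::printf("operator delete[]\n");
    std::free(p);
}

int main()
{
    int* p = new int[4]; // typically prints: operator new[](16)
    delete[] p;          // typically prints: operator delete[]
    return 0;
}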
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing. That does not necessarily mean there was no leak or heap corruption. Also, if you got away with it this time, it doesn't necessarily make it a safe thing to do.
Sometimes even a buffer overwrite is something you will "get away with" because the bytes you have written to were not used (maybe they are padding for alignment). Still you should not go around doing it.
Incidentally, new T[1] is a form of new[] and still requires delete[], even though in this instance there is only one element.
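A minimal illustration of that point (std::string here just stands in for any class type):

#include <string>

int main()
{
    std::string* p = new std::string[1]; // still an array allocation, even with one element
    delete[] p;                          // so it still needs delete[]
    return 0;
}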
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want, it works well!" and proved that there was no memory leak.
However, it worked only with POD types. It looks like MS tries to fix the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>

class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt = 0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.
Related
I've got this weird issue when using Valgrind which confuses me a lot.
I'm running on Ubuntu 18.04, C++17, Valgrind version 3.13.0.
Consider this struct:
struct KelvinLine {
    float m;
    float b;
    float epsilon;

    KelvinLine() {
        m = 0;
        b = 0;
        epsilon = 0;
    }
};
My test main looks something like this:
int main() {
    auto arrLen = 10;
    KelvinLine* kelvinArray = new KelvinLine[arrLen];
    delete kelvinArray;
    return 0;
}
When running it without Valgrind everything is OK, but once running with Valgrind it crashes with an unhandled instruction error. If instead of new I use malloc (with the proper size of the struct array that I wanted), it continues its run like normal.
Has anyone dealt with this issue before?
Thanks
Edit:
Fixed this issue by disabling AVX512F in my CMake configuration.
Whilst mixing delete and delete[] is UB, in cases like this (an almost-POD struct) it is likely to be harmless and never cause a crash.
The main difference between the two operators is that the array form will call the destructor on each element, whilst the scalar form will only call it once (on the first element, if the pointer is to an array). That can potentially result in a resource leak. After calling the destructor(s), the memory for the instance/array gets freed. At least for the two implementations that I'm somewhat familiar with (LLVM and GCC), that just means a call to free().
Since this struct has no user-defined destructor the two forms of operator delete are essentially equivalent.
That said, this isn't safe to assume in general and there is no reason not to call the correct operator delete [].
The problem is that you're deallocating the memory using the wrong delete expression, which leads to undefined behavior. Since you've used the array form of the new expression, you must use the corresponding array form of the delete expression as well:
//-----vv------------->use array form of delete expression
delete []kelvinArray;
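As an aside (my own sketch, not part of the original test), std::vector sidesteps the whole delete/delete[] question, since it destroys and frees its elements correctly when it goes out of scope:

#include <vector>

struct KelvinLine {
    float m;
    float b;
    float epsilon;
    KelvinLine() {
        m = 0;
        b = 0;
        epsilon = 0;
    }
};

int main() {
    auto arrLen = 10;
    std::vector<KelvinLine> kelvinArray(arrLen); // default-constructs arrLen elements
    return 0;                                    // no manual delete needed
}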
Okay, so I was experimenting with pointers in C++, as I have started to code again after almost a year and a half break (school stuff). So please spare me if this seems naive to you; I am just shaking off the rust.
Anyway, I was playing around with pointers in VS 2019, and I noticed something that started to bug me.
This is the code that I wrote:
#include <iostream>

int main() {
    int* i = new int[4];
    *i = 323;
    *(i + 1) = 453;
    std::cout << *i << std::endl;
    std::cout << *(i + 1) << std::endl;
    delete i;
}
Something seems odd, right? Don't worry, that delete is intentional, and is kind of the point of this question. Now I expected it to cause some memory mishaps, since this is not the way we delete an array on the heap, BUT to my surprise it did not (I did this in both Debug and Release and had the same observation).
(Screenshots of the debugger memory view showed the array being allocated and then the first and second modifications.) Now I was expecting some sort of mishap at the delete, since I did not delete it the way it is supposed to be deleted, BUT the screenshot of the incorrect delete showed nothing going wrong. On another run I actually used the delete operator correctly and it did the same thing.
I got the same results when I tried to allocate with malloc and use delete to free it.
So my question is: why does my code not mess up? I mean, if this is how it works, I could just use delete (pointer) to wipe the entire array.
The gist of my question is: what does the new operator do under the hood?
What happens on allocating with “new []” and deleting with just “delete”
The behaviour of the program is undefined.
Now I expected it to do some memory mishaps
Your expectation is misguided. Undefined behaviour does not guarantee mishaps in memory or otherwise. Nothing about the behaviour of the program is guaranteed.
I mean if this is how it works
This is how you observed it to "work". It doesn't mean that it will necessarily always work like that. Welcome to undefined behaviour.
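For completeness, here is a sketch of the same experiment with the matching delete[], plus std::unique_ptr<int[]> as an alternative that calls delete[] for you (both are illustrations, not the only correct options):

#include <iostream>
#include <memory>

int main() {
    int* i = new int[4];
    i[0] = 323;                    // same as *i = 323
    i[1] = 453;                    // same as *(i + 1) = 453
    std::cout << i[0] << std::endl;
    std::cout << i[1] << std::endl;
    delete[] i;                    // matches new[]

    std::unique_ptr<int[]> j(new int[4]); // released with delete[] automatically
    j[0] = 323;
    std::cout << j[0] << std::endl;
    return 0;
}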
I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this:
Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why.
The program:
#include <iostream.h>

class cls
{
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main()
{
    cls *p1, *p2;
    p1 = new cls;
    p2 = (cls*)malloc(sizeof(cls));
    int x = p1->get_x() + p2->get_x();
    cout << x;
    return 0;
}
My first instinct was to answer with "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23 I realize that that answer might not be correct.
The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of the memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or if class members are initialized with 0 when it is malloc-ed.
Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
No, p2->x can be anything after the call to malloc. It just happens to be 0 in your test environment.
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
What everyone has told you: new combines the call to get memory from the free store with a call to the object's constructor. malloc only does half of that.
Fixing it: while the sample program is wrong, it isn't always wrong to use malloc with classes. It is perfectly valid in a shared-memory situation; you just have to add an in-place (placement) call to new:
p2 = (cls*)malloc(sizeof(cls));
new (p2) cls;
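If you go that route, you also take on the job of running the destructor yourself and releasing the raw memory with free, not delete. A self-contained sketch of the full lifecycle, reusing the cls class from the question:

#include <cstdlib>
#include <new>       // for placement new

class cls
{
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main()
{
    cls* p2 = (cls*)std::malloc(sizeof(cls));
    new (p2) cls;          // placement new: run the constructor in the malloc'd storage
    int x = p2->get_x();   // well-defined now, returns 23
    p2->~cls();            // call the destructor explicitly
    std::free(p2);         // release the raw storage; do NOT use delete here
    return x == 23 ? 0 : 1;
}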
new calls the constructor, malloc will not. So your object will be in an unknown state.
The actual behaviour is unknown, because new acts pretty much like malloc + a constructor call.
In your code the second part is missing; hence it could work in one case and not in another, but you can't say for sure.
Why can't 0 be an arbitrary number too? Are you running in Debug mode? What compiler?
VC++ pre-fills newly allocated heap memory with 0xCD byte values (in Debug mode, of course), so you would not have obtained a zero for an answer if you were using it.
malloc makes no guarantee to zero out the memory it allocates, and the result of the program is undefined.
Otherwise, there are many other things that keep this program from being correct C++: cout is in namespace std, malloc needs to be included through #include <cstdlib>, and iostream.h isn't standard-compliant either.
Following a discussion in a software meeting, I've set out to find out whether deleting a dynamically allocated array of primitives with plain delete will cause a memory leak.
I have written this tiny program and compiled it with Visual Studio 2008 running on Windows XP:
#include "stdafx.h"
#include "Windows.h"

const unsigned long BLOCK_SIZE = 1024*100000;

int _tmain()
{
    for (unsigned int i = 0; i < 1024*1000; i++)
    {
        int* p = new int[1024*100000];
        for (int j = 0; j < BLOCK_SIZE; j++) p[j] = j % 2;
        Sleep(1000);
        delete p;
    }
}
I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected.
I've modified my test program to allocate an array of a non-primitive type:
#include "stdafx.h"
#include "Windows.h"

struct aStruct
{
    aStruct() : i(1), j(0) {}
    int i;
    char j;
} NonePrimitive;

const unsigned long BLOCK_SIZE = 1024*100000;

int _tmain()
{
    for (unsigned int i = 0; i < 1024*100000; i++)
    {
        aStruct* p = new aStruct[1024*100000];
        Sleep(1000);
        delete p;
    }
}
After running for 10 minutes there was no meaningful increase in memory.
I compiled the project with warning level 4 and got no warnings.
Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so that there is no difference between delete and delete[] in that environment?
delete p, where p points to an array, is undefined behaviour.
Specifically, when you allocate an array of raw data types (ints), the compiler doesn't have a lot of work to do, so it turns it into a simple malloc(), so delete p will probably work.
delete p is going to fail, typically, when:
p was a complex data type - delete p; won't know to call individual destructors.
a "user" overloads operator new[] and delete[] to use a different heap to the regular heap (see the sketch after this list).
the debug runtime overloads operator new[] and delete[] to add extra tracking information for the array.
the compiler decides it needs to store extra RTTI information along with the object, which delete p; won't understand, but delete []p; will.
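To make the second case above concrete, here is a sketch of a class whose array allocations go through its own operator new[] (malloc just stands in for a custom pool here); a plain delete p; would bypass Pooled::operator delete[] entirely and hand the pointer to the regular heap instead:

#include <cstdio>
#include <cstdlib>
#include <new>

struct Pooled
{
    // Class-scope array allocation functions; imagine these drawing from a private pool.
    static void* operator new[](std::size_t n)
    {
        std::printf("Pooled::operator new[](%zu)\n", n);
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    static void operator delete[](void* p) noexcept
    {
        std::printf("Pooled::operator delete[]\n");
        std::free(p);
    }
    int v;
};

int main()
{
    Pooled* p = new Pooled[4]; // allocates via Pooled::operator new[]
    delete[] p;                // deallocates via Pooled::operator delete[] -- the match
    // delete p;               // would go to the global operator delete instead: UB
    return 0;
}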
No, it's undefined behavior. Don't do it - use delete[].
In VC++ 7 to 9 it happens to work when the type in question has a trivial destructor, but it might stop working on newer versions; the usual story with undefined behavior. Don't do it anyway.
It's called undefined behaviour; it might work, but you don't know why, so you shouldn't stick with it.
I don't think Visual Studio keeps track of how you allocated the objects, as arrays or plain objects, and magically adds [] to your delete. It probably compiles delete p; to the same code as if you allocated with p = new int, and, as I said, for some reason it works. But you don't know why.
One answer is that yes, it can cause memory leaks, because it doesn't call the destructor for every item in the array. That means that any additional memory owned by items in the array will leak.
The more standards-compliant answer is that it's undefined behaviour. The compiler, for example, has every right to use different memory pools for arrays than for non-array items. Doing the new one way but the delete the other could cause heap corruption.
Your compiler may make guarantees that the standard doesn't, but the first issue remains. For POD items that don't own additional memory (or resources like file handles) you might be OK.
Even if it's safe for your compiler and data items, don't do it anyway - it's also misleading to anyone trying to read your code.
No, you should use delete[] when dealing with arrays.
Just using delete won't call the destructors of the objects in the array. While it will possibly work as intended it is undefined as there are some differences in exactly how they work. So you shouldn't use it, even for built in types.
The reason it seems not to leak memory is that delete is typically based on free, which already knows how much memory it needs to free. However, the C++ part is unlikely to be cleaned up correctly. I bet that only the destructor of the first object is called.
Using delete with [] tells the compiler to call the destructor on every item of the array.
Not using delete[] can cause memory leaks if it is used on an array of objects that use dynamic memory allocation, as follows:
class AClass
{
public:
    AClass()
    {
        aString = new char[100];
    }
    ~AClass()
    {
        delete [] aString;
    }
private:
    const char *aString;
};

int main()
{
    AClass * p = new AClass[1000];
    delete p; // wrong
    return 0;
}
Why is it a problem if we have a huge piece of code between the new and delete of a char array?
Example
void this_is_bad() /* You wouldn't believe how often this kind of code can be found */
{
    char *p = new char[5]; /* spend some cycles in the memory manager */
    /* do some stuff with p */
    delete[] p; /* spend some more cycles, and create an opportunity for a leak */
}
Because somebody may throw an exception.
Because somebody may add a return.
If you have lots of code between the new and delete you may not spot that you need to deallocate the memory before the throw/return?
Why do you have a RAW pointer in your code?
Use a std::vector.
The article you reference is making the point that
char p[5];
would be just as effective in this case and have no danger of a leak.
In general, you avoid leaks by making the life cycle of allocated memory very clear, the new and the delete can be seen to be related.
Large separation between the two is harder to check, and needs to consider carefully whether there are any ways out of the code that might dodge the delete.
The link (and source) of that code is lamenting the unnecessary use of the heap in that code. For a constant, and small, amount of memory there's no reason not to allocate it on the stack.
Instead:
void this_is_good()
{
    /* Avoid allocation of small temporary objects on the heap */
    char p[5]; /* Use the stack instead */
    /* do some stuff */
}
There's nothing inherently wrong with the original code though; it's just less than optimal.
In addition to all the interesting answers about the heap, and about keeping new and delete close to each other, I might add that the sheer fact of having a huge amount of code in one function is to be avoided. If the huge amount of code separates two related lines of code, it's even worse.
I would differentiate between 'amount of work' and 'amount of code':
void do_stuff( char* const p );

void formerly_huge_function() {
    char* p = new char[5];
    CoInitialize( NULL );
    do_stuff( p );
    CoUninitialize();
    delete[] p;
}
Now do_stuff can do a lot of things without interfering with the allocation problem. But also other symmetrical stuff stays together this way.
It's all about the guy who's going to maintain your code. It might be you, in a month.
That particular example isn't stating that having a bunch of code between a new and a delete is necessarily bad; it's stating that if there are ways to write code that don't use the heap, you might want to prefer that, to avoid heap corruption.
It's decent enough advice; if you reduce the amount you use the heap, you know where to look when the heap is corrupted.
I would argue that it's not a problem to have huge amounts of code between a new and delete of any variable. The idea of using new is to place a value on the heap and hence keep it alive for long periods of time. Having code execute on these values is an expected operation.
What can get you into trouble when you have huge amounts of code within the same method between a new and delete is the chance of accidentally leaking the variable due to ...
An exception being thrown
Methods that get so long you can't see the beginning or end, so people start arbitrarily returning from the middle without realizing they skipped a delete call.
Both of these can be fixed by using an RAII type such as std::vector<char> instead of a new / delete pair.
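A minimal sketch of that fix (the function and its names are made up for illustration): the buffer is released on every path out of the function, including the early return and the thrown exception.

#include <stdexcept>
#include <vector>

void this_is_safe(bool bail_out)
{
    std::vector<char> p(5);           // replaces new char[5] / delete[] p
    if (bail_out)
        return;                       // no leak: p is destroyed here
    throw std::runtime_error("oops"); // no leak on this path either
}

int main()
{
    this_is_safe(true);
    try { this_is_safe(false); } catch (const std::exception&) {}
    return 0;
}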
It isn't, assuming that the "huge piece of code":
Always runs "delete [] p;" before calling "p = new char[size];" or "throw exception;"
Always runs "p = 0;" after calling "delete [] p;"
Failure to meet the first condition will cause the memory pointed to by p to be leaked. Failure to meet the second condition may result in a double delete. In general, it is best to use std::vector, so as to avoid any problems.
Are you asking if this would be better?
void this_is_great()
{
    char* p = new char[5];
    delete[] p;
    return;
}
It's not.