This is a question that's been nagging me for some time. I always thought that C++ should have been designed so that the delete operator (without brackets) works even with the new[] operator.
In my opinion, writing this:
int* p = new int;
should be equivalent to allocating an array of 1 element:
int* p = new int[1];
If this were true, the delete operator could always be deleting arrays, and we wouldn't need the delete[] operator.
Is there any reason why the delete[] operator was introduced in C++? The only reason I can think of is that arrays carry a small memory overhead (the array size has to be stored somewhere), so distinguishing delete from delete[] was a small memory optimization.
It's so that the destructors of the individual elements will be called. Yes, for arrays of PODs, there isn't much of a difference, but in C++, you can have arrays of objects with non-trivial destructors.
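For instance, here's a minimal sketch (Noisy is a made-up type, used only to make the destructor calls visible):
#include <cstdio>

struct Noisy {
    ~Noisy() { std::puts("~Noisy"); }   // non-trivial destructor
};

int main() {
    Noisy* arr = new Noisy[3];
    delete[] arr;   // prints "~Noisy" three times, once per element
}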
Now, your question is, why not make new and delete behave like new[] and delete[] and get rid of new[] and delete[]? I would go back to Stroustrup's "Design and Evolution" book, where he says that if you don't use C++ features, you shouldn't have to pay for them (at run time, at least). As it stands now, a new or delete will behave as efficiently as malloc and free. If delete had the delete[] meaning, there would be some extra overhead at run time (as James Curran pointed out).
Damn, I missed the whole point of the question, but I will leave my original answer as a sidenote. The reason we have delete[] is that a long time ago we had delete[cnt]; even today, if you write delete[9] or delete[cnt], the compiler just ignores the expression between the [] but compiles OK. At that time, C++ was first processed by a front-end and then fed to an ordinary C compiler. They could not do the trick of storing the count somewhere behind the curtain; maybe they could not even think of it at that time. For backward compatibility, the compilers most probably used the value given between the [] as the count of the array; if there was no such value, they got the count from the prefix, so it worked both ways. Later on, we typed nothing between the [] and everything kept working. Today, I do not think delete[] is necessary, but the implementations demand it that way.
My original answer (that misses the point):
delete deletes a single object. delete[] deletes an object array. For delete[] to work, the implementation keeps the number of elements in the array. I double-checked this by stepping through the generated assembly. In the implementation I tested (VS2005), the count was stored as a prefix to the object array.
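To illustrate the idea (this is not the actual VS2005 code, just a hand-rolled sketch of such a count prefix, often called an array "cookie"):
#include <cstddef>
#include <cstdlib>

// Hand-rolled illustration of an array "cookie"; real implementations differ.
void* allocate_array(std::size_t count, std::size_t elem_size) {
    std::size_t* raw = static_cast<std::size_t*>(
        std::malloc(sizeof(std::size_t) + count * elem_size));
    if (!raw) return nullptr;
    raw[0] = count;   // store the element count just before the elements
    return raw + 1;   // the caller sees a pointer past the cookie
}

std::size_t array_count(void* elems) {
    // delete[]-style code walks back past the cookie to recover the count
    return static_cast<std::size_t*>(elems)[-1];
}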
If you use delete[] on a single object, the count variable is garbage, so the code crashes. If you use delete for an object array, because of some inconsistency, the code crashes. I tested these cases just now!
The statement "delete just deletes the memory allocated for the array" in another answer is not right. If the object is a class, delete will call the DTOR. Just place a breakpoint in the DTOR code and delete the object; the breakpoint will be hit.
What occurred to me is that, if the compiler & libraries assumed that all the objects allocated by new are object arrays, it would be OK to call delete for single objects or object arrays. Single objects would just be the special case of an object array having a count of 1. Maybe there is something I am missing, anyway.
Since everyone else seems to have missed the point of your question, I'll just add that I had the same thought some years ago, and have never been able to get an answer.
The only thing I can think of is that there's a very tiny bit of extra overhead in treating a single object as an array (an unnecessary "for(int i=0; i<1; ++i)" loop).
Adding this since no other answer currently addresses it:
Array delete[] can never be used on a pointer to a base class: while the compiler stores the count of objects when you invoke new[], it doesn't store the types or sizes of the objects (as David pointed out, in C++ you rarely pay for a feature you're not using). However, scalar delete can safely delete through a base-class pointer (given a virtual destructor), so it's used both for normal object cleanup and polymorphic cleanup:
struct Base { virtual ~Base(); };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    Base* b = new Derived[2];
    delete[] b; // bad! undefined behavior
}
However, in the opposite case -- a non-virtual destructor -- scalar delete should be as cheap as possible; it should check neither the number of objects nor the type of the object being deleted. This makes delete on a built-in type or plain-old-data type very cheap, as the compiler need only invoke ::operator delete and nothing else:
int main(){
    int * p = new int;
    delete p; // cheap operation, no dynamic dispatch, no conditional branching
}
While not an exhaustive treatment of memory allocation, I hope this helps clarify the breadth of memory management options available in C++.
Marshall Cline has some info on this topic.
delete [] ensures that the destructor of each member is called (if applicable to the type) while delete just deletes the memory allocated for the array.
Here's a good read: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=287
And no, array sizes are not stored anywhere in C++. (Thanks everyone for pointing out that this statement is inaccurate.)
I'm a bit confused by Aaron's answer and frankly admit I don't completely understand why and where delete[] is needed.
I did some experiments with his sample code (after fixing a few typos). Here are my results.
Typos:
- ~Base needed a function body
- Base *b was declared twice
struct Base { virtual ~Base(){}; };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    b = new Derived[2];
    delete[] b; // bad! undefined behavior
}
Compilation and execution
david#Godel: g++ -o atest atest.cpp
david#Godel: ./atest
david#Godel: # No error message
Modified program with delete[] removed
struct Base { virtual ~Base(){}; };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    b = new Derived[2];
    delete b; // bad! undefined behavior
}
Compilation and execution
david#Godel: g++ -o atest atest.cpp
david#Godel: ./atest
atest(30746) malloc: *** error for object 0x1099008c8: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Of course, I don't know if delete[] b is actually working in the first example; I only know it does not give a compiler error message.
I'm trying to answer some past paper questions that I've been given for exam practice, but I'm not really sure about these two; any help would be greatly appreciated. (I typed the code up from an image; I think it's all right.)
Q1: Identify the memory leaks in the C++ code below and explain how to fix them. [9 marks]
#include <string>

class Logger {
public:
    static Logger &get_instance() {
        static Logger *instance = NULL;
        if (!instance) {
            instance = new Logger();
        }
        return *instance;
    }
    void log(std::string const &str) {
        // ..log string
    }
private:
    Logger() {
    }
    Logger(Logger const &) {
    }
    Logger &operator=(Logger const &) {
    }
    ~Logger() {
    }
};

int main(int argcv, char *argv[]) {
    int *v1 = new int[10];
    int *v2 = new int[20];
    Logger::get_instance().log("Program Started");
    // .. do something
    delete v1;
    delete v2;
    return 0;
}
My answer is that if main never finishes executing, due to an early return or an exception being thrown, the deletes will never run, and the memory will never be freed.
I've been doing some reading, and I believe an auto_ptr would solve the problems? Would it be as simple as changing the lines to:
auto_ptr<int> v1 = new int[10];
auto_ptr<int> v2 = new int[20];
v1.release();
delete v1;
Q2: Why do virtual members require more memory than objects of a class without virtual members?
A: Because each virtual member requires a pointer to be stored in a vtable, requiring more space. Although this equates to very little increase in space.
Q1: Note that v1 and v2 are int pointers that refer to arrays of 10 and 20 elements, respectively. The delete operator does not match; since each is an array, it should be
delete[] v1;
delete[] v2;
so that the whole array is freed. Remember to always match new[] with delete[] and new with delete.
I believe you're already correct on Q2. The vtable and corresponding pointers that must be kept track of do increase the memory consumption.
Just to summarize:
- the shown program has undefined behavior from using the incorrect form of delete, so talking about leaks for that execution is immaterial
- if the previous were fixed, leaks would come from:
  - new Logger(); // always
  - the other two new uses, if a subsequent new throws, or the string ctor throws, or the ... part in log throws
- to fix v1 and v2, auto_ptr is no good, as you allocated with new[]. You could use boost::scoped_array, or better, make v an array<int, 10>, or at least a vector<int> (see the sketch after this list). And you absolutely don't call release() and then delete manually; leave that to the smart pointer.
- fixing instance is interesting. What is presented is called the 'leaky singleton', which is supposed to leak the instance but be omnipresent after creation, in case something wants to use it during program exit. If that was not intended, instance should not be created using new, but directly, as a local static or a namespace-scope static.
- the question is badly phrased, comparing incompatible things. Assuming it is sanitized, the answer is that instances of a class with virtual members (very likely) carry an extra pointer to the VMT. Plus, the VMT itself has one entry per virtual member after some general overhead. The latter is indeed insignificant, but the former may be an issue, as a class with 1 byte of state may pick up an 8-byte pointer, and possibly another 7 bytes of padding.
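As a quick sketch of that fix for v1 and v2 (using std::vector, one of the alternatives named above; it assumes the Logger class from the question):
#include <string>
#include <vector>

int main() {
    std::vector<int> v1(10);   // freed automatically at scope exit
    std::vector<int> v2(20);
    Logger::get_instance().log("Program Started");
    // .. do something
    return 0;   // no delete needed, and exception-safe too
}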
Your first answer is correct enough to get credit, but what the examiner was probably looking for is the freeing of Logger *instance.
In the given code, memory for instance is allocated, but never deallocated.
The second answer looks good.
instance is never deleted, and you need to use the delete[] form in main().
Q1:
A few gotchas:
- the singleton pattern shown is dangerous; for example, it is not thread safe: two threads could come in and create two instances, causing a memory leak. Surround it with EnterCriticalSection or some other thread-synchronization mechanism, and it is still unsafe and not recommended.
- the singleton class does not release its memory; a singleton should be ref-counted to really behave properly.
- you're using a static variable inside the function, which is even worse than using a static member of the class.
- you allocate with new[] and delete without the delete[].
I suspect your question is after two things:
- free the singleton pointer
- use delete[]
In general, however, process cleanup will reclaim the dangling memory on exit.
Q2:
Your second answer is right: virtual members require a vtable, which makes the class larger.
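A quick way to see the cost (sizes are implementation-specific; the values in the comment assume a typical 64-bit ABI):
#include <iostream>

struct Plain   { char c; };                        // no virtual members
struct Virtual { char c; virtual ~Virtual() {} };  // picks up a vptr

int main() {
    // Typically prints "1 16" on a 64-bit ABI: one byte of state versus
    // an 8-byte vptr plus alignment padding.
    std::cout << sizeof(Plain) << ' ' << sizeof(Virtual) << '\n';
}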
Possible Duplicates:
How could pairing new[] with delete possibly lead to memory leak only?
(POD) freeing memory: is delete[] equal to delete?
Using gcc version 4.1.2 20080704 (Red Hat 4.1.2-48). Haven't tested it on Visual C++.
It seems that delete and delete[] work the same when deleting arrays of "simple" types.
char * a = new char[1024];
delete [] a; // the correct way. no memory leak.
char * a = new char[1024];
delete a; // the incorrect way. also NO memory leak.
But, when deleting arrays of "complex" types, delete will cause a memory leak.
class A
{
public:
    int m1;
    int* m2; // a pointer!
    A()
    {
        m2 = new int[1024];
    }
    ~A()
    {
        delete [] m2; // this destructor won't run for every element when plain delete is used
    }
};
A* a = new A[1024];
delete [] a; // the correct way. no memory leak.
A* a = new A[1024];
delete a; // the incorrect way. MEMORY LEAK!!!
My questions are:
In the first test case, why do delete and delete[] behave the same under g++?
In the second test case, why doesn't g++ handle it like the first test case?
This is all dependent on the underlying memory manager. Simply put, C++ requires that you delete arrays with delete[] and non-arrays with delete. The standard offers no explanation for the behaviour you observed.
What's likely happening, however, is that delete p; simply frees the block of memory starting at p (whether it is an array or not). delete[], on the other hand, additionally runs through each element of the array and calls the destructor. Since plain data types like char don't have destructors, there is no effect, so delete and delete[] end up doing the same thing.
Like I said, this is all implementation specific. There's no guarantee that delete will work on arrays of any type. It just happens to work in your case. In C++ we call this undefined behaviour -- it might work, it might not, it might do something totally random and unexpected. You'd be best to avoid relying on undefined behaviour.
char * a = new char[1024];
delete a; // the incorrect way. also NO memory leak.
No. It doesn't guarantee no memory leak; it in fact invokes undefined behavior.
delete and delete[] seemingly being equivalent in g++ is pure luck. Calling delete on memory allocated with new[], and vice versa, is undefined behaviour. Just don't do it.
Because that's undefined behavior. It's not guaranteed to break but it's not guaranteed to work either.
The delete expression calls the destructor of the object to be deleted before releasing the memory. Releasing the memory probably works in either case (but it's still UB), but if you use delete where you needed delete[], then you aren't calling all the destructors. Since your complex object itself allocates memory which it in turn releases in its own destructor, you are failing to make all those deletions when you use the wrong expression.
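A small demonstration of that destructor difference (Counted is a made-up type; the wrong form is only described in a comment, since running it is undefined behaviour):
#include <iostream>

static int destroyed = 0;

struct Counted {
    ~Counted() { ++destroyed; }
};

int main() {
    Counted* p = new Counted[4];
    delete[] p;                       // runs all four destructors
    std::cout << destroyed << '\n';   // prints 4
    // 'delete p;' instead would be undefined behaviour; on many
    // implementations it runs at most one destructor and may corrupt the heap.
}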
They technically aren't the same; they just get optimized down to the same meaning for non-complex types. Complex types require the vectorized delete so that the destructor can be called for every object in the array you delete (just like vectorized new for constructors).
What you're doing just frees the memory as if it were a plain pointer array.
What is happening here is that when you call delete, the space taken up by the objects is freed. In the case of chars, this is all that needs to happen (although it is still recommended to use delete[]; this is just g++'s behavior, and what delete actually does when called on an array is undefined by the C++ standard).
In the second example, the space taken up by your array is deallocated, including the pointer m2. However, what m2 points to is not also deleted. When you call delete[], the destructor of each object in the array is called, which in turn deallocates what m2 points to.
Is it possible to zero out the memory of deleted objects in C++? I want to do this to reproduce a coredump in a unit test:
//Some member variable of object-b is passed-by-pointer to object-a
//When object-b is deleted, that member variable is also deleted
//In my unit test code, I want to reproduce this
//even if I explicitly call delete on object-b
//accessBMemberVariable should coredump, but it doesn't
//I'm assuming even though object-b is deleted, it's still intact in memory
A *a = new A();
{
    B *b = new B(a);
    delete b;
}
a->accessBMemberVariable();
You probably should override the delete operator.
Example for the given class B:
#include <cstring>   // for memset
#include <new>       // for ::operator delete

class B
{
public:
    // your code
    ...
    // override delete
    void operator delete(void * p, std::size_t s)
    {
        std::memset(p, 0, s);   // zero the object's storage
        ::operator delete(p);   // then actually release the memory
    }
};
EDIT: Thanks litb for pointing this out.
accessBMemberVariable should coredump, but it doesn't
Nah, why should it? It's quite possible that the memory b used to occupy is still held by the CRT heap inside your application; the CRT may opt not to release memory back to the OS. Core dumps only happen when you access memory not owned by your application.
Zeroing out the memory occupied by b may not do you any good depending on the type of variable that A has the address of.
My advice would be to allocate B on the stack; that should bring out the fireworks... but then again, not quite in the way you'd expect...
So if you really want a core dump you should use the OS functions to allocate memory and free it:
char *buf = OS_Alloc(sizeof(B));
B *b = new(buf) B();
a->someBMember = &b->myMember;
b->~B();
OS_Free(buf);
a->accessBMemberVariable();
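On a POSIX system, one concrete way to get that effect is mmap/munmap (a sketch only; the stand-in struct B replaces the asker's class, and OS_Alloc/OS_Free above were placeholders):
#include <new>
#include <sys/mman.h>

struct B { int member; };        // stand-in for the asker's class B

int main() {
    void* buf = mmap(nullptr, sizeof(B), PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    B* b = new (buf) B();        // placement-new into the freshly mapped page
    b->~B();                     // run the destructor by hand
    munmap(buf, sizeof(B));      // unmap the page: later access through b faults
    // b->member here would now reliably crash instead of silently working
}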
Another poster suggested:
delete b;
memset(b,0,sizeof(B));
Please don't do this!!! Writes to address space that is returned to the memory manager are UNDEFINED!!!!
Even if your compiler and library let you get away with it now, it is bad bad bad. A change in library or platform, or even an update in the compiler will bite you in the ass.
Think of a race condition where you delete b, then some other thread makes an allocation, the memory at b is given out, and then you call memset! Bang, you're dead.
If you must clear the memory (though why bother?), zero it out before calling delete.
memset(b,0,sizeof(B));
delete b;
Use placement new if you can (http://www.parashift.com/c++-faq-lite/dtors.html#faq-11.10), and zero out the chunk you handed to placement new after calling the object's destructor manually.
Use the debugging malloc/new features in your environment.
On MSVC, link with the debug runtime libraries. On FreeBSD, set MALLOC_OPTIONS to have the 'Z' or 'J' flags, as appropriate. On other platforms, read the documentation or substitute in an appropriate allocator with debugging support.
Calling memset() after deletion is just bad on so many levels.
In your example you write 'A->accessBMemberVariable'; is this a typo? Shouldn't it be 'a->accessBMemberVariable'?
Assuming it is a typo (otherwise the whole design seems a bit weird):
If you want to verify that 'a' is deleted properly, it would probably be better to change the way you handle the allocation and use auto_ptrs instead. That way you will be sure things are deleted properly:
auto_ptr<A> a( new A );
{
    auto_ptr<B> b( new B(a) ); // B takes ownership of 'a', deleted at scope exit
}
a->accessBMemberVariable(); // shouldn't do too well
and a constructor for B in the form of
B( auto_ptr<A>& a ) : m_a(a) {;}
where the member is declared as
auto_ptr<A> m_a;
Once you've deleted b, you don't really have permission to write over where it was. But you usually can get away with doing just that, and you'll see code where programmers use memset for this.
The approved way to do this, though, would be to call the destructor directly, then write over the memory (with, say, memset), and then call delete on the object. This does require your destructor to be pretty smart, because delete is going to run the destructor again. So the destructor must realize that the whole object is nothing but zeros and do nothing:
b->~B();
memset(b, 0, sizeof(b));
delete b;
I come across this kind of code every once in a while; I suspect the creator is/was afraid that the array form of delete would iterate over the array and "cost performance" (which, imho, will not happen either way)... is there any real benefit one might get/consider/imagine from not using the array delete here?
myClass** table = new myClass* [size];
... //some code that does not reallocate or change the value of the table pointer ;)
delete table; // no [] intentionally
If you do this, you will get what the C++ Standard calls undefined behaviour - anything could happen.
That is a memory leak. A new[] must be matched by a delete[]. Further, since table is a pointer to the first element of an array of pointers, any member that itself points to an array will need to be deallocated using delete[] as well.
Not only is there no benefit, the code is just plain wrong -- at best, it leaks memory, and at worst, it can crash your program or open up a hard-to-find security hole. You must always match new with delete and new[] with delete[]. Always.
There's really no reason to write it like that, and a serious reason never to do so.
It's true that for types with trivial destructors (like raw pointers in your case) there's no need to know the actual number of elements in the array, so the compiler might decide to map new[] and delete[] onto plain new and delete to reduce the overhead. If it decides to do so, you can't stop it without extra steps; this compiler optimization takes place without your notice and is free.
At the same time, someone using your code might wish to overload the global operators new and delete (and new[] and delete[] as well). If that happens, you run into big trouble, because this is when you may really need the difference between delete and delete[].
Add to this that this compiler-dependent optimization is unportable.
So this is a case where you get no benefit from replacing delete[] with delete, but risk big time by relying on undefined behaviour.
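To make the overloading point concrete, here's a minimal sketch of replaced global array forms (the logging behaviour is illustrative only):
#include <cstdio>
#include <cstdlib>
#include <new>

// Illustrative replacements for the global array allocation functions.
void* operator new[](std::size_t n) {
    void* p = std::malloc(n);
    if (!p) throw std::bad_alloc();
    std::printf("new[] %zu bytes at %p\n", n, p);
    return p;
}

void operator delete[](void* p) noexcept {
    std::printf("delete[] at %p\n", p);
    std::free(p);
}

int main() {
    int** table = new int*[8];   // routed through our operator new[]
    delete[] table;              // routed through our operator delete[]
    // A plain 'delete table;' would bypass operator delete[] entirely and
    // hand the pointer to the default scalar deallocator instead.
}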
It's definitely wrong, as new[] needs to be paired with delete[]. If you don't pair them, you will get undefined behavior.
It may work (partially) because most implementations use new to implement new[]. The only difference for such an implementation would be that it would call only one destructor (for the first element) instead of all the destructors. But avoid it, as it is not legal C++.
In theory you should call delete [].
EDIT: The following applies only to Microsoft Visual C++ (I should have said this).
In practice, in Microsoft Visual C++, it doesn't matter which delete you use when the objects in the array don't have destructors. Since you have an array of pointers, and pointers can't have destructors, you should be OK.
However, as others have pointed out, it is incorrect C++ to mix new [] and delete without []. Although it may work in Visual C++ in this case, the code is not portable and may fail in other compilers.
But going back to the specific case of Visual C++, even if you call delete [], the compiler will realize that it doesn't need to iterate through the array calling destructors when it's an array of primitive types like int, char, or pointers. Calling delete in that case actually works and won't break anything. It would not be slower to do the right thing and call delete [], but it won't be faster either.
In fact, in MSVC++, delete[] p immediately calls the regular operator delete(void *p) when p is a pointer to a simple type, or one without destructors.
Those who don't believe me, step through this code into the CRT code for the first two calls to delete[].
#include "stdafx.h"
#include <malloc.h>
#include <iostream>
using namespace std;
class NoDestructor
{
    int m_i;
};

class WithDestructor
{
public:
    ~WithDestructor()
    {
        cout << "deleted one WithDestructor at " << (void *) this << endl;
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    int **p = new int *[20];
    delete [] p;
    p = (int**) malloc(80);
    free(p);
    NoDestructor *pa = new NoDestructor[20];
    delete [] pa;
    WithDestructor *pb = new WithDestructor[20];
    delete [] pb;
    return 0;
}
That statement will leave all of the myClass objects that were pointed to by the pointers in the array hanging around in memory. There is no way that can be helpful: it frees (at most) the array of pointers itself, and the OS still thinks you have (size) myClass objects, and pointers to each, in use. This is just an example of a programmer not cleaning up after themselves properly.
Check the section [16.11] "How do I allocate / unallocate an array of things?" and beyond in the C++ FAQ Lite:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.11
It explains that the array form of delete is a must when the array was created with new[].
The instances of myClass pointed to by the elements of your array should also be deleted where they are created.
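To make that last point concrete, here's a minimal sketch of the matching cleanup (myClass is a stand-in, and it assumes the elided code filled each slot with new myClass, which the question does not show):
#include <cstddef>

class myClass { /* ... */ };     // stand-in for the question's class

void destroy_table(myClass** table, std::size_t size) {
    // Assumes the omitted code did something like: table[i] = new myClass;
    for (std::size_t i = 0; i < size; ++i)
        delete table[i];         // destroy each myClass the slots point to
    delete[] table;              // then release the array of pointers itself
}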