So I have been working with C++ and pointers for a year and a half now, and I thought I had understood them. I have called delete on objects many times before and the objects actually got deleted, or at least I thought they did.
The code below is just confusing the hell out of me:
#include <iostream>
#include <string>

class MyClass
{
public:
    int a;
    MyClass() : a(10) {
        std::cout << "constructor ran\n";
    }
    void method(std::string input_) {
        std::cout << input_ << "\n";
    }
    ~MyClass() {
        std::cout << "destructor ran\n";
    }
};

int main()
{
    MyClass* ptr = new MyClass;
    ptr->method("1");
    delete ptr;
    ptr->method("2.5");
}
this code outputs:
constructor ran
1
destructor ran
2.5
I was confused as to why it was not throwing an error of any sort: I was expecting a memory out-of-bounds exception or something similar, but got nothing. As far as I know there is no garbage collection in C++, so nothing should be quietly keeping that object alive.
Can anyone explain as to why this code works, or where I am going wrong with this code for it not to give me the error?
You're misunderstanding what delete does. All delete does is call the destructor, and tell the allocator that that memory is free. It doesn't change the actual pointer. Anything beyond that is undefined.
In this case, it happens to do nothing to the actual data pointed to. The pointer still points at the same bytes it pointed to before, so calling a method through it appears to work. However, this behavior is not guaranteed: using the pointer after delete is undefined behavior. delete could zero out the data, the allocator could hand that same memory to another allocation, or the compiler could assume the access never happens and optimize your code in surprising ways.
C++ allows you to do many unsafe things, in the interest of performance. This is one of them. If you want to avoid this kind of mistake, it's a good idea to do:
delete ptr;
ptr = NULL;
to ensure that you don't try to reuse the pointer; if you do, you will typically crash immediately on the null dereference rather than silently hitting undefined behavior. (In C++11 and later, prefer nullptr to NULL.)
Calling ptr->method("2.5") after delete ptr has undefined behaviour. This means anything can happen, including what you're observing.
In this code:
MyClass* ptr = new MyClass;
ptr->method("1");
delete ptr;
ptr->method("2.5");
you are accessing the memory that has already been freed, which yields undefined behavior, which means that anything can happen including the worst case: that it seems to work correctly.
That's one of the reasons why it is a good practice to set such a pointer to NULL rather than letting this kind of thing happen:
delete ptr;
ptr = NULL;
although the best thing to do is to completely avoid using pointers if possible.
You are accessing memory that is not reset until it is reused by another new call (or similar). However, it is good practice, every time you call delete, to set the pointer to NULL.
Related
This is a question that's been nagging me for some time. I always thought that C++ should have been designed so that the delete operator (without brackets) works even with the new[] operator.
In my opinion, writing this:
int* p = new int;
should be equivalent to allocating an array of 1 element:
int* p = new int[1];
If this was true, the delete operator could always be deleting arrays, and we wouldn't need the delete[] operator.
Is there any reason why the delete[] operator was introduced in C++? The only reason I can think of is that allocating arrays has a small memory footprint (you have to store the array size somewhere), so that distinguishing delete vs delete[] was a small memory optimization.
It's so that the destructors of the individual elements will be called. Yes, for arrays of PODs, there isn't much of a difference, but in C++, you can have arrays of objects with non-trivial destructors.
Now, your question is: why not make new and delete behave like new[] and delete[], and get rid of new[] and delete[]? I would go back to Stroustrup's "Design and Evolution" book, where he said that if you don't use C++ features, you shouldn't have to pay for them (at run time at least). The way it stands now, a new or delete will behave as efficiently as malloc and free. If delete had the delete[] meaning, there would be some extra overhead at run time (as James Curran pointed out).
Damn, I missed the whole point of the question, but I will leave my original answer as a side note. The reason we have delete[] is that a long time ago we had delete[cnt]; even today, if you write delete[9] or delete[cnt], the compiler just ignores the thing between the [] but compiles OK. At that time, C++ was first processed by a front end and then fed to an ordinary C compiler, which could not do the trick of storing the count somewhere behind the curtain; maybe nobody had even thought of it yet. For backward compatibility, the compilers most probably used the value given between the [] as the count of the array; if there was no such value, they got the count from the prefix, so it worked both ways. Later on, we typed nothing between the [] and everything worked. Today, I do not think delete[] is necessary, but the implementations demand it that way.
My original answer (that misses the point):
delete deletes a single object. delete[] deletes an object array. For delete[] to work, the implementation keeps the number of elements in the array. I just double-checked this by debugging ASM code. In the implementation (VS2005) I tested, the count was stored as a prefix to the object array.
If you use delete[] on a single object, the count variable is garbage, so the code crashes. If you use delete for an object array, because of some inconsistency, the code crashes. I tested both cases just now!
The statement "delete just deletes the memory allocated for the array." in another answer is not right. If the object is a class, delete will call the DTOR. Just place a breakpoint in the DTOR code and delete the object; the breakpoint will be hit.
What occurred to me is that, if the compiler & libraries assumed that all the objects allocated by new are object arrays, it would be OK to call delete for single objects or object arrays. Single objects just would be the special case of an object array having a count of 1. Maybe there is something I am missing, anyway.
Since everyone else seems to have missed the point of your question, I'll just add that I had the same thought some years ago and have never been able to get an answer.
The only thing I can think of is that there's a very tiny bit of extra overhead to treat a single object as an array (an unnecessary "for(int i=0; i<1; ++i)" )
Adding this since no other answer currently addresses it:
Array delete[] cannot be used on a pointer-to-base class ever -- while the compiler stores the count of objects when you invoke new[], it doesn't store the types or sizes of the objects (as David pointed out, in C++ you rarely pay for a feature you're not using). However, scalar delete can safely delete through base class, so it's used both for normal object cleanup and polymorphic cleanup:
struct Base { virtual ~Base(); };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    Base* b = new Derived[2];
    delete[] b; // bad! undefined behavior
}
However, in the opposite case -- non-virtual destructor -- scalar delete should be as cheap as possible -- it should not check for number of objects, nor for the type of object being deleted. This makes delete on a built-in type or plain-old-data type very cheap, as the compiler need only invoke ::operator delete and nothing else:
int main(){
    int * p = new int;
    delete p; // cheap operation, no dynamic dispatch, no conditional branching
}
While not an exhaustive treatment of memory allocation, I hope this helps clarify the breadth of memory management options available in C++.
Marshall Cline has some info on this topic.
delete [] ensures that the destructor of each member is called (if applicable to the type) while delete just deletes the memory allocated for the array.
Here's a good read: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=287
And no, array sizes are not stored anywhere in C++. (Thanks everyone for pointing out that this statement is inaccurate.)
I'm a bit confused by Aaron's answer and frankly admit I don't completely understand why and where delete[] is needed.
I did some experiments with his sample code (after fixing a few typos). Here are my results.
Typos: ~Base needed a function body
Base *b was declared twice
struct Base { virtual ~Base(){ } };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    b = new Derived[2];
    delete[] b; // bad! undefined behavior
}
Compilation and execution
david#Godel:g++ -o atest atest.cpp
david#Godel: ./atest
david#Godel: # No error message
Modified program with delete[] removed
struct Base { virtual ~Base(){}; };
struct Derived : Base { };
int main(){
    Base* b = new Derived;
    delete b; // this is good
    b = new Derived[2];
    delete b; // bad! undefined behavior
}
Compilation and execution
david#Godel:g++ -o atest atest.cpp
david#Godel: ./atest
atest(30746) malloc: *** error for object 0x1099008c8: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Of course, I don't know if delete[] b is actually working in the first example; I only know it does not give a compiler error message.
Probably this question was already asked, but I couldn't find it. Please redirect me if you saw something.
Question :
what is the benefit of using :
myClass* pointer;
over
myClass* pointer = new(myClass);
From reading on other topics, I understand that the first option allocates a space on the stack and makes the pointer point to it while the second allocates a space on the heap and make a pointer point to it.
But I read also that the second option is tedious because you have to deallocate the space with delete.
So why would one ever use the second option?
I am kind of a noob, so please explain in detail.
edit
#include <iostream>
using namespace std;

class Dog
{
public:
    void bark()
    {
        cout << "wouf!!!" << endl;
    }
};

int main()
{
    Dog* myDog = new(Dog);
    myDog->bark();
    delete myDog;
    return 0;
}
and
#include <iostream>
using namespace std;

class Dog
{
public:
    void bark()
    {
        cout << "wouf!!!" << endl;
    }
};

int main()
{
    Dog* myDog;
    myDog->bark();
    return 0;
}
both compile and give me "wouf!!!". So why should I use the "new" keyword?
I understand that the first option allocates a space on the stack and
makes the pointer point to it while the second allocates a space on
the heap and make a pointer point to it.
The above is incorrect -- the first option allocates space for the pointer itself on the stack, but doesn't allocate space for any object for the pointer to point to. That is, the pointer isn't pointing to anything in particular, and thus isn't useful to use (unless/until you set the pointer to point to something).
In particular, it's only pure blind luck that this code appears to "work" at all:
Dog* myDog;
myDog->bark(); // ERROR, calls a method on an invalid pointer!
... the above code is invoking undefined behavior, and in an ideal world it would simply crash, since you are calling a method on an invalid pointer. But C++ compilers typically prefer maximizing efficiency over handling programmer errors gracefully, so they typically don't put in a check for invalid-pointers, and since your bark() method doesn't actually use any data from the Dog object, it is able to execute without any obvious crashing. Try making your bark() method virtual, OTOH, and you will probably see a crash from the above code.
the second allocates a space on the heap and make a pointer point to
it.
That is correct.
But I read also that the second option is tedious because you have to
deallocate the space with delete.
Not only tedious, but error-prone -- it's very easy (in a non-trivial program) to end up with a code path where you forgot to call delete, and then you have a memory leak. Or, alternatively, you could end up calling delete twice on the same pointer, and then you have undefined behavior and likely crashing or data corruption. Neither mistake is much fun to debug.
So why would one ever use the second option.
Traditionally you'd use dynamic allocation when you need the object to remain valid for longer than the scope of the calling code -- for example, if you needed the object to stick around even after the function you created the object in has returned. Contrast that with a stack allocation:
myClass someStackObject;
... in which someStackObject is guaranteed to be destroyed when the calling function returns, which is usually a good thing -- but not if you need someStackObject to remain in existence even after your function has returned.
These days, most people would avoid using raw/C-style pointers entirely, since they are so dangerously error-prone. The modern C++ way to allocate an object on the heap would look like this:
std::shared_ptr<myClass> pointer = std::make_shared<myClass>();
... and this is preferred because it gives you a heap-allocated myClass object whose pointed-to object will continue to live for as long as there is at least one std::shared_ptr pointing to it (good), but which will also automagically be deleted the moment there are no std::shared_ptrs pointing to it (even better, since that means no memory leak, no need to explicitly call delete, and no potential double-deletes).
I'm sure this is answered somewhere, but I'm lacking the vocabulary to formulate a search.
#include <iostream>

class Thing
{
public:
    int value;
    Thing(int newval);
    virtual ~Thing() { std::cout << "Destroyed a thing with value " << value << std::endl; }
};

Thing::Thing(int newval)
{
    value = newval;
}

int main()
{
    Thing *myThing1 = new Thing(5);
    std::cout << "Value 1: " << myThing1->value << std::endl;
    Thing myThing2 = Thing(6);
    std::cout << "Value 2: " << myThing2.value << std::endl;
    return 0;
}
Output indicates myThing2 was destroyed, but myThing1 was not.
So... do I need to deconstruct it manually somehow? Is this a memory leak? Should I avoid using the * in this situation, and if so, when would it be appropriate?
The golden rule is, wherever you use a new you must use a delete. You are creating dynamic memory for myThing1, but you never release it, hence the destructor for myThing1 is never called.
The difference between this and myThing2 is that myThing2 is a scoped object. The operation:
Thing myThing2 = Thing(6);
is not similar at all to:
Thing *myThing1 = new Thing(5);
Read more about dynamic allocation here. But as some final advice, you should be using the new keyword sparingly, read more about that here:
Why should C++ programmers minimize use of 'new'?
myThing1 is a Thing* not a Thing. When a pointer goes out of scope nothing happens except that you leak the memory it was holding as there is no way to get it back. In order for the destructor to be called you need to delete myThing1; before it goes out of scope. delete frees the memory that was allocated and calls the destructor for class types.
The rule of thumb is for every new/new[] there should be a corresponding delete/delete[]
You need to explicitly delete myThing1 or use shared_ptr / unique_ptr.
delete myThing1;
The problem is not related to using pointer Thing *. A pointer can point to an object with automatic storage duration.
The problem is that in this statement
Thing *myThing1 = new Thing(5);
there is created an object new Thing(5) using the new operator. This object can be deleted by using the delete operator.
delete myThing1;
Otherwise the memory will not be released until the program finishes.
Thing myThing2 = Thing(6);
This line creates a Thing in main's stack with automatic storage duration. When main() ends it will get cleaned up.
Thing *myThing1 = new Thing(5);
This, on the other hand, creates a pointer to a Thing. The pointer resides on the stack, but the actual object is in the heap. When the pointer goes out of scope nothing happens to the pointed-to thing, the only thing reclaimed is the couple of bytes used by the pointer itself.
In order to fix this you have two options, one good, one less good.
Less good:
Put a delete myThing1; towards the end of your function. This will free the allocated object. As noted in other answers, every allocation of memory must have a matching deallocation, or you will leak memory.
However, in modern C++, unless you have good reason not to, you should really be using shared_ptr / unique_ptr to manage your memory. If you had instead declared myThing1 thusly:
shared_ptr<Thing> myThing1(new Thing(5));
Then the code you have now would work the way you expect. Smart pointers are powerful and useful in that they greatly reduce the amount of work you have to do to manage memory (although they do have some gotchas, circular references take extra work, for example).
Say I have an AbstractBaseClass and a ConcreteSubclass.
The following code creates the ConcreteSubclass and then disposes of it perfectly fine and without memory leaks:
ConcreteSubclass *test = new ConcreteSubclass(args...);
delete test;
However, as soon as I push this pointer into a vector, I get a memory leak:
std::vector<AbstractBaseClass*> arr;
arr.push_back(new ConcreteSubclass(args...));
delete arr[0];
I get fewer memory leaks with delete arr[0]; than with no delete at all, but still some memory gets leaked.
I did plenty of looking online and this seems to be a well understood problem - I'm deleting the pointer to the memory but not the actual memory itself. So I tried a basic dereference... delete *arr[0]; but that just gives a compiler error. I also tried a whole load of other references and dereferences but each just gives either a compiler error or program crash. I'm just not hitting on the right solution.
Now, I can use a shared_ptr to get the job done without a leak just fine (Boost as I don't have C++11 available to me):
std::vector<boost::shared_ptr<AbstractBaseClass>> arr2;
arr2.push_back(boost::shared_ptr<AbstractBaseClass>(new ConcreteSubclass(args...)));
but I can't get the manual method to work. It's not really important - it's perfectly easy to just use Boost, but I really want to know what I'm doing wrong so I can learn from it rather than just move without finding out my error.
I tried tracing through Boost's shared_ptr templates, but I keep getting lost because each function has so many overloads and I'm finding it very difficult to follow which branch to take each time.
I think it's this one:
template<class Y>
explicit shared_ptr( Y * p ): px( p ), pn() // Y must be complete
{
    boost::detail::sp_pointer_construct( this, p, pn );
}
But I keep ending up at checked_array_delete, which can't be right, as that uses delete[]. I need to get down to just:
template<class T> inline void checked_delete(T * x)
{
    // intentionally complex - simplification causes regressions
    typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
    (void) sizeof(type_must_be_complete);
    delete x;
}
And that's just calling delete, nothing special. Trying to follow through all of the reference and dereference operators was a disaster from the start. So basically I'm stuck, and I couldn't figure out how Boost made it work whilst my code didn't. So back to the start, how can I re-write this code to make the delete not leak?
std::vector<AbstractBaseClass*> arr;
arr.push_back(new ConcreteSubclass(args...));
delete arr[0];
Thank you.
shared_ptr stores a "deleter", a helper function1 that casts the pointer back to its original type (ConcreteSubclass*) before calling delete. You haven't done this.
If AbstractBaseClass has a virtual destructor, then calling delete arr[0]; is fine and works just as well as delete (ConcreteSubclass*)arr[0];. If it doesn't, then deletion through a base subobject is undefined behavior, and can cause far worse things to happen than memory leaks.
Rule of thumb: Every abstract base class should have a user-declared (explicitly defaulted is ok) destructor that is
virtual
OR
modified by protected: accessibility
or both.
1 You've found the implementation, it is checked_delete. But it is instantiated with the return type from new -- checked_delete<ConcreteSubclass>(ConcreteSubclass* x). So it is using ordinary delete, but on a pointer to type ConcreteSubclass, and that makes it possible for the compiler to find the right destructor even without the help of virtual dispatch.