Avoid program crash when deleting pointer - c++

The second delete in the following code will crash the program, because the pointer has already been deleted:
int* i = new int;
delete i;
delete i;
Trying to catch it with an exception handler doesn't help either:
int* i = new int;
delete i;
try {
delete i;
} catch(exception& e) { // program just crashes, doesn't go into this exception block
cout << "delete failed" << endl;
}
How can I perform a safe delete, i.e. first check whether the region the pointer refers to has already been deleted?
Or, if that's not possible, how can I print the line number where the crash occurs (without a debugging tool)?

delete does not try to detect whether the pointer is valid; it just deallocates the memory the pointer refers to. What you can do is set i to nullptr after each deletion. There is then no need to check if (i == nullptr) before deleting again, because deleting a null pointer is a no-op: it effectively does nothing.
If you are just playing around, this kind of code may help you learn the language. But in production code you should be careful to eliminate it. A double delete is also a good indicator that your code may have other resource-management bugs.
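For example, the snippet from the question becomes harmless once the pointer is nulled between the deletes:
int* i = new int;
delete i;
i = nullptr;  // record that i no longer owns anything
delete i;     // deleting a null pointer is a no-op, so this is safe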

The modern C++ solution is to never use new or delete at all. Just let C++ handle everything automatically.
unique_ptr<int> i = make_unique<int>();
or
shared_ptr<int> i = make_shared<int>();
No need to delete it. If you do not have make_unique (it was only added in C++14), you can write your own.
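A common single-object version (a sketch of what the standard one does, requiring only C++11) looks like this:
#include <memory>
#include <utility>

template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}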

{
std::unique_ptr<int> i(new int);
}   // i is deleted here, exactly once, when it goes out of scope

Let's start by saying that a double delete is undefined behaviour... and that means behaviour which is not defined :)
Allow me to remind you also that deleting a null pointer is a no-op, so there is no need to check whether the pointer is null first.
Without more context, the answer is that there is no portable solution.
Depending on architecture, OS, even the memory allocation library you are using, it will produce different effects and provide you with different options to detect it.
There's no guarantee that the program will crash on the second delete. It might simply corrupt the memory allocator structures in a way that will only crash after some time.
If your objective is detecting the problem, your best chance is to set up your program to capture crashing signals (e.g. SIGABRT, SIGSEGV, SIGBUS...) and print a stack trace in the signal handler, before allowing the program to terminate or write the core file. As I said above, this might or might not be the place of the memory corruption... but it will be the place where the memory allocator/program cannot go on any more.
That's the least invasive approach.
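For example, on POSIX systems with glibc you could sketch it like this (note that backtrace is not strictly async-signal-safe, so treat this as a best-effort crash diagnostic, and link with -rdynamic to get symbol names):
#include <csignal>
#include <cstdlib>
#include <execinfo.h>
#include <unistd.h>

extern "C" void crash_handler(int sig)
{
    void* frames[64];
    int count = backtrace(frames, 64);                   // capture return addresses
    backtrace_symbols_fd(frames, count, STDERR_FILENO);  // print without allocating
    _exit(EXIT_FAILURE);                                 // or re-raise to keep the core file
}

int main()
{
    signal(SIGABRT, crash_handler);
    signal(SIGSEGV, crash_handler);
    signal(SIGBUS,  crash_handler);

    int* i = new int;
    delete i;
    delete i;   // if the allocator aborts here, the handler prints a trace
    return 0;
}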
Using customized memory allocators or memory allocators with debugging options (e.g. libumem in Solaris) can help you in detecting earlier or more accurately where the problem was. The catch is that usually there's a bigger or smaller performance penalty.
If your objective is to prevent the problem... then you have to resort to best practices. For example:
Resort to using RAII or smart pointers in general, or at least use them when you cannot safely establish the memory ownership throughout your program.
At the very least, always remember to set a pointer to null after you have deleted it. That doesn't guarantee anything, because you can always have concurrent deletes... but it helps reduce the scenarios where you could have a crash.

You could set the pointer you delete to 0 or NULL (prior to C++11) or to nullptr (for C++11 compilers and later). However, this only partially solves the problem (see the example below):
void deletePointer(int*& iptr) {
    delete iptr;      // deallocate the pointee
    iptr = nullptr;   // further deletes through iptr are now no-ops
}
There is no portable, standard way to test whether a pointer is "valid" for deletion. The best you can do, if your compiler supports C++11, is to use smart pointers; then you don't have to worry about invalid deletions at all.

Related

clearing a vector of pointers (affecting the deleted pointers to nullptr?) [duplicate]

I'll start out by saying, use smart pointers and you'll never have to worry about this.
What are the problems with the following code?
Foo * p = new Foo;
// (use p)
delete p;
p = NULL;
This was sparked by an answer and comments to another question. One comment from Neil Butterworth generated a few upvotes:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when it is a good thing to do, and times when it is pointless and can hide errors.
There are plenty of circumstances where it wouldn't help. But in my experience, it can't hurt. Somebody enlighten me.
Setting a pointer to 0 (which is "null" in standard C++, the NULL define from C is somewhat different) avoids crashes on double deletes.
Consider the following:
Foo* foo = 0; // Sets the pointer to 0 (C++ NULL)
delete foo; // Won't do anything
Whereas:
Foo* foo = new Foo();
delete foo; // Deletes the object
delete foo; // Undefined behavior
In other words, if you don't set deleted pointers to 0, you will get into trouble if you're doing double deletes. An argument against setting pointers to 0 after delete would be that doing so just masks double delete bugs and leaves them unhandled.
It's best to not have double delete bugs, obviously, but depending on ownership semantics and object lifecycles, this can be hard to achieve in practice. I prefer a masked double delete bug over UB.
Finally, a sidenote regarding managing object allocation, I suggest you take a look at std::unique_ptr for strict/singular ownership, std::shared_ptr for shared ownership, or another smart pointer implementation, depending on your needs.
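For example, with either of those the delete happens exactly once, automatically:
#include <memory>

struct Foo {};

int main()
{
    std::unique_ptr<Foo> sole(new Foo);                     // single owner
    std::shared_ptr<Foo> shared = std::make_shared<Foo>();  // reference counted
    std::shared_ptr<Foo> alias = shared;                    // co-owner; count is now 2
    return 0;
}   // each Foo is destroyed exactly once here; a double delete cannot happen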
Setting pointers to NULL after you've deleted what they pointed to certainly can't hurt, but it's often a bit of a band-aid over a more fundamental problem: why are you using a pointer in the first place? I can see two typical reasons:
You simply wanted something allocated on the heap. In which case wrapping it in a RAII object would have been much safer and cleaner. End the RAII object's scope when you no longer need the object. That's how std::vector works, and it solves the problem of accidentally leaving pointers to deallocated memory around. There are no pointers.
Or perhaps you wanted some complex shared ownership semantics. The pointer returned from new might not be the same as the one that delete is called on. Multiple objects may have used the object simultaneously in the meantime. In that case, a shared pointer or something similar would have been preferable.
My rule of thumb is that if you leave pointers around in user code, you're Doing It Wrong. The pointer shouldn't be there to point to garbage in the first place. Why isn't there an object taking responsibility for ensuring its validity? Why doesn't its scope end when the pointed-to object does?
I've got an even better best practice: Where possible, end the variable's scope!
{
Foo* pFoo = new Foo;
// use pFoo
delete pFoo;
}
I always set a pointer to NULL (now nullptr) after deleting the object(s) it points to.
It can help catch many references to freed memory (assuming your platform faults on a deref of a null pointer).
It won't catch all references to free'd memory if, for example, you have copies of the pointer lying around. But some is better than none.
It will mask a double-delete, but I find those are far less common than accesses to already freed memory.
In many cases the compiler is going to optimize it away. So the argument that it's unnecessary doesn't persuade me.
If you're already using RAII, then there aren't many deletes in your code to begin with, so the argument that the extra assignment causes clutter doesn't persuade me.
It's often convenient, when debugging, to see the null value rather than a stale pointer.
If this still bothers you, use a smart pointer or a reference instead.
I also set other types of resource handles to the no-resource value when the resource is free'd (which is typically only in the destructor of an RAII wrapper written to encapsulate the resource).
I worked on a large (9 million statements) commercial product (primarily in C). At one point, we used macro magic to null out the pointer when memory was freed. This immediately exposed lots of lurking bugs that were promptly fixed. As far as I can remember, we never had a double-free bug.
Update: Microsoft believes that it's a good practice for security and recommends the practice in their SDL policies. Apparently MSVC++11 will stomp the deleted pointer automatically (in many circumstances) if you compile with the /SDL option.
Firstly, there are a lot of existing questions on this and closely related topics, for example Why doesn't delete set the pointer to NULL?.
In your code, the issue is what goes on in (use p). For example, if somewhere you have code like this:
Foo * p2 = p;
then setting p to NULL accomplishes very little, as you still have the pointer p2 to worry about.
This is not to say that setting a pointer to NULL is always pointless. For example, if p were a member variable pointing to a resource whose lifetime was not exactly the same as that of the class containing p, then setting p to NULL could be a useful way of indicating the presence or absence of the resource.
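A sketch of that member-variable case (Resource and the method names here are invented for illustration):
struct Resource {};

class Holder {
    Resource* res;
public:
    Holder() : res(0) {}                        // null means "no resource present"
    void acquire() { release(); res = new Resource; }
    void release() { delete res; res = 0; }     // nulling records the absence
    bool hasResource() const { return res != 0; }
    ~Holder() { delete res; }                   // no need to null here: the object is gone
};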
If there is more code after the delete, yes. When the pointer is deleted in a destructor or at the end of a method or function, no.
The point of this practice is to remind the programmer, at run time, that the object has already been deleted.
An even better practice is to use Smart Pointers (shared or scoped) which automagically delete their target objects.
As others have said, delete ptr; ptr = 0; is not going to cause demons to fly out of your nose. However, it does encourage the usage of ptr as a flag of sorts. The code becomes littered with delete and setting the pointer to NULL. The next step is to scatter if (arg == NULL) return; through your code to protect against the accidental usage of a NULL pointer. The problem occurs once the checks against NULL become your primary means of checking for the state of an object or program.
I'm sure that there is a code smell about using a pointer as a flag somewhere but I haven't found one.
I'll change your question slightly:
Would you use an uninitialized pointer? You know, one that you didn't set to NULL or allocate the memory it points to?
There are two scenarios where setting the pointer to NULL can be skipped:
the pointer variable goes out of scope immediately
you have overloaded the semantics of the pointer and are using its value not only as a memory address but also as a key or raw value; this approach, however, suffers from other problems.
Meanwhile, arguing that setting the pointer to NULL might hide errors to me sounds like arguing that you shouldn't fix a bug because the fix might hide another bug. The only bugs that might show if the pointer is not set to NULL would be the ones that try to use the pointer. But setting it to NULL would actually cause exactly the same bug as would show if you use it with freed memory, wouldn't it?
If you have no other constraint that forces you to either set or not set the pointer to NULL after you delete it (one such constraint was mentioned by Neil Butterworth), then my personal preference is to leave it be.
For me, the question isn't "is this a good idea?" but "what behavior would I prevent or allow to succeed by doing this?" For example, if this allows other code to see that the pointer is no longer available, why is other code even attempting to look at freed pointers after they are freed? Usually, it's a bug.
It also does more work than necessary as well as hindering post-mortem debugging. The less you touch memory after you don't need it, the easier it is to figure out why something crashed. Many times I have relied on the fact that memory is in a similar state to when a particular bug occurred to diagnose and fix said bug.
Explicitly nulling after delete strongly suggests to a reader that the pointer represents something which is conceptually optional. If I saw that being done, I'd start worrying that everywhere in the source where the pointer gets used, it should first be tested against NULL.
If that's what you actually mean, it's better to make that explicit in the source using something like boost::optional
optional<Foo*> p (new Foo);
// (use p.get(), but must test p for truth first!...)
delete p.get();
p = optional<Foo*>();
But if you really wanted people to know the pointer has "gone bad", I'll pitch in 100% agreement with those who say the best thing to do is make it go out of scope. Then you're using the compiler to prevent the possibility of bad dereferences at runtime.
That's the baby in all the C++ bathwater, shouldn't throw it out. :)
In a well structured program with appropriate error checking, there is no reason not to assign it null. 0 stands alone as a universally recognized invalid value in this context. Fail hard and Fail soon.
Many of the arguments against assigning 0 suggest that it could hide a bug or complicate control flow. Fundamentally, that is either an upstream error (not your fault (sorry for the bad pun)) or another error on the programmer's behalf -- perhaps even an indication that program flow has grown too complex.
If the programmer wants to introduce the use of a pointer which may be null as a special value and write all the necessary dodging around that, that's a complication they have deliberately introduced. The better the quarantine, the sooner you find cases of misuse, and the less they are able to spread into other programs.
Well structured programs may be designed using C++ features to avoid these cases. You can use references, or you can just say "passing/using null or invalid arguments is an error" -- an approach which is equally applicable to containers, such as smart pointers. Increasingly consistent and correct behavior keeps these bugs from getting far.
From there, you have only a very limited scope and context where a null pointer may exist (or is permitted).
The same may be applied to pointers which are not const. Following the value of a pointer is trivial because its scope is so small, and improper use is checked and well defined. If your toolset and engineers cannot follow the program after a quick read, or there is inappropriate error checking or inconsistent/lenient program flow, you have other, bigger problems.
Finally, your compiler and environment likely have some guards for the times when you would like to introduce errors (scribbling), detect accesses to freed memory, and catch other related UB. You can also introduce similar diagnostics into your programs, often without affecting existing programs.
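As a sketch of the scribbling idea (poison_delete and the 0xDD fill byte are illustrative choices, not a standard facility; this only works for objects that came from plain, non-array new):
#include <cstring>
#include <new>

template <typename T>
void poison_delete(T*& p)
{
    if (p) {
        p->~T();                                              // run the destructor
        std::memset(static_cast<void*>(p), 0xDD, sizeof(T));  // scribble over the bytes
        ::operator delete(p);                                 // release the raw storage
        p = nullptr;
    }
}
A stale read through another copy of the pointer then shows a conspicuous 0xDD pattern in the debugger instead of plausible-looking data.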
Let me expand what you've already put into your question.
Here's what you've put into your question, in bullet-point form:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when:
it is a good thing to do
and times when it is pointless and can hide errors.
However, there are no times when this is bad! You will not introduce more bugs by explicitly nulling it, you will not leak memory, and you will not cause undefined behaviour.
So, if in doubt, just null it.
Having said that, if you feel that you have to explicitly null some pointer, then to me this sounds like you haven't split up a method enough, and should look at the refactoring approach called "Extract method" to split up the method into separate parts.
There are always dangling pointers to worry about.
Yes.
The only "harm" it can do is to introduce inefficiency (an unnecessary store operation) into your program - but this overhead will be insignificant in relation to the cost of allocating and freeing the block of memory in most cases.
If you don't do it, you will have some nasty pointer dereference bugs one day.
I always use a macro for delete:
#define SAFEDELETE(ptr) { delete(ptr); ptr = NULL; }
(and similar for an array, free(), releasing handles)
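Spelled out, those variants follow the same pattern (the names here are mine):
#define SAFEDELETEARRAY(ptr) { delete[] (ptr); ptr = NULL; }
#define SAFEFREE(ptr)        { free(ptr); ptr = NULL; }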
You can also write "self delete" methods that take a reference to the calling code's pointer, so they force the calling code's pointer to NULL. For example, to delete a subtree of many objects:
void TreeItem::DeleteSubtree(TreeItem*& rootObject)   // declared static in the class
{
    if (rootObject == NULL)
        return;
    rootObject->UnlinkFromParent();
    for (int i = 0; i < rootObject->numChildren; ++i)
        DeleteSubtree(rootObject->child[i]);
    delete rootObject;
    rootObject = NULL;
}
edit
Yes, these techniques do violate some rules about the use of macros (and yes, these days you could probably achieve the same result with templates) - but by using them over many years I never once accessed dead memory - one of the nastiest, most difficult, and most time-consuming problems you can face in debugging. In practice, over many years, they have effectively eliminated a whole class of bugs from every team I introduced them on.
There are also many ways you could implement the above - I am just trying to illustrate the idea of forcing people to NULL a pointer if they delete an object, rather than providing a means for them to release the memory that does not NULL the caller's pointer.
Of course, the above example is just a step towards an auto-pointer. Which I didn't suggest because the OP was specifically asking about the case of not using an auto pointer.
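As the edit above hints, a small template can replace the macro; a minimal sketch:
template <typename T>
void safeDelete(T*& ptr)
{
    delete ptr;    // deleting a null pointer is already a no-op
    ptr = NULL;
}
Being a real function, it type-checks its argument and can be stepped into in a debugger.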
"There are times when it is a good thing to do, and times when it is pointless and can hide errors"
I can see two problems:
That simple code:
delete myObj;
myObj = 0;
becomes a four-liner in a multithreaded environment:
lock(myObjMutex);
delete myObj;
myObj = 0;
unlock(myObjMutex);
The "best practice" of Don Neufeld don't apply always. E.g. in one automotive project we had to set pointers to 0 even in destructors. I can imagine in safety-critical software such rules are not uncommon. It is easier (and wise) to follow them than trying to persuade
the team/code-checker for each pointer use in code, that a line nulling this pointer is redundant.
Another danger is relying on this technique in exceptions-using code:
try {
    delete myObj;   // exception thrown in the destructor
    myObj = 0;
}
catch (...)
{
    // myObj = 0; <- possible resource leak
}
if (myObj)
    // use myObj <-- undefined behaviour
In such code you either produce a resource leak and postpone the problem, or the process crashes.
So, these two problems that come spontaneously to mind (Herb Sutter would surely name more) make all questions of the kind "How do I avoid smart pointers and do the job safely with normal pointers?" obsolete for me.
If you're going to reallocate the pointer before using it again (dereferencing it, passing it to a function, etc.), making the pointer NULL is just an extra operation. However, if you aren't sure whether it will be reallocated or not before it is used again, setting it to NULL is a good idea.
As many have said, it is of course much easier to just use smart pointers.
Edit: As Thomas Matthews said in this earlier answer, if a pointer is deleted in a destructor, there isn't any need to assign NULL to it since it won't be used again because the object is being destroyed already.
I can imagine setting a pointer to NULL after deleting it being useful in rare cases where there is a legitimate scenario of reusing it in a single function (or object). Otherwise it makes no sense - a pointer needs to point to something meaningful as long as it exists - period.
If the code does not belong to the most performance-critical part of your application, keep it simple and use a shared_ptr:
shared_ptr<Foo> p(new Foo);
//No more need to call delete
It performs reference counting and is thread-safe. You can find it in TR1 (the std::tr1 namespace, #include <memory>), or if your compiler does not provide it, get it from Boost.


What happens in a double delete?

Obj *op = new Obj;
Obj *op2 = op;
delete op;
delete op2; // What happens here?
What's the worst that can happen when you accidentally double delete? Does it matter? Is the compiler going to throw an error?
It causes undefined behaviour. Anything can happen. In practice, a runtime crash is probably what I'd expect.
Undefined behavior. There are no guarantees whatsoever made by the standard. Probably your operating system makes some guarantees, like "you won't corrupt another process", but that doesn't help your program very much.
Your program could crash. Your data could be corrupted. The direct deposit of your next paycheck could instead take 5 million dollars out of your account.
It's undefined behavior, so the actual result will vary depending on the compiler & runtime environment.
In most cases, the compiler won't notice. In many, if not most, cases, the runtime memory management library will crash.
Under the hood, any memory manager has to maintain some metadata about each block of data it allocates, in a way that allows it to look up the metadata from the pointer that malloc/new returned. Typically this takes the form of a structure at fixed offset before the allocated block. This structure can contain a "magic number" -- a constant that is unlikely to occur by pure chance. If the memory manager sees the magic number in the expected place, it knows that the pointer provided to free/delete is most likely valid. If it doesn't see the magic number, or if it sees a different number that means "this pointer was recently freed", it can either silently ignore the free request, or it can print a helpful message and abort. Either is legal under the spec, and there are pro/con arguments to either approach.
If the memory manager doesn't keep a magic number in the metadata block, or doesn't otherwise check the sanity of the metadata, then anything can happen. Depending on how the memory manager is implemented, the result is most likely a crash without a helpful message, either immediately in the memory manager logic, somewhat later the next time the memory manager tries to allocate or free memory, or much later and far away when two different parts of the program each think they have ownership of the same chunk of memory.
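To make that concrete, here is a minimal sketch of such a header check (layout, names, and magic values are invented; a real allocator keeps ownership of freed blocks, so its checks are more reliable than this one):
#include <cstdio>
#include <cstdlib>

struct BlockHeader {
    unsigned magic;     // records the state of the block
    std::size_t size;   // user-visible size of the block
};

const unsigned MAGIC_LIVE  = 0xA110CA7E;   // block is currently allocated
const unsigned MAGIC_FREED = 0xFEEDFACE;   // block was already freed

void* debug_alloc(std::size_t size)
{
    BlockHeader* h = static_cast<BlockHeader*>(std::malloc(sizeof(BlockHeader) + size));
    if (!h) return 0;
    h->magic = MAGIC_LIVE;
    h->size  = size;
    return h + 1;   // user memory begins just past the header
}

void debug_free(void* p)
{
    if (!p) return;
    BlockHeader* h = static_cast<BlockHeader*>(p) - 1;   // find the metadata
    if (h->magic == MAGIC_FREED) {                       // double free detected
        std::fprintf(stderr, "double free of %p\n", p);
        std::abort();                                    // or silently ignore it
    }
    if (h->magic != MAGIC_LIVE) {                        // never came from debug_alloc
        std::fprintf(stderr, "invalid free of %p\n", p);
        std::abort();
    }
    h->magic = MAGIC_FREED;   // mark the block dead before handing it back
    std::free(h);
}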
Let's try it. Turn your code into a complete program in so.cpp:
class Obj
{
public:
int x;
};
int main( int argc, char* argv[] )
{
Obj *op = new Obj;
Obj *op2 = op;
delete op;
delete op2;
return 0;
}
Compile it (I'm using gcc 4.2.1 on OSX 10.6.8, but YMMV):
russell#Silverback ~: g++ so.cpp
Run it:
russell#Silverback ~: ./a.out
a.out(1965) malloc: *** error for object 0x100100080: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap
Lookie there, the gcc runtime actually detects that it was a double delete and is fairly helpful before it crashes.
While this is undefined:
int* a = new int;
delete a;
delete a; // same as your code
this is well defined:
int* a = new int;
delete a;
a = nullptr; // or just NULL or 0 if your compiler doesn't support c++11
delete a; // nothing happens!
Thought I should post it since no one else was mentioning it.
The compiler may give a warning in obvious cases (like your example), but it is not possible for it to always detect this. (You can use something like valgrind, which can detect it at runtime, though.) As for the behaviour, it can be anything. Some safe library might check and handle it fine -- but other runtimes (for speed) will assume your call is correct (which it's not) and then crash or worse. The runtime is allowed to assume you're not double deleting, even if double deleting would do something bad (e.g. crashing your computer).
Everyone already told you that you shouldn't do this and that it will cause undefined behavior. That is widely known, so let's elaborate on this on a lower level and let's see what actually happens.
Standard universal answer is that anything can happen, that's not entirely true. For example, the computer will not attempt to kill you for doing this (unless you are programming AI for a robot) :)
The reason why there can't be any universal answer is that, as this is undefined, it may differ from compiler to compiler and even across different versions of the same compiler.
But this is what "roughly" happens in most cases:
delete consists of two primary operations:
it calls the destructor if it's defined
it somehow frees the memory allocated to the object
So, if your destructor contains any code that accesses data of the class that was already deleted, it may segfault OR (most likely) you will read some nonsense data. If this deleted data includes pointers, then it will most likely segfault, because you will attempt to access memory that contains something else or that doesn't belong to you.
If your destructor doesn't touch any data or isn't present (let's not consider virtual destructors here, for simplicity), it may not be a cause of a crash in most compiler implementations. However, calling the destructor is not the only operation that happens here.
The memory needs to be freed. How that's done depends on the compiler implementation, but it may well execute some free-like function, giving it the pointer and size of your object. Calling free on memory that was already freed may crash, because the memory may not belong to you anymore. If it does still belong to you, it may not crash immediately, but it may overwrite memory that has since been allocated for some different object of your program.
That means one or more of your memory structures just got corrupted and your program will likely crash sooner or later or it might behave incredibly weirdly. The reasons will not be obvious in your debugger and you may spend weeks figuring out what the hell just happened.
So, as others have said, it's generally a bad idea, but I suppose you already know that. Don't worry though, innocent kitten will most likely not die if you delete an object twice.
Here is example code that is wrong but may nevertheless appear to work fine (it runs OK with GCC on Linux):
class a {};
int main()
{
a *test = new a();
delete test;
a *test2 = new a();
delete test;
return 0;
}
If I don't create an intermediate instance of the class between the deletes, the two calls to free on the same memory happen as expected:
*** Error in `./a.out': double free or corruption (fasttop): 0x000000000111a010 ***
To answer your questions directly:
What is the worst that can happen:
In theory, your program does something fatal. In some extreme cases it may even randomly attempt to wipe your hard drive. The chances depend on what your program actually is (a kernel driver? a user-space program?).
In practice, it would most likely just crash with segfault. But something worse might happen.
Is the compiler going to throw an error?
It shouldn't.
No, it isn't safe to delete the same pointer twice. It is undefined behaviour according to C++ standard.
From the C++ FAQ: visit this link
Is it safe to delete the same pointer twice?
No! (Assuming you didn’t get that pointer back from new in between.)
For example, the following is a disaster:
class Foo { /*...*/ };
void yourCode()
{
Foo* p = new Foo();
delete p;
delete p; // DISASTER!
// ...
}
That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy’s law, you’ll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).
Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems and if no one ever deploys your code on another system that handles things differently and if you are deleting something that doesn’t have a destructor and if you don’t do anything significant between the two deletes and if no one ever changes your code to do something significant between the two deletes and if your thread scheduler (over which you likely have no control!) doesn’t happen to swap threads between the two deletes and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.
A non-crash doesn’t prove the absence of a bug; it merely fails to prove the presence of a bug.
Trust me: double-delete is bad, bad, bad. Just say no.
On GCC 7.2, Clang 7.0, and Zapcc 5.0, it aborts on the 9th delete.
#include <iostream>
int main() {
int *x = new int;
for(size_t n = 1;;n++) {
std::cout << "delete x " << n << " time(s)\n";
delete x;
}
return 0;
}

Is there an established pointer value for a released pointee?

Some programmers like to set a pointer variable to null after releasing the pointee:
delete ptr;
ptr = 0;
If someone tries to release the pointee again, nothing will happen. In my opinion, this is wrong. Accessing a pointer after the pointee has been released is a bug, and bugs should jump in your face ASAP.
Is there an alternative value I could assign to a pointer variable that designates released pointees?
delete ptr;
ptr = SOME_MAGIC_VALUE;
Ideally, I would want Visual Studio 2008 to tell me "The program has been terminated because you tried to access an already released pointee here!" in debug mode.
Okay, it seems I have to do the checking myself. Anything wrong with the following template?
#include <cstdlib>    // abort
#include <iostream>

template <typename T>
void sole_delete(T*& p)
{
    if (p)
    {
        delete p;
        p = 0;
    }
    else
    {
        std::cerr << "pointee has already been released!\n";
        std::abort();
    }
}
No. Test for "0" when trying to delete something if you really want to warn or error out about it.
Alternatively, during development you could omit ptr = 0; and rely on valgrind to tell you where and when you're attempting a double free. Just be sure to put the ptr = 0; back for release.
Edit: Yes, people, I know C++ doesn't require a test around delete 0;
I am not suggesting if (ptr != 0) delete ptr;. I am suggesting if (ptr == 0) { some user error that the OP asked for } delete ptr;
Assign NULL after releasing a pointer, and check for NULL before using it. If it is null, report an error yourself.
Is there an alternative value I could assign to a pointer variable that designates released pointees?
Ideally, I would want Visual Studio 2008 to tell me "The program has been terminated because you tried to access an already released pointee here!" in debug mode.
You very likely get this just by doing delete ptr. The runtime will catch you if you double-delete the pointer.
Anyway, I don't think I have written ptr = NULL more than a handful of times in the last decade. Why would I do this? Such a pointer is certainly hidden within an object whose destructor will delete the object it refers to, and after that destructor has been invoked the pointer is gone, too.
And if some circumstances would require me to leave a pointer to hang around after the pointee has been deleted, I wouldn't set it to NULL simply because I would want the code to crash ASAP if I'd double-delete. Setting the pointer to NULL just masks an error.
Of course, all this doesn't mean that one wouldn't want a pointer that might be explicitly set to "nothing", and use NULL for that. But not to mask a double-deletion error.
No, calling delete on a null pointer is perfectly normal from C++ point of view. Assigning some magic value will break code severely - you'll now have to distinguish between null pointers, valid pointers and magic value pointers and I guess it will be a huge mess.
If you really oppose deleting a null pointer you can have a separate boolean flag together with each pointer meaning that it has been deleted. Perhaps you could write a wrapper class for that.
If you just want to check allocations and deletions the easiest way is to write your own global operator new and operator delete and manually keep track of all pointers that are allocated and deallocated.
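A bare-bones sketch of that bookkeeping (single-threaded only; the guard flag keeps the set's own node allocations from recursing back into the tracker, and all names are invented):
#include <cstdio>
#include <cstdlib>
#include <new>
#include <set>

namespace {
    bool tracking = false;          // off while the tracker itself allocates

    std::set<void*>& live()
    {
        static std::set<void*> s;   // every pointer currently owned by new
        return s;
    }
}

void* operator new(std::size_t size)
{
    void* p = std::malloc(size ? size : 1);
    if (!p) throw std::bad_alloc();
    if (tracking) {
        tracking = false;           // the set's insert allocates, too
        live().insert(p);
        tracking = true;
    }
    return p;
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    if (tracking) {
        tracking = false;
        bool known = live().erase(p) != 0;
        tracking = true;
        if (!known) {               // double delete or foreign pointer
            std::fprintf(stderr, "bad delete of %p\n", p);
            return;                 // don't free memory we never handed out
        }
    }
    std::free(p);
}

int main()
{
    tracking = true;
    int* i = new int;
    delete i;
    delete i;                       // prints: bad delete of 0x...
    tracking = false;
    return 0;
}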
Of course, you can also use an existing tool that does that for you, e.g. Valgrind.
If you also want to protocol each pointer access, this gets hairy. You essentially have to either patch the executable or execute it in a virtual machine where each pointer access is redirected to your bookkeeping routine.
But once again, existing tools such as Valgrind already do that for you. In the case of Valgrind, your executable is run inside a virtual machine; other programs go the way of patching your application by modifying the byte code.
When you delete a pointer in debug mode, many compilers paint the bytes with certain fill values to mark the memory as "invalid" in case you try to read it. Of course, genuine memory may happen to contain those very bytes, so the allocator stores a bit extra to indicate whether the pointer you are reading is valid or not, and paints the bytes you do not directly access.
It is not wrong to call delete multiple times on the same pointer variable, only on the same pointee.
Maybe this isn't the best way to do it, but it's perfectly legal, of course...
T* array[N];
for (int i = 0; i < N; ++i)
{
    array[i] = new T;
}
T* ptr;
for (int i = 0; i < N; ++i)
{
    ptr = array[i];
    delete ptr;
}
and, apart from not being the best way to do things, I am calling delete on the variable ptr multiple times, but on different addresses, which is clearly not an error.
I think sharptooth already provided a valid answer, but I think he failed to spell it out explicitly:
If it is an error in your code to access a pointer variable after its object has been deleted via that pointer variable, then you have to add some checking yourself. (Possibly via some flag.)
Answering the question in question[sic].
No, there's no established value for released pointers.
I think any access to an invalid pointer (like NULL) should be noted - not only accessing them after release, which may never happen if no (non-NULL) initialization takes place. The debugger is bound to warn you when you try to access a null pointer - if it doesn't, you shouldn't be using it.
edit: end of answering the original question; rambling about double-delete
It really depends on the design if delete on NULL is a bug waiting to happen. In many cases it's not. Perhaps you should use "safe delete" when that is needed - or while debugging? Like this:
#include <stdexcept>

template <typename T>
void safe_delete(T*& ptr)
{
    if (ptr == 0)
        throw std::runtime_error("Deleting NULL!");
    delete ptr;
    ptr = 0;
}
There is no point.
I won't enter the apparently rather hot conversation going on, just point out an obvious fact: pointers are passed by copy.
With some code, it gives:
T* p = /* something */;
T* q = p;
delete q;
q = 0;
Do you feel safe?
The problem is that you have no way to ensure that your magic value has been propagated to all pointers to the object.
This is like plugging a hole in a sieve and hoping it'll stop the water from pouring out.
In C++0x, you can
delete ptr;
ptr = nullptr;
Setting the pointer to a specific value will only affect this copy of the pointer, so it provides nearly no protection. If the program needs to be verifiably correct, there are smart pointer classes that track copies of the pointer and invalidate those, otherwise I'd just recommend a tool like Valgrind (on Linux) or Rational Purify (on Windows) that will let you check for memory access errors.
It’s not an error to delete a null pointer; by definition it does nothing.
Generally it’s a Bad Idea™ to null pointer variables after delete, because the only effect it can have is to hide a bug that causes multiple deletion (with the pointer variable nulled, the second deletion will have no effect, instead of e.g. crashing).
Generally, nulling of pointers belongs, in my view, with all the other Microsoft’isms such as Hungarian notation and extensive use of macros.
It’s something that may once have had a good rationale, but which today, as of 2011, just has negative effects, and is used out of sheer inertia: idea propagation of the same kind that Knuth once described for random generators – the almost worst possible one gaining popularity and then incorporated as the default generator in umpteen language implementations and libraries, with most people thinking the extensive usage meant it had to be at least reasonable.
However, having said that, for the person who leans towards the ultra-formally pedantic it can be at least an emotionally satisfying idea to null pointers in e.g. a std::vector, after delete. The reason is that the Holy Standard, ISO/IEC 14882, allows the std::vector destructor to do rather unholy things, such as copying the pointer values around. And in the formally pedantic view, even such copying of invalid pointer values incurs Undefined Behavior. Not that it is a practical concern. First of all I know of absolutely no modern platform where copying would have any ill effect, and secondly so much code relies on standard containers behaving reasonably that they just have to: otherwise, nobody would use such an implementation.
Cheers & hth.

Is it good practice to NULL a pointer after deleting it?

I'll start out by saying, use smart pointers and you'll never have to worry about this.
What are the problems with the following code?
Foo * p = new Foo;
// (use p)
delete p;
p = NULL;
This was sparked by an answer and comments to another question. One comment from Neil Butterworth generated a few upvotes:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when it is a good thing to do, and times when it is pointless and can hide errors.
There are plenty of circumstances where it wouldn't help. But in my experience, it can't hurt. Somebody enlighten me.
Setting a pointer to 0 (which is "null" in standard C++, the NULL define from C is somewhat different) avoids crashes on double deletes.
Consider the following:
Foo* foo = 0; // Sets the pointer to 0 (C++ NULL)
delete foo; // Won't do anything
Whereas:
Foo* foo = new Foo();
delete foo; // Deletes the object
delete foo; // Undefined behavior
In other words, if you don't set deleted pointers to 0, you will get into trouble if you're doing double deletes. An argument against setting pointers to 0 after delete would be that doing so just masks double delete bugs and leaves them unhandled.
It's best to not have double delete bugs, obviously, but depending on ownership semantics and object lifecycles, this can be hard to achieve in practice. I prefer a masked double delete bug over UB.
Finally, a sidenote regarding managing object allocation, I suggest you take a look at std::unique_ptr for strict/singular ownership, std::shared_ptr for shared ownership, or another smart pointer implementation, depending on your needs.
Setting pointers to NULL after you've deleted what it pointed to certainly can't hurt, but it's often a bit of a band-aid over a more fundamental problem: Why are you using a pointer in the first place? I can see two typical reasons:
You simply wanted something allocated on the heap. In which case wrapping it in a RAII object would have been much safer and cleaner. End the RAII object's scope when you no longer need the object. That's how std::vector works, and it solves the problem of accidentally leaving pointers to deallocated memory around. There are no pointers.
Or perhaps you wanted some complex shared ownership semantics. The pointer returned from new might not be the same as the one that delete is called on. Multiple objects may have used the object simultaneously in the meantime. In that case, a shared pointer or something similar would have been preferable.
My rule of thumb is that if you leave pointers around in user code, you're Doing It Wrong. The pointer shouldn't be there to point to garbage in the first place. Why isn't there an object taking responsibility for ensuring its validity? Why doesn't its scope end when the pointed-to object does?
I've got an even better best practice: Where possible, end the variable's scope!
{
Foo* pFoo = new Foo;
// use pFoo
delete pFoo;
}
I always set a pointer to NULL (now nullptr) after deleting the object(s) it points to.
It can help catch many references to freed memory (assuming your platform faults on a deref of a null pointer).
It won't catch all references to free'd memory if, for example, you have copies of the pointer lying around. But some is better than none.
It will mask a double-delete, but I find those are far less common than accesses to already freed memory.
In many cases the compiler is going to optimize it away. So the argument that it's unnecessary doesn't persuade me.
If you're already using RAII, then there aren't many deletes in your code to begin with, so the argument that the extra assignment causes clutter doesn't persuade me.
It's often convenient, when debugging, to see the null value rather than a stale pointer.
If this still bothers you, use a smart pointer or a reference instead.
I also set other types of resource handles to the no-resource value when the resource is free'd (which is typically only in the destructor of an RAII wrapper written to encapsulate the resource).
I worked on a large (9 million statements) commercial product (primarily in C). At one point, we used macro magic to null out the pointer when memory was freed. This immediately exposed lots of lurking bugs that were promptly fixed. As far as I can remember, we never had a double-free bug.
Update: Microsoft considers this good practice for security and recommends it in their SDL policies. Apparently MSVC++11 will stomp the deleted pointer automatically (in many circumstances) if you compile with the /SDL option.
Firstly, there are a lot of existing questions on this and closely related topics, for example Why doesn't delete set the pointer to NULL?.
In your code, the issue is what goes on in (use p). For example, if somewhere you have code like this:
Foo * p2 = p;
then setting p to NULL accomplishes very little, as you still have the pointer p2 to worry about.
This is not to say that setting a pointer to NULL is always pointless. For example, if p were a member variable pointing to a resource whose lifetime was not exactly the same as the class containing p, then setting p to NULL could be a useful way of indicating the presence or absence of the resource.
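A minimal sketch of that member-pointer pattern; Resource and Owner are names made up for illustration:
#include <cstddef>

class Resource { };

class Owner {
public:
    Owner() : p(NULL) {}
    ~Owner() { release(); }
    void acquire() { release(); p = new Resource; }
    void release() { delete p; p = NULL; }       // NULL means "no resource held"
    bool hasResource() const { return p != NULL; }
private:
    Resource* p;
};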
If there is more code after the delete, Yes. When the pointer is deleted in a destructor or at the end of a method or function, No.
The point of this practice is to remind the programmer, at run time, that the object has already been deleted.
An even better practice is to use Smart Pointers (shared or scoped) which automagically delete their target objects.
As others have said, delete ptr; ptr = 0; is not going to cause demons to fly out of your nose. However, it does encourage the usage of ptr as a flag of sorts. The code becomes littered with delete and setting the pointer to NULL. The next step is to scatter if (arg == NULL) return; through your code to protect against the accidental usage of a NULL pointer. The problem occurs once the checks against NULL become your primary means of checking for the state of an object or program.
I'm sure that there is a code smell about using a pointer as a flag somewhere but I haven't found one.
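A hypothetical sketch of that slide into flag-checking; Widget and everything around it are made-up names:
#include <cstddef>

struct Widget { void doWork() {} };

Widget* gWidget = NULL;        // assumed to be created during startup

void shutdown() {
    delete gWidget;
    gWidget = NULL;            // NULL now doubles as "we have shut down"
}

void process() {
    if (gWidget == NULL)       // the NULL check has quietly become program state
        return;
    gWidget->doWork();
}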
I'll change your question slightly:
Would you use an uninitialized pointer? You know, one that you didn't set to NULL or allocate the memory it points to?
There are two scenarios where setting the pointer to NULL can be skipped:
the pointer variable goes out of scope immediately
you have overloaded the semantics of the pointer and are using its value not only as a memory address, but also as a key or raw value. This approach, however, suffers from other problems.
Meanwhile, arguing that setting the pointer to NULL might hide errors sounds to me like arguing that you shouldn't fix a bug because the fix might hide another bug. The only bugs that might show up if the pointer is not set to NULL are the ones that try to use the pointer. But setting it to NULL would actually surface exactly the same bug that using the freed memory would, wouldn't it?
If you have no other constraint that forces you to either set or not set the pointer to NULL after you delete it (one such constraint was mentioned by Neil Butterworth), then my personal preference is to leave it be.
For me, the question isn't "is this a good idea?" but "what behavior would I prevent or allow to succeed by doing this?" For example, if this allows other code to see that the pointer is no longer available, why is other code even attempting to look at freed pointers after they are freed? Usually, it's a bug.
It also does more work than necessary as well as hindering post-mortem debugging. The less you touch memory after you don't need it, the easier it is to figure out why something crashed. Many times I have relied on the fact that memory is in a similar state to when a particular bug occurred to diagnose and fix said bug.
Explicitly nulling after delete strongly suggests to a reader that the pointer represents something which is conceptually optional. If I saw that being done, I'd start worrying that everywhere the pointer gets used in the source, it should first be tested against NULL.
If that's what you actually mean, it's better to make that explicit in the source using something like boost::optional
#include <boost/optional.hpp>
using boost::optional;

optional<Foo*> p(new Foo);
// (use p.get(), but must test p for truth first!...)
delete p.get();
p = optional<Foo*>();   // back to the empty state
But if you really wanted people to know the pointer has "gone bad", I'll pitch in 100% agreement with those who say the best thing to do is make it go out of scope. Then you're using the compiler to prevent the possibility of bad dereferences at runtime.
That's the baby in all the C++ bathwater; don't throw it out. :)
In a well structured program with appropriate error checking, there is no reason not to assign it null. 0 stands alone as a universally recognized invalid value in this context. Fail hard and fail soon.
Many of the arguments against assigning 0 suggest that it could hide a bug or complicate control flow. Fundamentally, that is either an upstream error (not your fault; sorry for the bad pun) or another error on the programmer's part, perhaps even an indication that program flow has grown too complex.
If the programmer wants to introduce the use of a pointer which may be null as a special value and write all the necessary dodging around that, that's a complication they have deliberately introduced. The better the quarantine, the sooner you find cases of misuse, and the less they are able to spread into other programs.
Well structured programs may be designed using C++ features to avoid these cases. You can use references, or you can just say "passing/using null or invalid arguments is an error", an approach which is equally applicable to containers, such as smart pointers. Increasingly consistent and correct behavior keeps these bugs from getting far.
From there, you have only a very limited scope and context where a null pointer may exist (or is permitted).
The same may be applied to pointers which are not const. Following the value of a pointer is trivial because its scope is so small, and improper use is checked and well defined. If your toolset and engineers cannot follow the program after a quick read, or if there is inappropriate error checking or inconsistent/lenient program flow, you have other, bigger problems.
Finally, your compiler and environment likely have some guards for the times when you would like to introduce errors (scribbling), detect accesses to freed memory, and catch other related UB. You can also introduce similar diagnostics into your programs, often without affecting existing programs.
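As one concrete example of such a diagnostic (assuming a GCC or Clang toolchain with AddressSanitizer available), a double delete becomes an immediate, located report rather than silent corruption:
// dd.cpp -- a deliberate double delete to demonstrate the tooling
// build and run: g++ -g -fsanitize=address dd.cpp && ./a.out
int main() {
    double* p = new double(1.0);
    delete p;
    delete p;    // AddressSanitizer aborts here with an "attempting double-free"
                 // report and a stack trace pointing at this line
    return 0;
}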
Let me expand what you've already put into your question.
Here's what you've put into your question, in bullet-point form:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when:
it is a good thing to do
and times when it is pointless and can hide errors.
However, there are no times when this is bad! You will not introduce more bugs by explicitly nulling a pointer, you will not leak memory, and you will not cause undefined behaviour.
So, if in doubt, just null it.
Having said that, if you feel that you have to explicitly null some pointer, then to me this sounds like you haven't split up a method enough, and should look at the refactoring approach called "Extract method" to split up the method into separate parts.
There are always dangling pointers to worry about.
Yes.
The only "harm" it can do is to introduce inefficiency (an unnecessary store operation) into your program - but this overhead will be insignificant in relation to the cost of allocating and freeing the block of memory in most cases.
If you don't do it, you will have some nasty pointer dereference bugs one day.
I always use a macro for delete:
#define SAFEDELETE(ptr) do { delete (ptr); (ptr) = NULL; } while (0)  // do/while keeps it safe inside if/else
(and similarly for arrays, free(), and releasing handles)
You can also write "self delete" methods that take a reference to the calling code's pointer, so they force the calling code's pointer to NULL. For example, to delete a subtree of many objects:
// DeleteSubtree is declared static inside the TreeItem class; the out-of-class
// definition must not repeat the static keyword. numChildren and child[] are
// assumed members of TreeItem.
void TreeItem::DeleteSubtree(TreeItem *&rootObject)
{
    if (rootObject == NULL)
        return;
    rootObject->UnlinkFromParent();
    for (int i = 0; i < rootObject->numChildren; ++i)
        DeleteSubtree(rootObject->child[i]);
    delete rootObject;
    rootObject = NULL;
}
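Usage then looks like this; BuildTree is a hypothetical helper:
TreeItem* root = BuildTree();
TreeItem::DeleteSubtree(root);   // deletes the subtree and forces root to NULL
TreeItem::DeleteSubtree(root);   // a second call is now a harmless no-op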
edit
Yes, these techniques do violate some rules about the use of macros (and yes, these days you could probably achieve the same result with templates), but over many years of use I never once accessed dead memory, which is one of the nastiest, most difficult, and most time-consuming problems you can face when debugging. In practice, over many years, they have effectively eliminated a whole class of bugs from every team I have introduced them on.
There are also many ways you could implement the above - I am just trying to illustrate the idea of forcing people to NULL a pointer if they delete an object, rather than providing a means for them to release the memory that does not NULL the caller's pointer.
Of course, the above example is just a step towards an auto-pointer. Which I didn't suggest because the OP was specifically asking about the case of not using an auto pointer.
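For completeness, a sketch of the template alternative mentioned in the edit above; safeDelete is a hypothetical name, not a standard facility:
template <typename T>
void safeDelete(T*& ptr) {
    delete ptr;
    ptr = NULL;        // the caller's own pointer is nulled, as with the macro
}

template <typename T>
void safeDeleteArray(T*& ptr) {
    delete[] ptr;
    ptr = NULL;
}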
"There are times when it is a good thing to do, and times when it is pointless and can hide errors"
I can see two problems:
This simple code:
delete myObj;
myObj = 0;
becomes a four-liner in a multithreaded environment:
lock(myObjMutex);
delete myObj;
myObj = 0;
unlock(myObjMutex);
The "best practice" of Don Neufeld don't apply always. E.g. in one automotive project we had to set pointers to 0 even in destructors. I can imagine in safety-critical software such rules are not uncommon. It is easier (and wise) to follow them than trying to persuade
the team/code-checker for each pointer use in code, that a line nulling this pointer is redundant.
Another danger is relying on this technique in exception-using code:
try {
    delete myObj;   // exception thrown from the destructor
    myObj = 0;      // never reached if the destructor throws
}
catch (...)
{
    // myObj = 0;   // <- nulling here can mask a resource leak
}
if (myObj)
    // use myObj   <-- undefined behaviour: it may point to freed memory
In such code you either produce a resource leak and postpone the problem, or the process crashes.
So these two problems, which came spontaneously to mind (Herb Sutter would surely name more), make all questions of the kind "How do I avoid smart pointers and do the job safely with raw pointers?" obsolete for me.
If you're going to reallocate the pointer before using it again (dereferencing it, passing it to a function, etc.), making the pointer NULL is just an extra operation. However, if you aren't sure whether it will be reallocated or not before it is used again, setting it to NULL is a good idea.
As many have said, it is of course much easier to just use smart pointers.
Edit: As Thomas Matthews said in this earlier answer, if a pointer is deleted in a destructor, there isn't any need to assign NULL to it since it won't be used again because the object is being destroyed already.
I can imagine setting a pointer to NULL after deleting it being useful in rare cases where there is a legitimate scenario of reusing it in a single function (or object). Otherwise it makes no sense - a pointer needs to point to something meaningful as long as it exists - period.
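A sketch of that rare reuse case; use and needSecondPass are placeholders:
Foo* p = new Foo;
use(*p);
delete p;
p = NULL;                  // p may legitimately be reused below

if (needSecondPass) {
    p = new Foo;
    use(*p);
}
delete p;                  // fine either way: deleting a NULL pointer is a no-op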
If the code does not belong to the most performance-critical part of your application, keep it simple and use a shared_ptr:
shared_ptr<Foo> p(new Foo);
//No more need to call delete
It performs reference counting, and the reference count itself is updated in a thread-safe way. You can find it in TR1 (the std::tr1 namespace, #include <memory>) or, if your compiler does not provide it, get it from Boost.
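A small sketch of the reference counting in action, using the Boost spelling (the std::tr1 version is identical apart from the namespace):
#include <boost/shared_ptr.hpp>

struct Foo { };

int main() {
    boost::shared_ptr<Foo> p(new Foo);   // use count: 1
    {
        boost::shared_ptr<Foo> q = p;    // use count: 2
    }                                    // q destroyed; use count back to 1
    p.reset();                           // use count: 0; Foo deleted exactly once
    return 0;
}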