I'll start out by saying, use smart pointers and you'll never have to worry about this.
What are the problems with the following code?
Foo * p = new Foo;
// (use p)
delete p;
p = NULL;
This was sparked by an answer and comments to another question. One comment from Neil Butterworth generated a few upvotes:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when it is a good thing to do, and times when it is pointless and can hide errors.
There are plenty of circumstances where it wouldn't help. But in my experience, it can't hurt. Somebody enlighten me.
Setting a pointer to 0 (which is "null" in standard C++, the NULL define from C is somewhat different) avoids crashes on double deletes.
Consider the following:
Foo* foo = 0; // Sets the pointer to 0 (C++ NULL)
delete foo; // Won't do anything
Whereas:
Foo* foo = new Foo();
delete foo; // Deletes the object
delete foo; // Undefined behavior
In other words, if you don't set deleted pointers to 0, you will get into trouble if you're doing double deletes. An argument against setting pointers to 0 after delete would be that doing so just masks double delete bugs and leaves them unhandled.
It's best to not have double delete bugs, obviously, but depending on ownership semantics and object lifecycles, this can be hard to achieve in practice. I prefer a masked double delete bug over UB.
Finally, a sidenote regarding managing object allocation, I suggest you take a look at std::unique_ptr for strict/singular ownership, std::shared_ptr for shared ownership, or another smart pointer implementation, depending on your needs.
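For illustration, a minimal sketch of what that looks like (assuming C++11 or later; Foo is the type from the question):

#include <memory>

void example()
{
    std::unique_ptr<Foo> owner(new Foo);                    // sole owner; Foo is deleted automatically
    std::shared_ptr<Foo> shared = std::make_shared<Foo>();  // reference-counted shared ownership
    // no delete calls, and no raw pointer left dangling when the owners go out of scope
}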
Setting a pointer to NULL after you've deleted what it pointed to certainly can't hurt, but it's often a bit of a band-aid over a more fundamental problem: why are you using a pointer in the first place? I can see two typical reasons:
You simply wanted something allocated on the heap. In which case wrapping it in a RAII object would have been much safer and cleaner. End the RAII object's scope when you no longer need the object. That's how std::vector works, and it solves the problem of accidentally leaving pointers to deallocated memory around. There are no pointers.
Or perhaps you wanted some complex shared ownership semantics. The pointer returned from new might not be the same as the one that delete is called on. Multiple objects may have used the object simultaneously in the meantime. In that case, a shared pointer or something similar would have been preferable.
My rule of thumb is that if you leave pointers around in user code, you're Doing It Wrong. The pointer shouldn't be there to point to garbage in the first place. Why isn't there an object taking responsibility for ensuring its validity? Why doesn't its scope end when the pointed-to object does?
I've got an even better best practice: Where possible, end the variable's scope!
{
Foo* pFoo = new Foo;
// use pFoo
delete pFoo;
}
I always set a pointer to NULL (now nullptr) after deleting the object(s) it points to.
It can help catch many references to freed memory (assuming your platform faults on a deref of a null pointer).
It won't catch all references to free'd memory if, for example, you have copies of the pointer lying around. But some is better than none.
It will mask a double-delete, but I find those are far less common than accesses to already freed memory.
In many cases the compiler is going to optimize it away. So the argument that it's unnecessary doesn't persuade me.
If you're already using RAII, then there aren't many deletes in your code to begin with, so the argument that the extra assignment causes clutter doesn't persuade me.
It's often convenient, when debugging, to see the null value rather than a stale pointer.
If this still bothers you, use a smart pointer or a reference instead.
I also set other types of resource handles to the no-resource value when the resource is free'd (which is typically only in the destructor of an RAII wrapper written to encapsulate the resource).
I worked on a large (9 million statements) commercial product (primarily in C). At one point, we used macro magic to null out the pointer when memory was freed. This immediately exposed lots of lurking bugs that were promptly fixed. As far as I can remember, we never had a double-free bug.
Update: Microsoft believes that it's a good practice for security and recommends the practice in their SDL policies. Apparently MSVC++11 will stomp the deleted pointer automatically (in many circumstances) if you compile with the /SDL option.
Firstly, there are a lot of existing questions on this and closely related topics, for example Why doesn't delete set the pointer to NULL?.
In your code, the issue is what goes on in (use p). For example, if somewhere you have code like this:
Foo * p2 = p;
then setting p to NULL accomplishes very little, as you still have the pointer p2 to worry about.
This is not to say that setting a pointer to NULL is always pointless. For example, if p were a member variable pointing to a resource whose lifetime was not exactly the same as the class containing p, then setting p to NULL could be a useful way of indicating the presence or absence of the resource.
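A minimal sketch of that member-pointer pattern (Resource and the class name here are illustrative, not from the original code):

struct Resource { /* ... */ };

class Holder
{
public:
    Holder() : resource(NULL) {}

    void AcquireResource() { resource = new Resource; }

    void ReleaseResource()
    {
        delete resource;     // deleting NULL is a harmless no-op
        resource = NULL;     // NULL now signals "no resource attached"
    }

    bool HasResource() const { return resource != NULL; }

private:
    Resource* resource;      // its lifetime is not tied to the Holder's
};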
If there is more code after the delete, Yes. When the pointer is deleted in a destructor or at the end of a method or function, No.
The point of this practice is to remind the programmer, during run-time, that the object has already been deleted.
An even better practice is to use Smart Pointers (shared or scoped) which automagically delete their target objects.
As others have said, delete ptr; ptr = 0; is not going to cause demons to fly out of your nose. However, it does encourage the usage of ptr as a flag of sorts. The code becomes littered with delete and setting the pointer to NULL. The next step is to scatter if (arg == NULL) return; through your code to protect against the accidental usage of a NULL pointer. The problem occurs once the checks against NULL become your primary means of checking for the state of an object or program.
I'm sure that there is a code smell about using a pointer as a flag somewhere but I haven't found one.
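One way out of the pointer-as-flag habit is to pass a reference where an object must exist, so there is no null state to check for (a sketch with illustrative names):

struct Widget { void DoWork() {} };

// Instead of: void Process(Widget* arg) { if (arg == NULL) return; ... }
void Process(Widget& arg)
{
    arg.DoWork();   // arg is guaranteed to refer to a live object; no NULL check needed
}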
I'll change your question slightly:
Would you use an uninitialized pointer? You know, one that you didn't set to NULL or allocate the memory it points to?
There are two scenarios where setting the pointer to NULL can be skipped:
the pointer variable goes out of scope immediately
you have overloaded the semantics of the pointer and are using its value not only as a memory pointer, but also as a key or raw value. This approach, however, suffers from other problems.
Meanwhile, arguing that setting the pointer to NULL might hide errors to me sounds like arguing that you shouldn't fix a bug because the fix might hide another bug. The only bugs that might show if the pointer is not set to NULL would be the ones that try to use the pointer. But setting it to NULL would actually cause exactly the same bug as would show if you use it with freed memory, wouldn't it?
If you have no other constraint that forces you to either set or not set the pointer to NULL after you delete it (one such constraint was mentioned by Neil Butterworth), then my personal preference is to leave it be.
For me, the question isn't "is this a good idea?" but "what behavior would I prevent or allow to succeed by doing this?" For example, if this allows other code to see that the pointer is no longer available, why is other code even attempting to look at freed pointers after they are freed? Usually, it's a bug.
It also does more work than necessary as well as hindering post-mortem debugging. The less you touch memory after you don't need it, the easier it is to figure out why something crashed. Many times I have relied on the fact that memory is in a similar state to when a particular bug occurred to diagnose and fix said bug.
Explicitly nulling after delete strongly suggests to a reader that the pointer represents something which is conceptually optional. If I saw that being done, I'd start worrying that, everywhere in the source the pointer gets used, it should first be tested against NULL.
If that's what you actually mean, it's better to make that explicit in the source using something like boost::optional
#include <boost/optional.hpp>
using boost::optional;

optional<Foo*> p(new Foo);
// (use p.get(), but must test p for truth first!...)
delete p.get();
p = optional<Foo*>();
But if you really wanted people to know the pointer has "gone bad", I'll pitch in 100% agreement with those who say the best thing to do is make it go out of scope. Then you're using the compiler to prevent the possibility of bad dereferences at runtime.
That's the baby in all the C++ bathwater, shouldn't throw it out. :)
In a well structured program with appropriate error checking, there is no reason not to assign it null. 0 stands alone as a universally recognized invalid value in this context. Fail hard and Fail soon.
Many of the arguments against assigning 0 suggest that it could hide a bug or complicate control flow. Fundamentally, that is either an upstream error (not your fault (sorry for the bad pun)) or another error on the programmer's behalf -- perhaps even an indication that program flow has grown too complex.
If the programmer wants to introduce the use of a pointer which may be null as a special value and write all the necessary dodging around that, that's a complication they have deliberately introduced. The better the quarantine, the sooner you find cases of misuse, and the less they are able to spread into other programs.
Well structured programs may be designed using C++ features to avoid these cases. You can use references, or you can just say "passing/using null or invalid arguments is an error" -- an approach which is equally applicable to containers, such as smart pointers. Increasingly consistent and correct behavior keeps these bugs from getting far.
From there, you have only a very limited scope and context where a null pointer may exist (or is permitted).
The same may be applied to pointers which are not const. Following the value of a pointer is trivial because its scope is so small, and improper use is checked and well defined. If your toolset and engineers cannot follow the program following a quick read or there is inappropriate error checking or inconsistent/lenient program flow, you have other, bigger problems.
Finally, your compiler and environment likely have some guards for the times when you would like to introduce errors (scribbling), detect accesses to freed memory, and catch other related UB. You can also introduce similar diagnostics into your programs, often without affecting existing programs.
Let me expand on what you've already put into your question.
Here's what you've put into your question, in bullet-point form:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when:
it is a good thing to do
and times when it is pointless and can hide errors.
However, there are no times when this is bad! You will not introduce more bugs by explicitly nulling it, you will not leak memory, and you will not cause undefined behaviour to happen.
So, if in doubt, just null it.
Having said that, if you feel that you have to explicitly null some pointer, then to me this sounds like you haven't split up a method enough, and should look at the refactoring approach called "Extract method" to split up the method into separate parts.
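A sketch of that refactoring (illustrative names; the point is that the raw pointer never outlives the extracted method):

void UseFoo()
{
    Foo* p = new Foo;
    // ... use p ...
    delete p;        // p goes out of scope right here; there is nothing left to null out
}

void LongFunction()
{
    UseFoo();        // no raw pointer is visible at this level
    // ... rest of the work ...
}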
There are always dangling pointers to worry about.
Yes.
The only "harm" it can do is to introduce inefficiency (an unnecessary store operation) into your program - but this overhead will be insignificant in relation to the cost of allocating and freeing the block of memory in most cases.
If you don't do it, you will have some nasty pointer dereference bugs one day.
I always use a macro for delete:
#define SAFEDELETE(ptr) { delete(ptr); ptr = NULL; }
(and similar for an array, free(), releasing handles)
You can also write "self delete" methods that take a reference to the calling code's pointer, so they force the calling code's pointer to NULL. For example, to delete a subtree of many objects:
// Declared static in the class; "static" is not repeated on the out-of-class definition.
// numChildren and child[] are assumed to be members of TreeItem.
void TreeItem::DeleteSubtree(TreeItem *&rootObject)
{
    if (rootObject == NULL)
        return;

    rootObject->UnlinkFromParent();
    for (int i = 0; i < rootObject->numChildren; ++i)
        DeleteSubtree(rootObject->child[i]);

    delete rootObject;
    rootObject = NULL;   // force the caller's pointer to NULL
}
edit
Yes, these techniques do violate some rules about the use of macros (and yes, these days you could probably achieve the same result with templates) - but by using them over many years I never ever accessed dead memory - one of the nastiest and most difficult and most time-consuming problems to debug that you can face. In practice, over many years, they have effectively eliminated a whole class of bugs from every team I have introduced them on.
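For example, a template version along the lines hinted at above might look like this (a sketch, not the exact code we shipped):

template <typename T>
void SafeDelete(T*& ptr)
{
    delete ptr;      // deleting a null pointer is a harmless no-op
    ptr = NULL;      // force the caller's pointer to NULL
}

template <typename T>
void SafeDeleteArray(T*& ptr)
{
    delete[] ptr;
    ptr = NULL;
}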
There are also many ways you could implement the above - I am just trying to illustrate the idea of forcing people to NULL a pointer if they delete an object, rather than providing a means for them to release the memory that does not NULL the caller's pointer.
Of course, the above example is just a step towards an auto-pointer. Which I didn't suggest because the OP was specifically asking about the case of not using an auto pointer.
"There are times when it is a good thing to do, and times when it is pointless and can hide errors"
I can see two problems:
That simple code:
delete myObj;
myObj = 0;
becomes a four-liner in a multithreaded environment:
lock(myObjMutex);
delete myObj;
myObj = 0;
unlock(myObjMutex);
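With standard library primitives (assuming C++11; lock/unlock above are pseudo-code, and Foo is the type from the question), the same idea might be written as:

#include <mutex>

std::mutex myObjMutex;
Foo*       myObj = 0;

void DestroyMyObj()
{
    std::lock_guard<std::mutex> guard(myObjMutex);  // released automatically, even on exceptions
    delete myObj;
    myObj = 0;
}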
The "best practice" of Don Neufeld doesn't always apply. E.g. in one automotive project we had to set pointers to 0 even in destructors. I can imagine that in safety-critical software such rules are not uncommon. It is easier (and wiser) to follow them than to try to persuade the team/code-checker, for each pointer use in the code, that the line nulling this pointer is redundant.
Another danger is relying on this technique in exceptions-using code:
try {
    delete myObj;    // exception in destructor
    myObj = 0;
}
catch (...)
{
    // myObj = 0;    <- possible resource leak
}

if (myObj)
    // use myObj     <-- undefined behaviour
In such code, you either produce a resource leak and postpone the problem, or the process crashes.
So these two problems, which came to my mind spontaneously (Herb Sutter would surely name more), make all questions of the kind "How do I avoid smart pointers and do the job safely with normal pointers?" obsolete to me.
If you're going to reallocate the pointer before using it again (dereferencing it, passing it to a function, etc.), making the pointer NULL is just an extra operation. However, if you aren't sure whether it will be reallocated or not before it is used again, setting it to NULL is a good idea.
As many have said, it is of course much easier to just use smart pointers.
Edit: As Thomas Matthews said in this earlier answer, if a pointer is deleted in a destructor, there isn't any need to assign NULL to it since it won't be used again because the object is being destroyed already.
I can imagine setting a pointer to NULL after deleting it being useful in rare cases where there is a legitimate scenario of reusing it in a single function (or object). Otherwise it makes no sense - a pointer needs to point to something meaningful as long as it exists - period.
If the code does not belong to the most performance-critical part of your application, keep it simple and use a shared_ptr:
shared_ptr<Foo> p(new Foo);
//No more need to call delete
It performs reference counting and is thread-safe. You can find it in TR1 (the std::tr1 namespace, #include <memory>), or, if your compiler does not provide it, get it from Boost.
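On a current compiler the same thing can be written as (a sketch assuming C++11 or later, where shared_ptr lives directly in std):

#include <memory>

std::shared_ptr<Foo> p = std::make_shared<Foo>();  // reference-counted; no delete needed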
I believe what I have just experienced is called "undefined behavior", but I'm not quite sure. Basically, I had a container declared in an outer scope that holds addresses. In an inner scope I created an object on the stack and stored the address of that object in the holder.
After the inner scope had escaped, I checked to see if I could still access methods and properties of the removed instance. To my surprise it worked without any problem.
Is there a simple way to combat this? Is there a way I can clear deleted pointers from the list?
example:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int*> holder;
    {
        int inside = 12;
        holder.push_back(&inside); // store the address of a stack variable
    }
    std::cout << "deleted variable: " << *holder[0] << std::endl; // undefined behaviour: "inside" is gone
}
Is there a simple way to combat this?
Sure, there are a number of ways to avoid this sort of problem.
The easiest way would be to not use pointers at all -- pass objects by value instead. i.e. In your example code, you could use a std::vector<int> instead of a std::vector<int *>.
If your objects are not copy-able for some reason, or are large enough that you think it will be too expensive to make copies of them, you could allocate them on the heap instead, and manage their lifetimes automatically using shared_ptr or unique_ptr or some other smart-pointer class. (Note that passing objects by value is more efficient than you might think, even for larger objects, since it avoids having to deal with the heap, which can be expensive... and modern CPUs are most efficient when dealing with contiguous memory. Finally, modern C++ has various optimizations that allow the compiler to avoid actually doing a data copy in many circumstances)
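A sketch of both suggestions (std::make_unique assumes C++14):

#include <memory>
#include <vector>

void Example()
{
    std::vector<int> values;                  // owns its ints by value; nothing can dangle
    values.push_back(12);

    std::vector<std::unique_ptr<int>> owned;  // heap objects whose lifetime is managed automatically
    owned.push_back(std::make_unique<int>(12));
}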
In general, retaining pointers to stack objects is a bad idea unless you are 100% sure that the pointer's lifetime will be a subset of the lifetime of the stack object it points to. (and even then it's probably a bad idea, because the next programmer who takes over the code after you've moved on to your next job might not see this subtle hazard and is therefore likely to inadvertently introduce dangling-pointer bugs when making changes to the code)
After the inner scope had escaped, I checked to see if I could still access methods and properties of the removed instance. To my surprise it worked without any problem.
That can happen if the memory where the object was hasn't been overwritten by anything else yet -- but definitely don't rely on that behavior (or any other particular behavior) if/when you dereference an invalid pointer, unless you like spending a lot of quality time with your debugger chasing down random crashes and/or other odd behavior :)
Is there a way I can clear deleted pointers from the list?
In principle, you could add code to the objects' destructors that would go through the list and look for pointers to themselves and remove them. In practice, I think that is a poor approach, since it uses up CPU cycles trying to recover from an error that a better design would not have allowed to be made in the first place.
Btw this is off topic but it might interest you that the Rust programming language is designed to detect and prevent this sort of error by catching it at compile-time. Maybe someday C++ will get something similar.
There is no such thing as a deleted pointer. A pointer is just a number representing some address in your process's virtual address space. Even if the stack frame is long gone, the memory that was holding it is still available, since it was allocated when the thread started; so, technically speaking, it is still a valid pointer, valid in the sense that you can dereference it and get something. But since the object it was pointing to is already gone, the accurate term is dangling pointer. The moral is that if you have a pointer to an object in a stack frame, there is no way to determine whether it is valid or not, not even using functions like IsBadReadPtr (Win32 API, just for example). The best way to prevent such situations is to avoid returning and storing pointers to stack objects.
However, if you wish to track your heap allocated memory and automatically deallocate it after it is no longer used, you could utilize smart pointers (std::shared_ptr, boost::shared_ptr, etc).
I was just wondering if there is any benefit in initializing a pointer to NULL, or in setting it to NULL after pointer deletion.
I've read on a few forums that setting to NULL after deletion is unnecessary, and that some compilers do not even consider the line. If it changes nothing, why do some people use it?
After
delete ptr;
the pointer object ptr probably still holds the same value it had before. It now points to a nonexistent object, so attempting to dereference it, or even to refer to its value, has undefined behavior. If you accidentally do something like:
delete ptr;
// ...
*ptr = 42;
it's likely to quietly clobber memory that you no longer own, or that may have been reallocated to some other object.
Setting the pointer to null:
delete ptr;
ptr = NULL; // or 0 or nullptr
means that an accidental attempt to dereference the pointer is more likely to cause your program to crash. (Crashing is a good thing in this context.)
Stealing Casey's comment:
C++11 §3.7.4.2 Deallocation Functions [basic.stc.dynamic.deallocation] para 4:
If the argument given to a deallocation function in the standard library is a pointer that is not the null pointer value (4.10), the deallocation function shall deallocate the storage referenced by the pointer, rendering invalid all pointers referring to any part of the deallocated storage. The effect of using an invalid pointer value (including passing it to a deallocation function) is undefined.
If the pointer object is at the very end of its lifetime, you needn't bother, since there's no possibility of accidentally using it:
{
int *ptr = new int(42);
// ...
delete ptr;
}
And as others have said, you're probably better off using some higher-level construct like smart pointers anyway.
If you initialise it to null and then immediately assign some other value to it, or if you assign null to it (after deletion) and then immediately let it go out of scope, it's pretty pointless.
However, if there's any chance that some other code might want to use the object it's pointing at while the pointer still exists and the object may not exist, then you will want to be able to signify to that code that nothing is there. If the pointer is set to null, the code can check for null before attempting to dereference the pointer.
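In code, that check is just (a sketch; the names are hypothetical):

struct Widget { void Refresh() {} };

void Redraw(Widget* widget)
{
    if (widget != NULL)
        widget->Refresh();   // only dereference when the object actually exists
    // else: nothing is there; skip or handle the "absent" case
}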
There is no benefit from an optimising compiler's point of view: the redundant stores will be eliminated. Modern compilers will be able to see that the value isn't used and remove them.
There may be benefit from the human reader's point of view: it may make the code easier to understand and may help reduce bugs caused by using uninitialised or freed data. In some cases initialising to NULL can actually hide issues, though, as most compilers will warn if you try to use a value without initialising it first.
My personal opinion, though, is that you should initialise the variable properly, with its final value, as close to declaring it as possible, and let the compiler warn you if you miss an execution path. Then you shouldn't be able to use a variable once its value has been freed, because it will also have gone out of scope. Freeing data via smart pointers helps here.
Just to clarify the other answers, if you attempt to write to or read from memory referred to by an uninitialized pointer, or a pointer referring to memory that has been deleted or freed, the program may not crash, or may crash long after the write/read, making it hard to find the bug.
If you do the same with a null pointer, the program will (probably) crash immediately, making debugging easier.
It is good coding practice to always initialize your pointers to something. In C++, the value of an uninitialized pointer is undefined, so if you have a single line like:
int* p;
p takes on the value of whatever happened to be in the memory p takes up (not what it points to, but the memory for the pointer value itself). There are times when it makes sense to initialize a pointer to NULL, but that really depends on your code.
As for setting NULL after deletion, it's a habit a few people follow. The benefit is that you can call delete on NULL without problems, whereas a double delete on a pointer could cause undefined behavior. Before the idea of RAII was popular, some people followed the "better safe than sorry" method of calling delete on multiple code paths. In those cases, if you don't set pointer to NULL, you might accidentally delete a pointer twice, so setting it to NULL mitigates that problem. Personally, I think it's a sign of poor design if your pointers don't have proper ownership, and you have to guess at when to call delete.
On modern hardware and operating systems, a null pointer is an invalid pointer. Therefore, when you try to access it, your program will be shut down by either the hardware or the operating system. This helps because it guarantees that your program will not attempt to access memory that was freed earlier, cause undefined behavior, and get away with it.
It can also help with debugging, as you can then see if that memory has been freed at your breakpoint.
But ideally, a good C++ programmer should try to avoid using raw pointers and use RAII whenever possible.
Initializing a pointer to NULL/nullptr, in any context, is generally a good idea, because a dangling pointer can refer to actual memory. If you nullify a pointer after its use and then dereference it, modern platforms will usually crash cleanly.
Setting a pointer to null after you're done with it can be a good idea for the same reason, but I personally don't think that it's especially useful if you're not keeping the variable around, like when it's a local variable.
Even better practice, though, is to avoid raw pointers altogether. C++11 provides unique_ptr, shared_ptr and weak_ptr (in <memory>) that wrap pointers with those three semantics. Using them, you never need to delete anything manually and you almost never need to set anything to nullptr yourself.