I'm facing some conceptual issues with dynamic memory allocation. Firstly, if I write the following piece of code
int *p = NULL;
delete p;
why do I get no error? I'm trying to delete the pointer (on the stack) which is not pointing to anything. Also, if I write the following statements
int *p = new int;
p = NULL;
delete p;
I again get no compile-time or run-time error. Why?
Moving on, if I write the following code, I get a runtime error
int *p = new int;
p = NULL;
delete p;
delete p;
Why? And if I write the following code, I get no error
int *p = NULL;
delete p;
delete p;
Why? Can anyone explain conceptually the reasons behind this?
I assume that in your third example you meant to write
int *p = new int;
delete p;
delete p;
Formally this causes undefined behaviour, which means that anything could happen. In practice you are probably using a memory allocator that checks whether the pointer you are deleting points within its free memory pool.
Others already pointed out that deleting a null pointer doesn't cause an error by definition, so it doesn't matter how many times you do it.
Passing a null pointer to the delete operator is a no-op. The standard says so:
5.3.5/2
In either alternative [delete and delete[]], if the value of the operand of delete is the null pointer the operation has no effect.
Consider an object that owns a pointer to another object. Usually, when the destructor of the owning object is run, it would clean up the memory for the owned object by deleting it. But in the case where the owned object could also be null, what would we do to clean up the memory? One option would be to wrap every single delete in an "if (X) delete x" kind of wrapper. But that's horrendously noisy, for no real added benefit. Therefore, the delete operator does it for you.
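A minimal sketch of that pattern (the class and member names here are mine, purely illustrative): because deleting a null pointer is a no-op, the destructor needs no null check.

#include <string>

// Hypothetical owning class for illustration.
class Document {
public:
    Document() : contents(0) {}                       // owns nothing yet
    explicit Document(const std::string& s) : contents(new std::string(s)) {}

    ~Document() {
        // No "if (contents)" wrapper needed: delete on null is a no-op.
        delete contents;
    }

private:
    std::string* contents;   // may legitimately be null

    // Copying omitted for brevity; a real class would have to handle it.
    Document(const Document&);
    Document& operator=(const Document&);
};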
"I'm trying to delete the pointer (on the stack) which is not pointing to anything."
This is not true. You cannot delete from the stack. With delete you delete memory blocks on the heap whose address is stored in a pointer. The pointer itself is a stack variable.
In every case you are only deleting a null pointer, which by definition is always "safe" because it is a no-op (the C++ standard explicitly says so).
In your second and third examples, you are reassigning a new value (the null pointer) to the pointer before deleting it, which means that you are leaking the previously allocated integer. This is something that should not normally happen (in this case you won't die from leaking a single integer, but it's not a good thing).
The double deletions in the third and fourth examples are normally serious programming errors, but they are "harmless" in your examples because the deleted pointer is the null pointer (so it's a no-op).
Going a bit O/T:
Note that I have put "safe" and "harmless" in quotes above for good reason. I personally disagree with Mr. Stroustrup's design decision here.
Making the deletion of a null pointer a "harmless no-op" is actually not a very good idea, even if the intent was probably good. Mr. Stroustrup even goes further by allowing delete to set the pointer to the null pointer, and saying he wished that implementations actually did that (luckily, no implementation that I know of does!).
In my opinion, every object that was allocated should be deleted exactly once, no less and no more often.
When and how often a well-behaved, non-broken program may (and must) delete a pointer is exactly defined; it is not a random, unknown thing. Deletion must happen exactly once, and the program must be exactly aware of it, because it must be certain whether or not an object is valid (it is illegal to use the object if it isn't valid!).
Setting a pointer to the null pointer after deleting the object will cause a fault when the deleted object is dereferenced afterwards (this is a good thing), but it does not protect from double deletion. Instead, it hides this serious programming error, ignoring it silently.
If a program deletes a pointer twice, then the program logic is broken; it is not working properly. This is not something that can be ignored; it must be fixed. Therefore, such a program should crash. Allocators usually detect double deletion, but by resetting a pointer to the null pointer, one has effectively disabled this detection mechanism.
If one chooses to reset a pointer after deleting it, one should (in my opinion) set it to an invalid non-null value, for example (T*)1 or (T*)-1. This will guarantee that both dereferencing and deleting the pointer will crash on the first occasion.
Nobody likes to see program crashes. But crashing early, at the first occasion, is a good thing compared to an incorrect program logic continuing for an indeterminate time, and possibly crashing or silently corrupting data at a random occasion.
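A sketch of that suggestion (the function name is mine): after deleting, park the pointer at a deliberately invalid, non-null address, so that a later dereference or a later delete faults at the first opportunity. Formally that fault is undefined behavior, but on typical platforms it traps immediately, which is exactly the "crash early" effect argued for above.

template <typename T>
void poison_delete(T*& p) {
    delete p;
    p = reinterpret_cast<T*>(1);   // not null, not a valid object address
}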
I think that when you try to delete the pointer, you are actually deleting the place in memory of the object that the pointer points to, not the pointer variable itself. Note that something like
int *p = NULL;
delete &p;
would be wrong: &p is the address of the stack variable p itself, which was never allocated with new, so deleting it is undefined behavior.
The inner implementation is transparent to us programmers. As you can see, deleting a NULL pointer may be harmless, but generally you should avoid it. You may have seen advice like "please do not re-delete dynamic pointers".
I always wondered why automatic setting of the pointer to NULL after delete is not part of the standard. If this were taken care of, many of the crashes due to invalid pointers would not occur. But having said that, I can think of a couple of reasons why the standard would have restricted this:
Performance:
An additional instruction could slow down the delete performance.
Could it be because of const pointers?
Then again, the standard could have done something for this special case, I guess.
Does anyone know exact reasons for not allowing this?
Stroustrup himself answers. An excerpt:
C++ explicitly allows an implementation of delete to zero out an lvalue operand, and I had hoped that implementations would do that, but that idea doesn't seem to have become popular with implementers.
But the main issue he raises is that delete's argument need not be an lvalue.
First, setting the pointer to null would require a variable stored in memory. It's true that you usually have the pointer in a variable, but sometimes you might want to delete an object at a just-calculated address. That would be impossible with a "nullifying" delete.
Then comes performance. You might have written the code in such a way that the pointer goes out of scope immediately after delete is done. Filling it with null would just be a waste of time. And C++ is a language with a "don't need it? then you don't have to pay for it" ideology.
If you need safety, there is a wide range of smart pointers at your service, or you can write your own - better and smarter.
You can have multiple pointers pointing to that memory. It would create a false sense of security if the pointer you specified for the delete got set to null, but all the other pointers did not. A pointer is nothing more than an address, a number. It might as well be an int with a dereference operation. My point is you would have to also scan every single pointer to find those that are referencing the same memory you just deleted, and null them out as well. It would be computationally intense to scan all the pointers for that address and null them out, because the language is not designed for that. (Although some other languages structure their references to accomplish a similar goal in a different way.)
A pointer can be saved in more than one variable, setting one of these to NULL would still leave invalid pointers in the other variables. So you don't really gain much, you are more likely creating a false sense of security.
Besides of that, you can create your own function that does what you want:
template<typename T>
void deleten(T *&ptr) {
    delete ptr;
    ptr = NULL;
}
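For illustration, using it might look like this:

int* p = new int(42);
deleten(p);   // frees the int and sets p to NULL
deleten(p);   // now harmless: delete on the null pointer is a no-op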
Because there isn't really any need to, and because it would require delete to take a pointer-to-pointer rather than just a pointer.
delete is used mostly in destructors, in which case setting a member to NULL is pointless. A few lines later, at the closing }, the member no longer exists. In assignment operators, a delete is typically followed by an assignment anyway.
Also, it would render the following code illegal:
T* const foo = new T;
delete foo;
Here's another reason; suppose delete does set its argument to NULL:
int *foo = new int;
int *bar = foo;
delete foo;
Should bar get set to NULL? Can you generalize this?
If you have an array of pointers, and your next action is to delete the array itself, then there is no point setting each element to null when the memory is about to be freed. If you want it to be null... write null to it :)
C++ allows you to define your own operator new and delete so that for instance they would use your own pool allocator. If you do this then it is possible to use new and delete with things that are not strictly addresses but say indexes in your pool array. In this context the value of NULL (0) might have a legal meaning (referring to the first item in the pool).
So having delete automatically set its argument to NULL wouldn't always mean setting the value to an invalid one; the invalid value may not always be NULL.
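For illustration only, here is a minimal sketch of such a class-scoped pool (the class name Particle and every detail below are mine, not from the answer). Since operator new must return an address, this doesn't literally reproduce the index-based scheme described above, but it shows the kind of allocator in which the "first item" of the pool is a perfectly legal allocation.

#include <cassert>
#include <cstddef>
#include <new>

class Particle {
public:
    double x, y, z;

    static void* operator new(std::size_t size) {
        assert(size <= sizeof(Slot));            // this pool only serves Particle
        if (!initialized) { init(); initialized = true; }
        if (!freeHead) throw std::bad_alloc();   // pool exhausted
        Slot* s = freeHead;
        freeHead = freeHead->next;               // pop a slot off the free list
        return s;                                // note: may be the very first slot
    }

    static void operator delete(void* p) {
        if (!p) return;                          // mirror built-in delete's null no-op
        Slot* s = static_cast<Slot*>(p);
        s->next = freeHead;                      // push the slot back on the free list
        freeHead = s;
    }

private:
    union Slot {
        Slot*  next;        // used while the slot sits on the free list
        double align[3];    // big enough, and aligned, for one Particle
    };

    static void init() {
        for (int i = 0; i + 1 < PoolSize; ++i) pool[i].next = &pool[i + 1];
        pool[PoolSize - 1].next = 0;
        freeHead = &pool[0];
    }

    static const int PoolSize = 64;
    static Slot  pool[PoolSize];
    static Slot* freeHead;
    static bool  initialized;
};

Particle::Slot  Particle::pool[Particle::PoolSize];
Particle::Slot* Particle::freeHead = 0;
bool            Particle::initialized = false;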
Philosophy of C++ is "pay for it only if you use it". I think it may answer your question.
Also, sometimes you could have your own heap which recovers deleted memory, or a pointer that is not stored in any variable at all. Or a pointer stored in several variables - delete could zero just one of them.
As you can see, this has many issues and possible problems.
Setting the pointer to NULL automatically would not solve most of the issues of bad pointer usage. The only crash it would avoid is a double delete. What if you call a member function through such a pointer? It would still crash (assuming that it accesses member variables). C++ does not stop you from calling any function through a NULL pointer, nor should it, from a performance point of view.
I see people giving weird answers to this question.
ptr = NULL;
How can such a simple statement cause a performance delay?
Another answer says that we can have multiple pointers pointing to the same memory location. Surely we can. In that case, a delete operation on one pointer would set only that pointer to NULL (if delete did set the pointer to NULL), and the other pointers would remain non-NULL, pointing to a memory location which has been freed.
The solution for this should have been that the user deletes all pointers pointing to the same location, and that delete internally checks whether the memory has already been freed; if so, it does not free it again but only sets the pointer to NULL.
Stroustrup could have designed delete to work in this manner. He thought programmers would take care of this themselves, so he left it out.
I am learning about pointers in C++ currently, in college. I have coded a program that is a binary tree of objects that point to a linked list of sub-objects, if I am even wording that correctly. Anyway, my program seems to work correctly, but I am having trouble wrapping my head around how to test pointer deletion.
For instance, my delete function for single object of the binary tree is:
void EmployeeRecord::destroyCustomerList()
{
    if (m_oCustomerList != NULL)
    {
        delete m_oCustomerList;
        m_oCustomerList = NULL;
    }
}
When printing my tree, everything populates and is taken off correctly (meaning the tree is kept intact through every removal of a node)...but how do I confirm what happens to the deallocated memory? I know that since I am setting the pointer *m_oCustomerList to NULL, that I can test for a NULL value on a previously populated object, but what happens to the actual memory?
I am using Visual Studio/C++ and have read that the debugger will use a code starting at 0xCC for deallocated memory...but I can't seem to figure out how to use that information.
Note that your code
void EmployeeRecord::destroyCustomerList()
{
    if (m_oCustomerList != NULL)
    {
        delete m_oCustomerList;
        m_oCustomerList = NULL;
    }
}
simplifies to:
void EmployeeRecord::destroyCustomerList()
{
    delete m_oCustomerList;
    m_oCustomerList = NULL;
}
It is safe to invoke the delete operator on a null pointer in C++. It does nothing. In other words, the check for null is already "built in".
Once you delete an object, it no longer exists, and the pointer to that object becomes an indeterminate value (so it's not a bad idea to null out all copies of that pointer).
What really happens to the memory in actual C++ implementations, rather than in the abstract sense, is that it continues to exist at the same address, but is marked as free, so that it can be allocated for another purpose. An allocation request coming from the program (possibly a completely unrelated module) or possibly from another program in the system, could obtain that memory for its own use.
Any uses of a pointer to an object which no longer exists are "undefined behavior". Functions for safely verifying such a pointer do exist, but they are very platform-specific and rarely perfect.
The problem is that whereas it is not particularly hard for an implementation to confirm that a pointer is bad, it is not possible to confirm that a pointer is good. We can walk the internal memory data structures of the memory allocator to determine that some pointer refers to free storage. But what if the storage is subsequently allocated? Then the pointer no longer refers to free storage. But it does not refer to the original object which was allocated, either! This is known as an "ABA ambiguity": because some A changed into a B, but then back into A, indistinguishable from the original A.
Approaches exist to solve the ABA ambiguity (if not completely, then at least partially). For instance, pointers can be made "fat" so that they have an extra field in addition to the address bits. The field could contain a sequence number which is used to stamp the pointers that are returned from the allocator. Now when an object is deleted and reallocated, the new pointer to the same location has a different sequence number: we have ABA'. The pointer A has gone bad, making it B, but when it is resurrected it comes back as A'. If we ask the system to validate A, it will correctly determine that A is bad, because it does not have the expected sequence number. The correct, valid pointer to the object is A', which does not match A.
However, sequence number fields are only so many bits wide and they will wrap around eventually. So the ABA problem has not really been solved. The validation of good versus bad pointers has only been made substantially more reliable. To absolutely deal with the ABA problem, the system must always hand out new pointers which are not equal to any pointers which could still be in use. This means never actually freeing anything (thereby running out of memory) or implementing garbage collection. (Meaning that delete actually does nothing: deleted objects are destructed, but stick around in memory until they are garbage-collected, which happens when the program no longer remembers any copies of the pointer. At that point, the program no longer remembers A, and so A can be re-introduced, and there is no ABA problem.)
To make all pointers "fat", you have to change the entire toolchain and runtime: compilers, libraries, et cetera. There are further difficulties because large programs tend to have multiple memory allocators. If you ask the wrong allocator "is this pointer valid", all it can say is "this pointer is not from my arena". Another approach you can do is to invent your own pointers and implement them as smart pointers in C++. Your pointers can support an is_valid method which tries to be as reliable as possible (dealing with the ABA problem somehow: either partially with some sequence numbers and such, or by implementing your own garbage collection scheme.)
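A toy illustration of the sequence-number idea, assuming a single fixed-size arena of ints (every name below is invented for this sketch):

#include <cassert>

struct Slot {
    int      value;        // the stored "object" (an int, for simplicity)
    unsigned generation;   // bumped each time the slot is handed out again
    bool     inUse;
};

static Slot arena[16];     // zero-initialized: all free, generation 0

// The "fat pointer": address bits plus the generation it was issued with.
struct CheckedPtr {
    Slot*    slot;
    unsigned generation;

    bool is_valid() const {
        // Stale if the slot was freed, or freed and reallocated (A vs. A').
        return slot != 0 && slot->inUse && slot->generation == generation;
    }
};

CheckedPtr arena_alloc() {
    for (int i = 0; i < 16; ++i) {
        if (!arena[i].inUse) {
            arena[i].inUse = true;
            ++arena[i].generation;   // the resurrected slot is A', never A again
            CheckedPtr p = { &arena[i], arena[i].generation };
            return p;
        }
    }
    CheckedPtr none = { 0, 0 };      // arena exhausted
    return none;
}

void arena_free(CheckedPtr p) {
    assert(p.is_valid());            // catches double frees and stale frees
    p.slot->inUse = false;
}

After arena_free(a), a.is_valid() is false even when the same slot is immediately handed out again, because the new handle carries a higher generation. As noted above, the counter is finite and will eventually wrap, so this narrows the ABA window rather than closing it.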
Accessing deleted memory is undefined behaviour by the standard. For instance, if this was a multithreaded application (or some other process had injected a thread into your application) then a new allocation could allocate the memory you just deallocated before you are able to "verify" it.
Once you delete your memory and set your pointer to NULL, you no longer have access to that memory even if you want it, so there is no way to verify that it is really gone. However, if you did something wrong and the memory was never deleted, that would constitute a memory leak, which would cause your program to increase the amount of RAM it uses; you could see this as a symptom of a pointer not being properly disposed of.
You will probably learn later that you often do not have to worry about deleting your pointers yourself, because of std::shared_ptr, which deletes your object when the last pointer to it goes out of scope. This is safer, because you will also learn that an exception can cause the code performing your delete to never run, leaving a memory leak.
...
...
delete m_oCustomerList;
// Try using the deleted pointer here.
// This is undefined behavior; in a debug build it will often
// crash, which suggests the pointer really was freed.
m_oCustomerList->someStrMemberVariable = "This will fail";
...
...
Needless to say, don't do this in the actual code. Hope this helps.
#include <iostream>

int main()
{
    using std::cout;
    int *p = new int;
    *p = 10;
    cout << *p << "\t" << p << "\n";
    delete p;
    cout << *p << "\t" << p << "\n";
    return 0;
}
Output:
10 0x237c010
0 0x237c010
Here, after deleting p, why does the pointer p retain its value? Doesn't delete free the pointer p?
What exactly is meant by 'freeing the pointer'?
Does 'delete p' simply mean '*p = 0'? (That is what the output seems to suggest.)
Here, after deleting p, why does the pointer p retain its value?
It's how the language is designed. If you want the pointer you hold to be zeroed, you'll need to assign it zero yourself. The pointer p is another piece of memory, separate from the allocation/object it points to.
Doesn't delete free the pointer p?
It calls the destructor of the object and returns the memory to the system (like free). If it is an array (delete[]), the destructors of all elements are called first, then the memory is returned.
What exactly is meant by 'freeing the pointer'?
When you want a piece of memory from the system, you allocate it (e.g. using new). When you are finished using it, you return it using the corresponding free/delete call. It's a resource, which you must return. If you do not, your program will leak (and nobody wants that).
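To make the delete[] point from above concrete, a small experiment might look like this (the class name is mine):

#include <iostream>

struct Tracer {
    ~Tracer() { std::cout << "destructor runs\n"; }
};

int main() {
    Tracer* arr = new Tracer[3];   // allocates and constructs three Tracers
    delete[] arr;                  // prints "destructor runs" three times,
                                   // then returns the block to the system
    return 0;
}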
In order to understand what freeing memory means, you must first understand what allocating memory means. What follows is a simplified explanation.
There exists memory. Memory is a large blob of stuff that you could access. But since it's global, you need some way to portion it out. Some way to govern who can access which pieces of memory. One of the systems that governs the apportionment of memory is called the "heap".
The heap owns some quantity of memory (some is owned by the stack and some is owned by static data, but nevermind that now). At the beginning of your program, the heap says that you have access to no heap-owned memory.
What new int does is twofold. First, it goes to the heap system and says, "I want a piece of memory suitable to store an int in." You get back a pointer to exactly that: a piece of the heap, into which you can safely store and retrieve exactly one value of type int.
You are now the proud owner of one int's worth of memory. The heap guarantees that as long as its rules are followed, whatever you put there will be preserved until you explicitly change it. This is the covenant between you and the almighty heap.
The other thing new int does is initialize that piece of the heap with an int value. In this case, it is default-initialized, which for an int means no initialization is actually performed, because no value was passed (new int(5) would initialize it with the value 5).
From this point forward, you are legally allowed to store exactly one int in this piece of memory. You are allowed to retrieve the int stored there. And you're allowed to do one other thing: tell the heap that you are finished using that memory.
When you call delete p, two things happen. First, the object p points to is deinitialized. Again, because it is an int, nothing happens. If this were a class, its destructor would be called.
But after that, delete goes out to the heap and says, "Hey heap: remember this pointer to an int you gave me? I'm done with it now." The heap system can do whatever it wants. Maybe it will clear the memory, as some heaps do in debug-builds. In release builds however, the memory may not be cleared.
Of course, the reason why the heap can do whatever it wants is because, the moment you delete that pointer, you enter into a new agreement with the heap. Previously, you asked for a piece of memory for an int, and the heap obliged. You owned that memory, and the heap guaranteed that it was yours for as long as you wanted. Stuff you put there would remain there.
After you had your fun, you returned it to the heap. And here's where the contract comes in. When you say delete p, for any object p, you are saying the following:
I solemnly swear not to touch this memory address again!
Now, the heap might give that memory address back to you if you call new int again. It might give you a different one. But you only have access to memory allocated by the heap during the time between new and delete.
Given this, what does this mean?
delete p;
cout << *p << "\t" << p << "\n";
In C++ parlance, this is called "undefined behavior". The C++ specification has a lot of things that are stated to be "undefined". When you trigger undefined behavior anything can happen! *p could be 0. *p could be the value it used to be. Doing *p could crash your program.
The C++ specification is a contract between you and your compiler/computer. It says what you can do, and it says how the system responds. "Undefined behavior" is what happens when you break the contract, when you do something the C++ specification says you aren't supposed to. At that point, anything can happen.
When you called delete p, you told the system that you were finished using p. By using it again, you were lying to the system. And therefore, the system no longer has to abide by any rules, like storing the values you want to store. Or continuing to run. Or not spawning demons from your nose. Or whatever.
You broke the rules. And you must suffer the consequences.
So no, delete p is not the equivalent of *p = 0. The latter simply means "set 0 into the memory pointed to by p." The former means "I'm finished using the memory pointed to by p, and I won't use it again until you tell me I can."
Here, after deleting p, why does the pointer p retain its value? Doesn't delete free the pointer p?
It frees the memory the pointer points to (after calling any appropriate destructor). The value of the pointer itself is unchanged.
What exactly is meant by 'freeing the pointer'?
As above - it means freeing the memory the pointer points to.
Does 'delete p' simply mean '*p = 0'? (That is what the output seems to suggest.)
No. The system doesn't have to write anything to the memory that's freed, and if it does write something it doesn't have to write 0. However, the system does generally have to manage that memory in some way, and that might actually write to the area of memory that the pointer was pointing to. Also, the just-freed memory can be allocated to something else (and in a multi-threaded application, that could happen before the delete operation even returns). The new owner of that memory block can of course write whatever they want to that memory.
A pointer that is pointing to a freed block of memory is often known as a 'dangling' pointer. It is an error to dereference a dangling pointer (for read or write). You will sometimes see code immediately assign a NULL or 0 to a pointer immediately after deleting the pointer, sometimes using a macro or function template that both deletes and clears the pointer. Note that this won't fix all bugs with dangling pointers, since other pointers may have been set to point to the memory block.
The modern method of dealing with these kinds of problems is to avoid using raw pointers altogether in favor of using smart pointers such as shared_ptr or unique_ptr.
delete p simply frees the memory allocated during the call to the new operator. It does not change the value of the pointer, and it does not necessarily change the content of the deallocated memory.
(Note the following isn't how it actually works so take it with a grain of salt.)
Inside its implementation, new keeps a list of all available memory. When you wrote "int *p = new int;", it cut an int-sized block off its list of available memory and gave it to you. When you run "delete p;", the block is put back in the list of available memory. If your program called new 30 times without calling delete at all, you would get 30 different int-sized chunks from new. If you called new then delete 30 times in a row, you might (but not necessarily) get the same int-sized block every time. This is because you said you weren't using it any more when you called delete, so new was free to reuse it.
TL;DR: delete notifies new that this section of memory is available again; it doesn't touch your variable.
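A toy model of that description, under the same grain-of-salt caveat (everything here is invented for illustration):

// A pretend allocator with 30 int-sized blocks and a list of what's free.
static int  chunks[30];    // the allocator's pool
static bool inUse[30];     // zero-initialized: every block starts out free

int* toy_new() {
    for (int i = 0; i < 30; ++i) {
        if (!inUse[i]) {           // cut a free block off the list...
            inUse[i] = true;
            return &chunks[i];     // ...and hand it to the caller
        }
    }
    return 0;                      // pool exhausted
}

void toy_delete(int* p) {
    if (!p) return;                // deleting null is a no-op, as with real delete
    inUse[p - chunks] = false;     // put the block back; p itself is untouched
}

Calling toy_new() then toy_delete() thirty times in a row keeps handing back the same first block, while thirty toy_new() calls without any toy_delete() yield thirty different blocks, matching the behaviour described above.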
If I have a pointer pointing to a specific memory address on the heap and I would like this same pointer to point to another memory address, should I first delete the pointer? But in that case, am I actually deleting the pointer, or just breaking the reference (memory address) the pointer is pointing at?
So, in other words, if I delete a pointer, does this mean it doesn't exist any more? Or is it still there, but not pointing to where it was?
The syntax of delete is a bit misleading. When you write
T* ptr = /* ... */;
delete ptr;
You are not deleting the variable ptr. Instead, you are deleting the object that ptr points at. The value of ptr is unchanged and it still points where it used to, so you should be sure not to dereference it without first reassigning it.
There is no requirement that you delete a pointer before you reassign it. However, you should ensure that if you are about to reassign a pointer in a way that causes you to lose your last reference to the object being pointed at (for example, if this pointer is the only pointer in the program to its pointee), then you should delete it to ensure that you don't leak memory.
One technique many C++ programmers use to simplify the logic for when to free memory is to use smart pointers, objects that overload the operators necessary to mimic a pointer and that have custom code that executes automatically to help keep track of resources. The new C++0x standard, for example, will provide a shared_ptr and unique_ptr type for this purpose. shared_ptr acts like a regular pointer, except that it keeps track of how many shared_ptrs there are to a resource. When the last shared_ptr to a resource changes where it's pointing (either by being reassigned or by being destroyed), it then frees the resource. For example:
{
    shared_ptr<int> myPtr(new int);
    *myPtr = 137;
    {
        shared_ptr<int> myOtherPtr = myPtr;
        *myPtr = 42;
    }
}
Notice that nowhere in this code is there a call to delete to match the call to new! This is because the shared_ptr is smart enough to notice when the last pointer stops pointing to the resource.
There are a few idiosyncrasies to be aware of when using smart pointers, but they're well worth the time investment to learn about. You can write much cleaner code once you understand how they work.
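One such idiosyncrasy, as an example: two shared_ptrs must share ownership through each other, not merely point at the same raw address. A sketch:

#include <memory>

int main() {
    int* raw = new int(5);

    std::shared_ptr<int> a(raw);    // a's reference count now owns the int
    std::shared_ptr<int> b = a;     // correct: b shares a's count (now 2)

    // std::shared_ptr<int> c(raw); // WRONG: a second, independent count of 1;
    //                              // a and c would each delete the same int.

    return 0;
}   // b and a go out of scope; the int is deleted exactly once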
When you delete a pointer you release the memory allocated to the pointed-to object. So if you just want your pointer to point to a new memory location, you should not delete the pointer. But if you want to destroy the object it is currently pointing to and then point to a different object, then you should delete the pointer first.
xtofl, while being funny, is a bit correct.
If YOU 'new' a memory address, then you should delete it; otherwise leave it alone. Maybe you are thinking too much about it, but you can think about it like this: yes, the memory is always there, but if you put a fence around it, you need to take the fence down, or no one else can use it.
When you call delete you mark the memory pointed to by the pointer as free: the heap takes ownership of it and can reuse it. That's all; the pointer itself is usually unchanged.
What to do with the pointer depends on what you want. If you no longer need that memory block - use delete to free the block. If you need it later - store the address somewhere where you can retrieve it later.
To answer your question directly: this has been asked before. delete will delete what your pointer points to, but there was a suggestion by Bjarne Stroustrup that the value of the pointer itself can no longer be relied upon, particularly if it is an lvalue. However, that does not affect the ability to reassign it, so this would be valid:
T* ptr;
for( int i = 0; i < n; ++i )
{
    ptr = array[i];
    delete ptr;     // ptr is freely reassigned on the next iteration
}
if array holds n pointers, all of which had been allocated with new.
Memory management in C++ is best done with a technique called RAII, "resource acquisition is initialization".
What that actually means is that at the time you allocate the resource you immediately take care of its lifetime, i.e. you "manage" it by putting it inside some object that will delete it for you when it's no longer required.
shared_ptr is a technique commonly used where the resource will be used in many places and you do not know for certain which will be the last one to "release" it, i.e. no longer require it.
shared_ptr is often used in other places simply for its semantics, i.e. you can copy and assign them easily enough.
There are other memory management smart pointers, in particular std::auto_ptr, which will be superseded by unique_ptr, and there is also scoped_ptr. weak_ptr is a mechanism for obtaining a shared_ptr if one still exists somewhere, without holding a reference yourself. You call lock(), which gives you a shared_ptr to the memory, or a null one if all the current shared pointers have gone.
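For illustration, the weak_ptr behaviour just described looks roughly like this:

#include <iostream>
#include <memory>

int main() {
    std::weak_ptr<int> observer;               // holds no reference itself
    {
        std::shared_ptr<int> owner(new int(42));
        observer = owner;
        if (std::shared_ptr<int> p = observer.lock())
            std::cout << *p << "\n";           // prints 42: the object is alive
    }                                          // last shared_ptr released here
    if (!observer.lock())
        std::cout << "object is gone\n";       // lock() now yields a null one
    return 0;
}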
For arrays, you would not normally use a smart pointer but simply use vector.
For strings you would normally use the string class rather than think of it as a vector of char.
In short, you do not "delete a pointer", you delete whatever the pointer points to.
This is a classical problem: If you delete it, and someone else is pointing to it, they will read garbage (and most likely crash your application). On the other hand, if you do not and this was the last pointer, your application will leak memory.
In addition, a pointer may point to things that were not originally allocated by new, e.g. a static variable, an object on the stack, or into the middle of another object. In all those cases you're not allowed to delete whatever the pointer points to.
Typically, when designing an application, you (yes, you) have to decide which part of the application owns a specific object. It, and only it, should delete the object when it is done with it.
Some programmers like to set a pointer variable to null after releasing the pointee:
delete ptr;
ptr = 0;
If someone tries to release the pointee again, nothing will happen. In my opinion, this is wrong. Accessing a pointer after the pointee has been released is a bug, and bugs should jump in your face ASAP.
Is there an alternative value I could assign to a pointer variable that designates released pointees?
delete ptr;
ptr = SOME_MAGIC_VALUE;
Ideally, I would want Visual Studio 2008 to tell me "The program has been terminated because you tried to access an already released pointee here!" in debug mode.
Okay, it seems I have to do the checking myself. Anything wrong with the following template?
#include <cstdlib>
#include <iostream>

template <typename T>
void sole_delete(T*& p)
{
    if (p)
    {
        delete p;
        p = 0;
    }
    else
    {
        std::cerr << "pointee has already been released!\n";
        std::abort();
    }
}
No. Test for "0" when trying to delete something if you really want to warn or error out about it.
Alternatively, during development you could omit ptr = 0; and rely on valgrind to tell you where and when you're attempting a double free. Just be sure to put the ptr = 0; back for release.
Edit: Yes, people, I know C++ doesn't require a test around delete 0;.
I am not suggesting if (ptr != 0) delete ptr;. I am suggesting if (ptr == 0) { report the user error that the OP asked for } delete ptr;
Assign NULL after releasing a pointee, and before using the pointer, check it for NULL; if it is null, report the error yourself.
Is there an alternative value I could assign to a pointer variable that designates released pointees?
Ideally, I would want Visual Studio 2008 to tell me "The program has been terminated because you tried to access an already released pointee here!" in debug mode.
You very likely get this just by doing delete ptr. The runtime will catch you if you double-delete such a pointer.
Anyway, I don't think I have written ptr = NULL more than a handful of times in the last decade. Why would I do this? Such a pointer is certainly hidden within an object whose destructor will delete the object it refers to, and after that destructor has been invoked the pointer is gone, too.
And if some circumstances would require me to leave a pointer to hang around after the pointee has been deleted, I wouldn't set it to NULL simply because I would want the code to crash ASAP if I'd double-delete. Setting the pointer to NULL just masks an error.
Of course, all this doesn't mean that one wouldn't want a pointer that might be explicitly set to "nothing", and use NULL for that. But not to mask a double-deletion error.
No, calling delete on a null pointer is perfectly normal from the C++ point of view. Assigning some magic value would break code severely: you would now have to distinguish between null pointers, valid pointers, and magic-value pointers, and I guess it would be a huge mess.
If you really oppose deleting a null pointer you can have a separate boolean flag together with each pointer meaning that it has been deleted. Perhaps you could write a wrapper class for that.
If you just want to check allocations and deletions the easiest way is to write your own global operator new and operator delete and manually keep track of all pointers that are allocated and deallocated.
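A crude sketch of that bookkeeping (all details are mine: fixed-size, not thread-safe, and operator new[]/delete[] would need the same treatment; it deliberately uses malloc/free internally so the tracker cannot recurse into itself):

#include <cstdio>
#include <cstdlib>
#include <new>

static void* live[10000];    // table of currently allocated pointers
static int   liveCount = 0;

void* operator new(std::size_t size) {
    void* p = std::malloc(size ? size : 1);
    if (!p) throw std::bad_alloc();
    if (liveCount < 10000) live[liveCount++] = p;   // record the allocation
    return p;
}

void operator delete(void* p) throw() {
    if (!p) return;                                 // deleting null: no-op
    for (int i = 0; i < liveCount; ++i) {
        if (live[i] == p) {
            live[i] = live[--liveCount];            // forget it, then free it
            std::free(p);
            return;
        }
    }
    std::fprintf(stderr, "delete of untracked pointer %p\n", p);
    std::abort();                                   // double or bogus delete
}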
Of course, you can also use an existing tool that does that for you, e.g. Valgrind.
If you also want to protocol each pointer access, this gets hairy. You essentially have to either patch the executable or execute it in a virtual machine where each pointer access is redirected to your bookkeeping routine.
But once again, existing tools such as Valgrind already do that for you. In the case of Valgrind, your executable is run inside a virtual machine; other programs go the way of patching your application by modifying the byte code.
When you delete a pointer in debug mode, many compilers paint the bytes with some known values to mark the memory as "invalid" in case you try to read it. Of course genuine memory may happen to contain those bytes, so the allocator sets aside a bit extra to indicate whether the block you are reading is valid or not, and paints the bytes you do not directly access.
It is not wrong to call delete multiple times on the same pointer variable, only on the same pointee.
Maybe this isn't the best way to do this, but it's totally legal, of course:
T* array[N];
for( int i = 0; i < N; ++i )
{
    array[i] = new T;
}

T* ptr;
for( int i = 0; i < N; ++i )
{
    ptr = array[i];
    delete ptr;
}
and apart from not being the best way to do things, I am calling delete on the variable ptr multiple times, but on different addresses, and that is clearly not an error.
I think sharptooth already provided a valid answer, but I think he failed to spell it out explicitly:
If it is an error in your code to access a pointer variable after its object has been deleted via that pointer variable, then you have to add some checking yourself. (Possibly via some flag.)
Answering the question in question[sic].
No, there's no established value for released pointers.
I think any access to an invalid pointer (like NULL) should be noted - not only accessing them after release, which may never happen if no (non-NULL) initialization takes place. The debugger is bound to warn you when you try to access a null pointer - if it doesn't, you shouldn't be using it.
edit: end of answering the original question; rambling about double-delete
It really depends on the design whether delete on NULL is a bug waiting to happen. In many cases it's not. Perhaps you should use a "safe delete" where that is needed - or while debugging? Like this:
#include <stdexcept>

template <typename T>
void safe_delete(T*& ptr)
{
    if (ptr == 0)
        throw std::runtime_error("Deleting NULL!");
    delete ptr;
    ptr = 0;
}
There is no point.
I won't enter the apparently rather hot conversation going on, just point out an obvious fact: pointers are passed by copy.
With some code, it gives:
T* p = /* something */;
T* q = p;
delete q;
q = 0;    // p still points at the deleted object
Do you feel safe?
The problem is that you have no way to ensure that your magic value has been propagated to all pointers to the object.
This is like plugging a hole in a sieve and hoping it'll stop the water from pouring out.
In C++0x, you can
delete ptr;
ptr = nullptr;
Setting the pointer to a specific value will only affect this copy of the pointer, so it provides nearly no protection. If the program needs to be verifiably correct, there are smart pointer classes that track copies of the pointer and invalidate those, otherwise I'd just recommend a tool like Valgrind (on Linux) or Rational Purify (on Windows) that will let you check for memory access errors.
It’s not an error to delete a nullpointer; by definition it does nothing.
Generally it’s a Bad Idea™ to null pointer variables after delete, because the only effect it can have is to hide a bug that causes multiple deletion (with the pointer variable nulled, the second deletion will have no effect, instead of e.g. crashing).
Generally, nulling of pointers belongs, in my view, with all the other Microsoft’isms such as Hungarian notation and extensive use of macros.
It’s something that may once have had a good rationale, but which today, as of 2011, has only negative effects and is used out of sheer inertia: idea propagation of the kind Knuth once described for random generators, where an almost-worst-possible generator gained popularity and was incorporated as the default in umpteen language implementations and libraries, with most people thinking that such extensive usage meant it had to be at least reasonable.
However, having said that, for the person who leans towards the ultra-formally pedantic, it can be at least an emotionally satisfying idea to null pointers in e.g. a std::vector after delete. The reason is that the Holy Standard, ISO/IEC 14882, allows the std::vector destructor to do rather unholy things, such as copying the pointer values around, and in the formally pedantic view, even such copying of invalid pointer values incurs Undefined Behavior. Not that it is a practical concern: first of all, I know of absolutely no modern platform where the copying would have any ill effect, and secondly, so much code relies on standard containers behaving reasonably that they just have to; otherwise, nobody would use such an implementation.
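For those so inclined, that pedantic nulling would look something like this sketch (the names are mine):

#include <cstddef>
#include <vector>

struct Widget { /* ... */ };   // stand-in for whatever the vector holds

void destroy_all(std::vector<Widget*>& v) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        delete v[i];
        v[i] = 0;   // only so the container never copies an invalid pointer value
    }
    v.clear();
}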
Cheers & hth.