I have a quick question (I think it's quick). I have to check whether a pointer is NULL after deleting the data it points to. In my case I have data stored in fl->first and I want to clear it. After clearing it with delete fl->first I have to check whether fl->first is NULL. I have read in a lot of posts that
delete fl->first;
fl->first = NULL;
is not a good idea. Is there a better way, or is it okay in this case?
You should not use the NULL macro in C++; using the nullptr literal is preferable. This is because the macro is ambiguous in some cases, which may lead to confusion.
Besides that, you should typically avoid owning bare pointers, and thus deleting anything directly is usually not a good idea.
fl->first = nullptr; is the C++ way of expressing a null pointer (whether a null pointer is actually represented by the address 0 is implementation-dependent). In C, NULL is commonly tied to void *; C++ does not allow the same implicit conversions, which is why nullptr exists, and it is worth reading up on the details.
That is as simple as it gets.
The longer answer: with void * you can allocate memory for 1000 chars and then read and write ints through it without any type checking. To avoid this, and so that compilers can be more type safe, C++ is moving away from NULL and void *.
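As a minimal sketch of the ambiguity mentioned above (the print overloads here are made up purely for illustration): NULL is just an integer constant in many implementations, so it can pick an int overload, while nullptr always behaves as a pointer.
#include <cstddef>
#include <iostream>

void print(int)   { std::cout << "int overload\n"; }
void print(char*) { std::cout << "char* overload\n"; }

int main() {
    // print(NULL);  // picks the int overload (or is ambiguous), rarely what you meant
    print(nullptr);  // unambiguously calls the char* overload
    print(0);        // calls the int overload
}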
Yes, this is a good idea. The reason is that it avoids undefined behaviour (typically a crash) if you accidentally try to delete the same object again.
Example:
Foo* foo = new Foo(); // Create a new object
delete foo; // Deletes the object
delete foo; // Tries to delete an already deleted object: undefined behaviour, which will typically crash or corrupt the program
Fix:
Foo* foo = new Foo(); // Create a new object
delete foo; // Deletes the object
foo = 0; // Sets the pointer to 0 (nullptr)
delete foo; // Deleting a null pointer is a no-op, so the program will NOT crash
Is it always wise to set a pointer to NULL after a delete in legacy code without any smart pointers, to prevent dangling pointers? (Bad architectural design of the legacy code excluded.)
int* var = new int(100);
delete var;
var = NULL;
Does it also make sense in destructors?
In a getter, does it make sense to test for NULL as a second step?
Or is it undefined behaviour anyway?
Foo* getPointer() {
    if (m_var != NULL) { // <- is this wise?
        return m_var;
    }
    else {
        return nullptr;
    }
}
What about this formalism as an alternative? In which cases will it crash?
Foo* getPointer() {
    if (m_var) { // <-
        return m_var;
    }
    else {
        return nullptr;
    }
}
(Edit) Will the code in examples 3/4 crash if (A) NULL is used after delete, or (B) NULL is not used after delete?
Is it always wise to set a pointer to NULL after a delete in legacy code without any smart pointers, to prevent dangling pointers? (Bad architectural design of the legacy code excluded.)
int* var = new int(100);
// ...
delete var;
var = NULL;
Only useful if you test var afterward.
If the scope ends right after, or if you assign another value to var, setting it to null is unneeded.
Does it also make sense in destructors?
Nullifying members in a destructor is useless, as you cannot access them afterward without UB anyway (though it might help when using a debugger).
In a getter, does it make sense to test for NULL as a second step? Or is it undefined behaviour anyway?
[..]
[..]
if (m_var != NULL) and if (m_var) are equivalent.
The test is unneeded: if the pointer is nullptr you return nullptr,
and if the pointer is not nullptr you return that pointer, so your getter can simply be
return m_var;
Avoid writing code like this
int* var = new int(100);
// ... do work ...
delete var;
This is prone to memory leaks if "do work" throws, returns or otherwise breaks out of the current scope (it may not be the case right now, but it can become one later when "do work" needs to be extended or changed). Always wrap heap-allocated objects in RAII so that the destructor always runs on scope exit, freeing the memory.
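For instance, a minimal sketch of that advice using std::unique_ptr (assuming C++14; doWork is a hypothetical stand-in for the "do work" part):
#include <memory>

void doWork();  // hypothetical; may throw, return early, or be extended later

void example() {
    auto var = std::make_unique<int>(100);  // heap allocation owned by the smart pointer
    doWork();                               // even if this throws, ~unique_ptr runs and frees the int
    // no explicit delete: the memory is released automatically on scope exit
}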
If you do have code like this, then setting var to NULL or even better a bad value like -1 in a Debug build can be helpful in catching use-after-free and double-delete errors.
In case of a destructor:
Setting the pointer to NULL in a destructor is not needed.
In production code it's a waste of CPU time (writing a value that will never be read again).
In debug code it makes catching double-deletes harder. Some compilers fill deleted objects with a marker like 0xDDDDDDDD such that a second delete or any other dereference of the pointer will cause a memory access exception. If the pointer is set to NULL, delete will silently ignore it, hiding the error.
This question is really opinion-based, so I'll offer some opinions ... but also a justification for those opinions, which will hopefully be more useful for learning than the opinions themselves.
Is it always wise to use NULL after a delete in legacy code without any smartpointers to prevent dangling pointers? (bad design architecture of the legacy code excluded)
Short answer: no.
It is generally recommended to avoid raw pointers whenever possible, regardless of which C++ standard your code claims compliance with.
Even if you somehow find yourself needing to use a raw pointer, it is safer to ensure the pointer ceases to exist when no longer needed, rather than setting it to NULL. That can be achieved with scope (e.g. the pointer is local to a scope, and that scope ends immediately after delete pointer - which absolutely prevents subsequent use of the pointer at all). If a pointer cannot be used when no longer needed, it cannot be accidentally used - and does not need to be set to NULL. This also works for a pointer that is a member of a class, since the pointer ceases to exist when the containing object does i.e. after the destructor completes.
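A minimal sketch of that "limit the pointer's scope" idea (illustrative only):
void process() {
    {
        int* p = new int(42);  // p exists only inside this block
        // ... use *p ...
        delete p;
    }                          // p itself ceases to exist here
    // p is not visible here, so accidental use after delete is a compile error,
    // and there is nothing left to set to NULL
}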
The idiom of "set a pointer to NULL when no longer needed, and check for NULL before using it" doesn't prevent stupid mistakes. As a rough rule, any idiom that requires a programmer to remember to do something - such as setting a pointer to NULL, or comparing a pointer to NULL - is vulnerable to programmer mistakes (forgetting to do what they are required to do).
Does it also make sense in destructors?
Generally speaking, no. Once the destructor completes, the pointer (assuming it is a member of the class) will cease to exist as well. Setting it to NULL immediately before it ceases to exist achieves nothing.
If you have a class with a destructor that, for some reason, shares the pointer with other objects (i.e. the value of the pointer, and presumably the object it points at, remain valid after the destructor completes) then the answer may be different. But that is an exceedingly rare use case, and one which is usually better avoided, since it becomes more difficult to manage the lifetime of the pointer or the object it points at, and therefore easier to introduce obscure bugs. Setting a pointer to NULL when done is generally not a solution to such bugs.
In a getter, does it make sense to test for NULL as a second step? Or is it undefined behaviour anyway?
Obviously that depends on how the pointer was initialised. If the pointer is uninitialised, even comparing it with NULL gives undefined behaviour.
In general terms, I would not do it. There will presumably be some code that initialised the pointer. If that code cannot appropriately initialise the pointer, then that code should deal with the problem in a way that prevents your function being called. Examples may include throwing an exception or terminating program execution. That allows your function to safely ASSUME the pointer points at a valid object.
What about this formalism as an alternative? In which cases will it crash?
The "formalism" is identical to the previous one - practically the difference is stylistic. In both cases, if m_var is uninitialised, accessing its value gives undefined behaviour. Otherwise the behaviour of the function is well-defined.
A crash is not guaranteed in any circumstances. Undefined behaviour is not required to result in a crash.
If the caller exhibits undefined behaviour (e.g. if your function returns NULL the caller dereferences it anyway) there is nothing your function can do to prevent that.
The case you describe remains relatively simple, because the variable is declared in a local scope.
But look for example at this scenario:
struct MyObject
{
public:
    MyObject(int i) { m_piVal = new int(i); }
    ~MyObject() {
        delete m_piVal;
    }
public:
    static int *m_piVal;
};
int* MyObject::m_piVal = NULL;
You may have a double free problem by writing this:
MyObject *pObj1 = new MyObject(1);
MyObject *pObj2 = new MyObject(2);
//...........
delete pObj1;
delete pObj2; // Double free of the static pointer m_piVal
Or here:
struct MyObject2
{
public:
    MyObject2(int i) { m_piVal = new int(i); }
    ~MyObject2() {
        delete m_piVal;
    }
public:
    int *m_piVal;
};
when you write this:
MyObject2 Obj3 (3);
MyObject2 Obj4 = Obj3;
At destruction you will have a double free, because Obj3.m_piVal == Obj4.m_piVal (the default copy shares the pointer).
So there are cases that need special attention (implementing a smart pointer, a copy constructor, ...) to manage the pointer correctly.
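For the second example, one possible fix is to let a smart pointer own the int; here is a minimal sketch assuming C++14 (MyObject3 is just an illustrative name):
#include <memory>

struct MyObject3
{
    explicit MyObject3(int i) : m_piVal(std::make_unique<int>(i)) {}
    // No user-written destructor needed: the unique_ptr deletes the int exactly once.
    // Copying is disabled automatically (unique_ptr is move-only), so the accidental
    // pointer-sharing copy from the MyObject2 example cannot even compile.
    std::unique_ptr<int> m_piVal;
};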
Assume I have a pointer void* p, then after some passing in and out of functions, let's say p is now pointing to int. Then do I need to manually delete as delete static_cast<int*>(p)?
In most places people say a delete only happens when there is a new. But in this case, it's not; so does C++ itself remember to release that memory?
That all depends on how the int you're pointing to was allocated: you only delete what you new.
Correct (the int is new'd):
int* a = new int;
void* p = a;
//somewhere later...
delete static_cast<int*>(p);
Bad (the int is automatically managed):
int a = 0;
void* p = &a;
//somewhere later...
delete static_cast<int*>(p);
Answering the code from the comments, doing:
int* a = new int;
void* p = a;
delete p;
is never okay. You should never delete through a void*; it's undefined behavior.
Side note: in modern C++ you really shouldn't be using new or delete; stick with smart pointers or standard containers.
The short answer is: "It depends".
In most places people say a delete only happens when there is a new.
That's true so far as it goes. To avoid wasting resources and to ensure all destructors are called correctly, every new has to be balanced by a delete somewhere. If your code can follow several paths, you have to make sure that every path calls delete (if calling delete is appropriate).
This can get tricky when exceptions are thrown, which is one reason why Modern C++ programmers generally avoid using new and delete. Instead they use the smart pointers std::unique_ptr and std::shared_ptr along with the helper template functions std::make_unique<T> and std::make_shared<T> (see the SO question: What is a smart pointer and when should I use one?) to implement a technique known as RAII (Resource Acquisition Is Initialization).
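A minimal sketch of what that looks like in practice (assuming C++14 for std::make_unique):
#include <memory>

int main() {
    auto p = std::make_unique<int>(42);  // single owner; freed automatically
    auto q = std::make_shared<int>(7);   // shared ownership; freed when the last owner goes away
    // no new/delete anywhere: both destructors run on scope exit, on every code path
}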
But in this case, it's not …
Remember that the phrase ... when there is a new refers to the object the pointer points to, not the pointer itself. Consider the following code...
int *a = new int();
void *p = a;
if (SomeTest())
{
    delete a;
}
else
{
    a = nullptr;
}
// This line is needed if SomeTest() returned false
// and undefined (dangerous) if SomeTest() returned true
delete static_cast<int *> (p);
Is that last line of code needed?
The object that a and p both point to was created by calling new so delete has to be called on something. If the function SomeTest() returned false then a has been set to nullptr so calling delete on it won't affect the object we created. Which means we do need that last line of code to properly delete the object that was newed up in the first line of code.
On the other hand, if the function SomeTest() returned true then we've already called delete for the object via the pointer a. In that case the last line of code is not needed and in fact may be dangerous.
The C++ standard says that calling delete on an object that has already been deleted results in "undefined behaviour" which means anything could happen. See the SO question: What happens in a double delete?
does C++ itself remember to release that memory?
Not for anything created by calling new. When you call new you are telling the compiler "I've got this, I will release that memory (by calling delete) when appropriate".
do I need to manually delete
Yes: if the object pointed to needs to be deleted here and the void * pointer is the only pointer you can use to delete the object.
I went through this post and had a doubt. Is it a good practice to null an element of an object in its destructor?
The destructor will be called when the object goes out of scope, but do its elements need to be set to NULL in the destructor to ensure dangling pointers are not left?
After an object is destroyed, it ceases to exist. There is no point in setting its members to particular values when those values will immediately cease to exist.
The pattern of setting pointers to NULL when deleting the objects they point to is actively harmful and has caused errors in the past. Unless there's a specific reason the pointer needs to be set to NULL (for example, it is likely to be tested against NULL later) it should not be set to NULL.
Consider:
Foo* foo = getFoo();
if (foo != NULL)
{
    do_stuff();
    delete foo;
}
// lots more code
return (foo == NULL);
Now, imagine if someone adds foo = NULL; after the delete foo; in this code, thinking that you're supposed to do that. Now the code will give the wrong return value.
Next, consider this:
Foo* foo = getFoo();
Foo* bar = NULL;
if (foo != NULL)
{
    bar = foo;
    do_stuff(bar);
    delete bar;
    bar = NULL;
}
// lots more code
delete foo;
We always set pointers to NULL after we delete them, so this delete foo; must be safe, right? Clearly it's not. So setting pointers to NULL after you delete them is neither necessary nor sufficient.
Don't do it.
It's not necessary to set member elements to NULL in the destructor, as delete has been called on the owner object in order for that destructor to be called. It is the responsibility of the code that deletes an object to no longer try to access the contents of that object.
You are also using extra cycles clearing out that memory.
The point of doing this is to provide deterministic behavior in the event of a programming error.
If the deleted pointer is used, then the program will always fail on dereferencing a nullptr (not allowing the program to continue).
If the deleted pointer is set to nullptr and is then deleted again, the second attempt is a no-op (by design).
This is the normal pattern for pointers that are NOT deleted in a destructor. (Smart pointers are your friend, to avoid this situation entirely.)
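A minimal sketch of that pattern, with made-up names (Holder and reset are purely illustrative):
struct Holder {
    int* m_p = nullptr;   // owning raw pointer, reset outside the destructor in this sketch

    void reset() {
        delete m_p;       // frees the current object (deleting nullptr is a no-op)
        m_p = nullptr;    // a second reset(), or a stray delete m_p, is now harmless
    }
};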
The purpose of nulling a deleted member pointer in a destructor is less obvious: it is done when there is a suspicion that other code may still hold a reference to that member. Relying on this should generally be avoided, because it has limited utility once the memory for the destructed object is reclaimed. Although implementation dependent, in many environments the memory lives long enough that the program fails when the deleted memory is re-used, so the programmer can find the issue and fix it.
I have a class pointer declaration:
MyClass* a;
In the destructor I have:
if (a)
{
    delete a;
    a = NULL;
}
I get a problem when deleting the pointer a:
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
What is the cause of the problem and how can I get rid of it?
With your current declaration:
MyClass* a;
a holds an indeterminate (garbage) value. If you never give it a valid value later, such as:
a = new MyClass();
It will point to an unknown place in memory, quite probably not a memory area reserved for your program, and hence the error when you try to delete it.
The easiest way to avoid this problem is to give a a value when you declare it:
MyClass* a = new MyClass();
or, if you cannot give it a value when you declare it (maybe you don't know it yet), initialize it to null:
MyClass* a = 0;
By the way, you can remove the test (if (a)) from your code. delete is a no-op on a null pointer.
Use a smart pointer to free the memory; explicit delete in application code is always wrong.
Unless you have initialized the pointer to something after this:
MyClass* a;
the pointer a will hold some random value. So your test
if (a) { }
will pass, and you attempt to delete some random memory location.
You can avoid this by initializing the pointer:
MyClass* a = 0;
Other options are that the object pointed to has been deleted elsewhere and the pointer not set to 0, or that it points to an object that is allocated on the stack.
As has been pointed out elsewhere, you could avoid all this trouble by using a smart pointer as opposed to a bare pointer in the first place. I would suggest having a look at std::unique_ptr.
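A minimal sketch of that suggestion, with hypothetical names (Owner is made up; MyClass stands in for the class from the question; assuming C++14):
#include <memory>

struct MyClass {};  // stand-in for the class from the question

class Owner {
public:
    Owner() : a(std::make_unique<MyClass>()) {}  // a always points at a valid object
    // No user-written destructor needed: the unique_ptr deletes the MyClass exactly once,
    // so there is no uninitialised-pointer delete and no double delete.
private:
    std::unique_ptr<MyClass> a;
};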
How did you allocate the memory that a points to? If you used new[] (in order to create an array of MyClass), you must deallocate it with delete[] a;. If you allocated it with malloc() (which is probably a bad idea when working with classes), you must deallocate it with free().
If you allocated the memory with new, you have probably made a memory management error somewhere else - for instance, you might already have deallocated a, or you have written outside the bounds of some array. Try using Valgrind to debug memory problems.
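A minimal sketch of matching each allocation with its corresponding deallocation (illustrative only):
#include <cstdlib>

struct MyClass {};

int main() {
    MyClass* arr = new MyClass[10];        // array new ...
    delete[] arr;                          // ... must be matched by array delete

    void* buf = std::malloc(sizeof(int));  // malloc ...
    std::free(buf);                        // ... must be matched by free, never delete

    MyClass* one = new MyClass;            // plain new ...
    delete one;                            // ... matched by plain delete
}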
You should use
MyClass* a = NULL;
in your declaration. If you never assign a valid object to a, the pointer points to an indeterminate region of memory. When the containing class's destructor executes, it tries to delete that random location.
When you do MyClass* a; you declare a pointer without allocating any memory. You don't initialize it, and a is not necessarily NULL. So when you try to delete it, your test if (a) succeeds, but deallocation fails.
You should do MyClass* a = NULL; or MyClass* a(nullptr); if you can use C++11.
(I assume here you don't use new anywhere in this case, since you tell us that you only declare a pointer.)
On Learn C++, they wrote this to free memory:
int *pnValue = new int; // dynamically allocate an integer
*pnValue = 7; // assign 7 to this integer
delete pnValue;
pnValue = 0;
My question is: "Is the last statement needed to free the memory correctly and completely?"
I thought that the pointer pnValue itself is still on the stack, and that new doesn't change anything about the pointer variable itself. And if it is on the stack, it will be cleaned up when execution leaves the scope it is declared in, won't it?
It's not necessary, but it's always a good idea to set pointers to NULL (0) when you're done with them. That way, if you pass one to a function (which would be a bug), it will likely give you an error rather than trying to operate on invalid memory, which makes the bug much easier to track down. It also makes conditionals simpler.
Setting a pointer to NULL (or zero) after deleting it is not necessary. However it is good practice. For one thing, you won't be able to access some random data if you later dereference the pointer. Additionally, you'll often find code with the following:
if(ptr)
{
    delete ptr;
    ptr = NULL;
}
So setting the pointer to NULL will ensure it won't be deleted twice.
Finally, you might find code like this:
void foo(bar *ptr)
{
    if(!ptr) throw Exception(); // or return, or do some other error checking
    // ...
}
And you'd probably want that safety check to be hit.