The following code is just used to illustrate my question.
template<class T>
class array
{
public:
// constructor
array(int cap = 10) : capacity(cap)
{ element = new T[capacity]; size = 0; }
// destructor
~array(){delete [] element;}
void erase(int i);
private:
T *element;
int capacity;
int size;
};
template<class T>
void array<T>::erase(int i) {
// copy
// destruct object
element[i].~T(); // <-- the explicit destructor call in question
// other codes
}
Suppose I have array<string> arr in main.cpp. When I call erase(5), the object at element[5] is destroyed but its storage is not deallocated. Can I just use element[5] = "abc" to put a new value there, or do I have to use placement new to construct the new value in the storage of element[5]?
When the program ends, arr will call its own destructor, which in turn calls delete[] element. So the destructor of string runs to destroy each object first, and then the space is freed. But since I have explicitly destroyed element[5], does it matter that the destructor (invoked by arr's destructor) runs twice on element[5]? I know the space cannot be deallocated twice, but what about the object? I made some tests and it seems fine if I just destroy an object twice, as opposed to deallocating the space twice.
Update
The answers are:
(1) I have to use placement new if I explicitly call the destructor.
(2) Destroying an object twice is undefined behavior; it may appear to work on most systems, but the practice should be avoided.
You need to use the placement-new syntax:
new (element + 5) string("abc");
It would be incorrect to say element[5] = "abc"; this would invoke operator= on element[5], which is not a valid object, yielding undefined behavior.
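For concreteness, here is a minimal, self-contained sketch of that pattern (the name element and the capacity of 10 merely mirror the question's example; this is not the questioner's actual class): destroy a slot explicitly, re-create it with placement new, and only then let every slot be destroyed exactly once.
#include <new>
#include <string>

int main()
{
    // raw storage for 10 strings, mirroring the question's element array
    std::string *element =
        static_cast<std::string*>(operator new(10 * sizeof(std::string)));
    for (int i = 0; i < 10; ++i)
        new (element + i) std::string("value");

    element[5].~basic_string();            // explicit destruction: slot 5 is now raw storage
    new (element + 5) std::string("abc");  // placement new constructs a valid object there

    for (int i = 0; i < 10; ++i)           // every slot holds a live object again,
        element[i].~basic_string();        // so each one is destroyed exactly once
    operator delete(element);              // finally release the raw storage
}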
When the program ends, arr will call its own destructor, which in turn calls delete[] element.
This is wrong: you are going to end up calling the destructor for objects whose destructors have already been called (e.g., element[5] in the aforementioned example). This also yields undefined behavior.
Consider using std::allocator and its interface. It makes it easy to separate allocation from construction. It is used by the C++ Standard Library containers.
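As a hedged sketch of what that interface looks like in modern C++ (going through std::allocator_traits, since the construct/destroy members of std::allocator were deprecated in C++17), allocation, construction, destruction, and deallocation are four separate steps:
#include <memory>
#include <string>

int main()
{
    std::allocator<std::string> alloc;
    using traits = std::allocator_traits<std::allocator<std::string>>;

    std::string *p = traits::allocate(alloc, 3);     // raw storage, no objects yet
    for (int i = 0; i < 3; ++i)
        traits::construct(alloc, p + i, "hello");    // construct each element in place

    traits::destroy(alloc, p + 1);                   // destroy one element...
    traits::construct(alloc, p + 1, "abc");          // ...and re-construct it before cleanup

    for (int i = 0; i < 3; ++i)
        traits::destroy(alloc, p + i);               // destroy every element exactly once
    traits::deallocate(alloc, p, 3);                 // release the storage
}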
Just to explain further exactly how the UB is likely to bite you...
If you erase() an element - explicitly invoking the destructor - that destructor may do things like decrement reference counters and clean up, delete pointers, etc. When your array<> destructor then does a delete[] element, that will invoke the destructors on each element in turn, and for erased elements those destructors are likely to repeat their reference-count maintenance, pointer deletion, etc., but this time the initial state isn't what they expect and their actions are likely to crash the program.
For that reason, as Ben says in his comment on James' answer, you absolutely must have replaced an erased element - using placement new - before the array's destructor is invoked, so the destructor will have some legitimate state from which to destruct.
The simplest type of T that illustrates this problem is:
struct T
{
T() : p_(new int) { }
~T() { delete p_; }
int* p_;
};
Here, the p_ value set by new would be deleted during an erase(), and deleted again - invalidly - if left unchanged when ~array() runs. To fix this, p_ must be changed to something for which delete is valid before ~array() runs - either by somehow clearing it to 0 or by pointing it at another pointer returned by new. The most sensible way to do that is with placement new, which constructs a new object, obtaining a new and valid value for p_ and overwriting the old and useless memory content.
That said, you might think you could construct types for which repeated destruction was safe: for example, by setting p_ to 0 after the delete. That would probably work on most systems, but I'm pretty sure there's something in the Standard saying that invoking the destructor twice is UB regardless.
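A minimal sketch of that "seems to work" variant, reusing the struct T above: the defensive null makes the second delete a no-op in practice, but running the destructor twice remains undefined behavior.
struct T
{
    T() : p_(new int) { }
    ~T() { delete p_; p_ = 0; }   // defensive null: the second delete becomes a no-op
    int* p_;
};

int main()
{
    T t;
    t.~T();   // explicit destruction
}             // implicit destruction at end of scope: appears to work, formally still UB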
Related
In my understanding of the C++ memory model, only when an object array is created by new[] and deleted by delete[] are the scalar constructor/destructor used, and the compiler generates an internal for loop to iterate over every element location.
int main()
{
B obj;
B* pb = new B;//no scalar constructor
delete pb;//scalar deleting destructor
}
But I found that when I use new and delete to operate on just one element, without using [], VC still generates code for a 'scalar deleting destructor' in the debug build.
So my question is:
The scalar constructor/destructor don't have to appear in pairs, right? In my test program, I found there's only a scalar deleting destructor.
Should all objects created by new or new[] have a scalar deleting destructor? Why does a single element still need this? I don't see any necessity in the one-element case for exception handling or stack unwinding to rely on an extra deleting destructor. Any reasons?
The scalar constructor/destructor don't have to appear in pairs, right? In my test program, I found there's only a scalar deleting destructor.
I could be wrong, but I don't think there is anything on the constructor side that is analogous to the scalar deleting destructor.
Should all objects created by new or new[] have a scalar deleting destructor? Why does a single element still need this? I don't see any necessity in the one-element case for exception handling or stack unwinding to rely on an extra deleting destructor. Any reasons?
delete ptr calls the scalar deleting destructor.
delete [] ptr calls the vector deleting destructor.
A very good answer is found at http://www.pcreview.co.uk/threads/scalar-deleting-destructor.1428390/.
It's the name that's reported for a helper function that VC writes for every
class with a destructor. The "scalar deleting destructor" for class A is
roughly equivalent to:
void scalar_deleting_destructor(A* pa)
{
pa->~A();
A::operator delete(pa);
}
There's a sister function that's also generated, which is called the 'vector
deleting destructor'. It looks roughly like:
void vector_deleting_destructor(A* pa, size_t count)
{
for (size_t i = 0; i < count; ++i)
pa[i].~A();
A::operator delete[](pa);
}
There's a point in my program where the state of a certain object needs to be reset "to factory defaults". The task boils down to doing everything that is written in the destructor and constructor. I could delete and recreate the object - but can I instead just call the destructor and the constructor as normal objects? (in particular, I don't want to redistribute the updated pointer to the new instance as it lingers in copies in other places of the program).
class MyClass {
public:
MyClass();
~MyClass();
...
};
void reinit(MyClass* instance)
{
instance->~MyClass();
instance->MyClass();
}
Can I do this? If so, are there any risks, caveats, things I need to remember?
If your assignment operator and constructor are written correctly, you should be able to implement this as:
void reinit(MyClass* instance)
{
*instance = MyClass();
}
If your assignment operator and constructor are not written correctly, fix them.
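A hedged sketch of what "written correctly" means here - MyClass below is hypothetical, not the asker's class: its default constructor establishes the factory defaults and its implicitly generated assignment operator copies every member, so assigning from a temporary resets the object in place.
#include <string>
#include <vector>

class MyClass {
public:
    MyClass() : name_("default"), values_() {}   // factory defaults live here
private:
    std::string name_;
    std::vector<double> values_;
    // the implicit copy/move assignment is fine: every member manages itself
};

void reinit(MyClass* instance)
{
    *instance = MyClass();   // assign from a freshly constructed temporary
}

int main()
{
    MyClass obj;
    reinit(&obj);   // obj is back to factory defaults, its address unchanged
}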
The caveat of implementing the re-initialisation as destruction followed by construction is that if the constructor fails and throws an exception, the object will be destructed twice without being constructed again between the first and second destruction (once by your manual destruction, and once by the automatic destruction that occurs when its owner goes out of scope). This has undefined behaviour.
You could use placement-new:
void reinit(MyClass* instance)
{
instance->~MyClass();
new(instance) MyClass();
}
All pointers remain valid.
Or as a member function:
void MyClass::reinit()
{
this->~MyClass();
new(this) MyClass();
}
This should be used carefully; see http://www.gotw.ca/gotw/023.htm, which is about implementing an assignment operator with this trick, but some points apply here too:
The constructor should not throw
MyClass should not be used as a base class
It interferes with RAII, (but this could be wanted)
Credit to Fred Larson.
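A hypothetical usage sketch of that member-function form (MyClass here is illustrative only): every alias to the object stays valid because the storage address never changes.
#include <new>
#include <string>

class MyClass {
public:
    MyClass() : state_("factory default") {}
    void reinit()
    {
        this->~MyClass();       // run the destructor in place...
        new (this) MyClass();   // ...then reconstruct in the same storage
    }
private:
    std::string state_;
};

int main()
{
    MyClass obj;
    MyClass* alias = &obj;   // lingering pointer held elsewhere in the program
    obj.reinit();            // alias still points at a valid, freshly reset object
    (void)alias;
}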
Can I do this? If so, are there any risks, caveats, things I need to remember?
No, you can't do this. Although the explicit destructor call is technically possible, the approach as written is just undefined behavior.
Provided you have implemented the assignment operator of your class correctly, you can just write:
void reinit(MyClass* instance) {
*instance = MyClass();
}
You should use a smart pointer and rely on move semantics to get the behavior you want.
auto classObj = std::make_unique<MyClass>();
This creates a wrapped pointer that handles the dynamic memory for you. When you are ready to reset classObj to the factory defaults, all you need is:
classObj = std::make_unique<MyClass>();
This "move-assignment" operation will call the destructor of MyClass, and then reassign the classObj smart pointer to point to a newly constructed instance of MyClass. Lather, rinse, repeat as necessary. In other words, you don't need a reinit function. Then when classObj is destroyed, its memory is cleaned up.
instance->MyClass(); is illegal; you will get a compilation error.
instance->~MyClass(); is possible. This does one of two things:
Nothing, if MyClass has a trivial destructor
Runs the code in the destructor and ends the lifetime of the object, otherwise.
If you use an object after its lifetime is ended, you cause undefined behaviour.
It is rare to write instance->~MyClass(); unless you either created the object with placement new in the first place, or you are about to re-create the object with placement new.
In case you are unaware, placement new creates an object when you already have got storage allocated. For example this is legal:
{
std::string s("hello");
s.~basic_string();
new(&s) std::string("goodbye");
std::cout << s << '\n';
}
You can try using a placement new expression (instance is already a MyClass*, so it is passed directly):
new (instance) MyClass();
Assume a pointer is allocated at one point and then returned through different nested functions. At some point, I want to deallocate this pointer after checking whether it is valid or has already been deallocated by someone else.
Is there any guarantee that any of these will work?
if(ptr != NULL)
delete ptr;
OR
if(ptr)
delete ptr;
This code does not work. It always gives a segmentation fault:
#include <iostream>
class A
{
public:
int x;
A(int a){ x=a;}
~A()
{
if(this || this != NULL)
delete this;
}
};
int main()
{
A *a = new A(3);
delete a;
a=NULL;
}
EDIT
Whenever we talk about pointers, people start asking why we don't use smart pointers.
Just because smart pointers exist doesn't mean everyone can use them.
We may be working on systems which use old-style pointers. We cannot convert all of them to smart pointers one fine day.
if(ptr != NULL) delete ptr;
OR
if(ptr) delete ptr;
The two are actually equivalent, and also the same as delete ptr;, because calling delete on a NULL pointer is guaranteed to work (as in, it does nothing).
And they are not guaranteed to work if ptr is a dangling pointer.
Meaning:
int* x = new int;
int* ptr = x;
//ptr and x point to the same location
delete x;
//x is deleted, but ptr still points to the same location
x = NULL;
//even if x is set to NULL, ptr is not changed
if (ptr) //this is true
delete ptr; //this invokes undefined behavior
In your specific code, you get the crash because you call delete this in the destructor, which in turn calls the destructor again. Since this is never NULL, you'll get a stack overflow because the destructor goes uncontrollably recursive.
Do not call delete this in the destructor:
5.3.5, Delete: If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will
invoke the destructor (if any) for the object or the elements of the array being deleted.
Therefore, you will have infinite recursion inside the destructor.
Then:
if (p)
delete p;
the check for p being not null (if (x) in C++ means if x != 0) is superfluous. delete does that check already.
This would be valid:
class Foo {
public:
Foo () : p(0) {}
~Foo() { delete p; }
private:
int *p;
// Handcrafting copy assignment for classes that store
// pointers is seriously non-trivial, so forbid copying:
Foo (Foo const&) = delete;
Foo& operator= (Foo const &) = delete;
};
Do not assume any builtin type, like int, float, or a pointer to something, to be initialized automatically; therefore, do not assume them to be 0 when you do not explicitly initialize them (only global variables are zero-initialized):
8.5 Initializers: If no initializer is specified for an object, the object is default-initialized; if no initialization is performed, an
object with automatic or dynamic storage duration has indeterminate value. [ Note: Objects with static or thread storage duration are zero-initialized
So: Always initialize builtin types!
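A tiny illustration of the quoted rule (nothing here comes from the thread): automatic builtins are left with indeterminate values, while objects with static storage duration are zero-initialized.
#include <iostream>

int global_i;              // static storage duration: zero-initialized

int main()
{
    int local_i;           // automatic storage: indeterminate value, must not be read
    int local_ok = 0;      // explicitly initialized: safe to use
    std::cout << global_i << ' ' << local_ok << '\n';   // prints "0 0"
    (void)local_i;         // never read; reading it would be undefined behavior
}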
My question is how I should avoid double deletion of a pointer and prevent a crash.
Destructors ought to be entered and left exactly once. Not zero times, not two times: once.
And if you have multiple places that can reach the pointer, but are unsure about when you are allowed to delete, i.e. if you find yourself bookkeeping, use a more trivial algorithm, more trivial rules, or smart-pointers, like std::shared_ptr or std::unique_ptr:
class Foo {
public:
Foo (std::shared_ptr<int> tehInt) : tehInt_(tehInt) {}
private:
std::shared_ptr<int> tehInt_;
};
int main() {
std::shared_ptr<int> tehInt;
Foo foo (tehInt);
}
You cannot assume that the pointer will be set to NULL after someone has deleted it; this is certainly the case with Embarcadero C++Builder XE. It may happen to be set to NULL afterwards, but do not use the fact that it is non-NULL as permission for your code to delete it again.
You ask: "At one point, I want to de-allocate this pointer after checking whether it is valid or already de-allocated by someone."
There is no portable way in C/C++ to check whether a naked pointer is valid or not. That's it. End of story right there. You can't do it. Again: that applies only if you use a naked, or C-style, pointer. There are other kinds of pointers that don't have that issue, so why not use them instead?
Now the question becomes: why the heck do you insist that you should use naked pointers? Don't use naked pointers, use std::shared_ptr and std::weak_ptr appropriately, and you won't even need to worry about deleting anything. It'll get deleted automatically when the last pointer goes out of scope. Below is an example.
The example code shows that there are two object instances allocated on the heap: an integer, and a Holder. As test() returns, the returned std::auto_ptr<Holder> is not used by the caller main(). Thus the pointer gets destructed, thus deleting the instance of the Holder class. As the instance is destructed, it destructs the pointer to the instance of the integer -- the second of the two pointers that point at that integer. Then myInt gets destructed as well, and thus the last pointer alive to the integer is destroyed, and the memory is freed. Automagically and without worries.
#include <memory>

class Holder {
    std::auto_ptr<int> data;
public:
    Holder(std::auto_ptr<int> & d) : data(d) {}   // takes ownership from the caller
};

std::auto_ptr<Holder> test() {
    std::auto_ptr<int> myInt(new int);
    std::auto_ptr<Holder> myHolder(new Holder(myInt));
    return myHolder;
}
int main(int, char**) {
test(); // notice we don't do any deallocations!
}
Simply don't use naked pointers in C++, there's no good reason for it. It only lets you shoot yourself in the foot. Multiple times. With abandon ;)
The rough guidelines for smart pointers go as follows:
std::auto_ptr -- use when the scope is the sole owner of an object, and the lifetime of the object ends when the scope dies. Thus, if auto_ptr is a class member, it must make sense that the pointed-to object gets deleted when the instance of the class gets destroyed. Same goes for using it as an automatic variable in a function. In all other cases, don't use it.
std::shared_ptr -- its use implies ownership, potentially shared among multiple pointers. The pointed-to object's lifetime ends when the last pointer to it gets destroyed. Makes managing lifetime of objects quite trivial, but beware of circular references. If Class1 owns an instance of Class2, and that very same instance of Class2 owns the former instance of Class1, then the pointers themselves won't ever delete the classes.
std::weak_ptr -- its use implies non-ownership. It cannot be used directly, but has to be converted back to a shared_ptr before use. A weak_ptr will not prevent an object from being destroyed, so it doesn't present circular reference issues. It is otherwise safe in that you can't use it if it's dangling. It will assert or present you with a null pointer, causing an immediate segfault. Using dangling pointers is way worse, because they often appear to work.
That's in fact the main benefit of weak_ptr: with a naked C-style pointer, you'll never know if someone has deleted the object or not. A weak_ptr knows when the last shared_ptr went out of scope, and it will prevent you from using the object. You can even ask it whether it's still valid: the expired() method returns true if the object was deleted.
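A minimal sketch of that behavior (illustrative only, using the standard shared_ptr/weak_ptr API): expired() reports whether the last owner is gone, and lock() refuses to hand out a pointer to a destroyed object.
#include <iostream>
#include <memory>

int main()
{
    std::weak_ptr<int> observer;
    {
        std::shared_ptr<int> owner = std::make_shared<int>(42);
        observer = owner;
        std::cout << observer.expired() << '\n';    // 0: the object is still alive
        if (std::shared_ptr<int> use = observer.lock())
            std::cout << *use << '\n';              // 42: safe access through a new owner
    }                                               // the last shared_ptr dies here
    std::cout << observer.expired() << '\n';        // 1: the object has been deleted
}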
You should never use delete this, for two reasons. First, the destructor is already in the process of destroying the object and is giving you the opportunity to tidy up (release OS resources, delete any pointers the object has created). Secondly, the object may be on the stack.
Basic question: when does a program call a class's destructor method in C++? I have been told that it is called whenever an object goes out of scope or is subjected to a delete.
More specific questions:
1) If the object is created via a pointer and that pointer is later deleted or given a new address to point to, does the object that it was pointing to call its destructor (assuming nothing else is pointing to it)?
2) Following up on question 1, what defines when an object goes out of scope (not regarding to when an object leaves a given {block}). So, in other words, when is a destructor called on an object in a linked list?
3) Would you ever want to call a destructor manually?
1) If the object is created via a pointer and that pointer is later deleted or given a new address to point to, does the object that it was pointing to call its destructor (assuming nothing else is pointing to it)?
It depends on the type of pointers. For example, smart pointers often delete their objects when they are deleted. Ordinary pointers do not. The same is true when a pointer is made to point to a different object. Some smart pointers will destroy the old object, or will destroy it if it has no more references. Ordinary pointers have no such smarts. They just hold an address and allow you to perform operations on the objects they point to by specifically doing so.
2) Following up on question 1, what defines when an object goes out of scope (not regarding to when an object leaves a given {block}). So, in other words, when is a destructor called on an object in a linked list?
That's up to the implementation of the linked list. Typical collections destroy all their contained objects when they are destroyed.
So, a linked list of pointers would typically destroy the pointers but not the objects they point to. (Which may be correct; they may be referenced by other pointers.) A linked list specifically designed to contain pointers, however, might delete the objects on its own destruction.
A linked list of smart pointers could automatically delete the objects when the pointers are deleted, or do so if they had no more references. It's all up to you to pick the pieces that do what you want.
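A small sketch contrasting the two cases just described (Foo is a throwaway type that announces its own destruction; nothing here comes from the thread):
#include <iostream>
#include <list>
#include <memory>

struct Foo {
    ~Foo() { std::cout << "~Foo\n"; }
};

int main()
{
    {
        std::list<Foo*> raw;
        raw.push_back(new Foo);
    }   // the list destroys its pointers only: the Foo leaks, nothing is printed

    {
        std::list<std::shared_ptr<Foo>> smart;
        smart.push_back(std::make_shared<Foo>());
    }   // the shared_ptr elements delete their Foo: "~Foo" is printed
}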
3) Would you ever want to call a destructor manually?
Sure. One example would be if you want to replace an object with another object of the same type but don't want to free memory just to allocate it again. You can destroy the old object in place and construct a new one in place. (However, generally this is a bad idea.)
// pointer is destroyed because it goes out of scope,
// but not the object it pointed to. memory leak
if (1) {
Foo *myfoo = new Foo("foo");
}
// pointer is destroyed because it goes out of scope,
// object it points to is deleted. no memory leak
if(1) {
Foo *myfoo = new Foo("foo");
delete myfoo;
}
// no memory leak, object goes out of scope
if(1) {
Foo myfoo("foo");
}
Others have already addressed the other issues, so I'll just look at one point: do you ever want to manually delete an object.
The answer is yes. #DavidSchwartz gave one example, but it's a fairly unusual one. I'll give an example that's under the hood of what a lot of C++ programmers use all the time: std::vector (and std::deque, though it's not used quite as much).
As most people know, std::vector will allocate a larger block of memory when/if you add more items than its current allocation can hold. When it does this, however, it has a block of memory that's capable of holding more objects than are currently in the vector.
To manage that, what vector does under the covers is allocate raw memory via the Allocator object (which, unless you specify otherwise, means it uses ::operator new). Then, when you use (for example) push_back to add an item to the vector, internally the vector uses a placement new to create an item in the (previously) unused part of its memory space.
Now, what happens when/if you erase an item from the vector? It can't just use delete -- that would release its entire block of memory; it needs to destroy one object in that memory without destroying any others, or releasing any of the block of memory it controls (for example, if you erase 5 items from a vector, then immediately push_back 5 more items, it's guaranteed that the vector will not reallocate memory when you do so).
To do that, the vector directly destroys the objects in the memory by explicitly calling the destructor, not by using delete.
If, perchance, somebody else were to write a container using contiguous storage roughly like a vector does (or some variant of that, like std::deque really does), you'd almost certainly want to use the same technique.
Just for example, let's consider how you might write code for a circular ring-buffer.
#ifndef CBUFFER_H_INC
#define CBUFFER_H_INC
template <class T>
class circular_buffer {
T *data;
unsigned read_pos;
unsigned write_pos;
unsigned in_use;
const unsigned capacity;
public:
circular_buffer(unsigned size) :
data((T *)operator new(size * sizeof(T))),
read_pos(0),
write_pos(0),
in_use(0),
capacity(size)
{}
void push(T const &t) {
// ensure there's room in buffer:
if (in_use == capacity)
pop();
// construct copy of object in-place into buffer
new(&data[write_pos++]) T(t);
// keep pointer in bounds.
write_pos %= capacity;
++in_use;
}
// return oldest object in queue:
T front() {
return data[read_pos];
}
// remove oldest object from queue:
void pop() {
// destroy the object:
data[read_pos++].~T();
// keep pointer in bounds.
read_pos %= capacity;
--in_use;
}
~circular_buffer() {
// first destroy any content
while (in_use != 0)
pop();
// then release the buffer.
operator delete(data);
}
};
#endif
Unlike the standard containers, this uses operator new and operator delete directly. For real use, you probably do want to use an allocator class, but for the moment it would do more to distract than contribute (IMO, anyway).
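A short usage sketch for the circular_buffer above - assuming the header is saved as cbuffer.h, which is a made-up filename - showing pop() destroying elements explicitly and the destructor cleaning up whatever remains:
#include <iostream>
#include <string>
#include "cbuffer.h"

int main()
{
    circular_buffer<std::string> buf(3);
    buf.push("a");
    buf.push("b");
    buf.push("c");
    buf.push("d");                      // buffer full: the oldest element ("a") is popped first
    std::cout << buf.front() << '\n';   // prints "b"
    buf.pop();                          // the explicit destructor call happens inside pop()
}   // ~circular_buffer() destroys "c" and "d", then releases the raw storage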
When you create an object with new, you are responsible for calling delete. When you create an object with make_shared, the resulting shared_ptr is responsible for keeping count and calling delete when the use count goes to zero.
Going out of scope does mean leaving a block. This is when the destructor is called, assuming that the object was not allocated with new (i.e. it is a stack object).
About the only time when you need to call a destructor explicitly is when you allocate the object with a placement new.
1) Objects are not created 'via pointers'. There is a pointer that is assigned to any object you 'new'. Assuming this is what you mean, if you call 'delete' on the pointer, it will actually delete (and call the destructor on) the object the pointer dereferences. If you assign the pointer to another object there will be a memory leak; nothing in C++ will collect your garbage for you.
2) These are two separate questions. A variable goes out of scope when the stack frame it's declared in is popped off the stack. Usually this is when you leave a block. Objects in a heap never go out of scope, though their pointers on the stack may. Nothing in particular guarantees that a destructor of an object in a linked list will be called.
3) Not really. There may be Deep Magic that would suggest otherwise, but typically you want to match up your 'new' keywords with your 'delete' keywords, and put everything in your destructor necessary to make sure it properly cleans itself up. If you don't do this, be sure to comment the destructor with specific instructions to anyone using the class on how they should clean up that object's resources manually.
Pointers -- Regular pointers don't support RAII. Without an explicit delete, there will be garbage. Fortunately C++ has auto pointers that handle this for you!
Scope -- Think of when a variable becomes invisible to your program. Usually this is at the end of {block}, as you point out.
Manual destruction -- Never attempt this. Just let scope and RAII do the magic for you.
To give a detailed answer to question 3: yes, there are (rare) occasions when you might call the destructor explicitly, in particular as the counterpart to a placement new, as dasblinkenlight observes.
To give a concrete example of this:
#include <iostream>
#include <new>
struct Foo
{
Foo(int i_) : i(i_) {}
int i;
};
int main()
{
// Allocate a chunk of memory large enough to hold 5 Foo objects.
int n = 5;
char *chunk = static_cast<char*>(::operator new(sizeof(Foo) * n));
// Use placement new to construct Foo instances at the right places in the chunk.
for(int i=0; i<n; ++i)
{
new (chunk + i*sizeof(Foo)) Foo(i);
}
// Output the contents of each Foo instance and use an explicit destructor call to destroy it.
for(int i=0; i<n; ++i)
{
Foo *foo = reinterpret_cast<Foo*>(chunk + i*sizeof(Foo));
std::cout << foo->i << '\n';
foo->~Foo();
}
// Deallocate the original chunk of memory.
::operator delete(chunk);
return 0;
}
The purpose of this kind of thing is to decouple memory allocation from object construction.
Remember that the constructor of an object is called immediately after memory is allocated for that object, whereas the destructor is called just before that memory is deallocated.
Whenever you use new - that is, attach an address to a pointer and claim space on the heap - you need to delete it.
1. Yes, when you delete something, the destructor is called.
2. When the destructor of the linked list is called, its objects' destructors are called. But if they are pointers, you need to delete them manually.
3. When the space is claimed by new.
Yes, a destructor (a.k.a. dtor) is called when an object goes out of scope if it is on the stack or when you call delete on a pointer to an object.
If the pointer is deleted via delete then the dtor will be called. If you reassign the pointer without calling delete first, you will get a memory leak because the object still exists in memory somewhere. In the latter instance, the dtor is not called.
A good linked list implementation will call the dtor of all objects in the list when the list is being destroyed (because you either called some method to destroy it or it went out of scope itself). This is implementation-dependent.
I doubt it, but I wouldn't be surprised if there is some odd circumstance out there.
If the object is created not via a pointer (for example, A a1 = A();), the destructor is called when the object is destroyed, which is always when the function in which the object lives finishes. For example:
void func()
{
...
A a1 = A();
...
}//finish
the destructor is called when execution reaches the line marked "finish".
If the object is created via a pointer (for example, A *a2 = new A();), the destructor is called when the pointer is deleted (delete a2;). If the pointer is not deleted explicitly by the user, or is given a new address before being deleted, a memory leak occurs. That is a bug.
In a linked list, if we use std::list<>, we needn't care about the destructor or memory leaks because std::list<> takes care of all of this for us. In a linked list written by ourselves, we should write the destructor and delete the pointers explicitly. Otherwise, it will cause a memory leak.
We rarely call a destructor manually. It is a function provided for the system to call.
Sorry for my poor English!
Why would one want to explicitly clear a vector member variable in a dtor (please see the code below)? What are the benefits of clearing the vector, even though it will be destroyed just after the last line of dtor code is executed?
class A
{
~A()
{
values_.clear();
}
private:
std::vector < double > values_;
};
A similar question about the following code:
class B
{
~B()
{
if (NULL != p_)
{
delete p_;
p_ = NULL;
}
}
private:
A * p_;
};
Since there is no way the dtor will get called twice, why nullify p_ then?
In class A, there is absolutely no reason to .clear() the vector-type member variable in the destructor. The vector destructor will .clear() the vector when it is called.
In class B, the cleanup code can simply be written as:
delete p_;
There is no need to test whether p_ != NULL first because delete NULL; is defined to be a no-op. There is also no need to set p_ = NULL after you've deleted it because p_ can no longer be legitimately accessed after the object of which it is a member is destroyed.
That said, you should rarely need to use delete in C++ code. You should prefer to use Scope-Bound Resource Management (SBRM, also called Resource Acquisition Is Initialization) to manage resource lifetimes automatically.
In this case, you could use a smart pointer. boost::scoped_ptr and std::unique_ptr (from C++0x) are both good choices. Neither of them should have any overhead compared to using a raw pointer. In addition, they both suppress generation of the implicitly declared copy constructor and copy assignment operator, which is usually what you want when you have a member variable that is a pointer to a dynamically allocated object.
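A hedged sketch of that suggestion (using std::unique_ptr and C++14's std::make_unique for brevity; the class names echo the question's A and B but are otherwise illustrative): with a smart-pointer member, class B needs no user-written destructor at all.
#include <memory>

class A { /* ... */ };

class B {
public:
    explicit B(std::unique_ptr<A> p) : p_(std::move(p)) {}
    // no user-written destructor: the unique_ptr deletes the A automatically,
    // and copying B is suppressed because unique_ptr is non-copyable
private:
    std::unique_ptr<A> p_;
};

int main()
{
    B b(std::make_unique<A>());
}   // ~B runs; the unique_ptr member deletes the A exactly once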
In your second example there's no reason to set p_ to null whatsoever, specifically because it is done in the destructor, meaning that the lifetime of p_ will end immediately after that.
Moreover, there's no point in comparing p_ to null before calling delete, since delete expression performs this check internally. In your specific artificial example, the destructor should simply contain delete p_ and noting else. No if, no setting p_ to null.
In the case of p_, there is no need to set it equal to null in the destructor, but it can be a useful defensive mechanism. Imagine a case where you have a bug and something still holds a pointer to a B object after it has been deleted:
If it tries to delete the B object, p_ being null will cause that second delete to be innocuous rather than a heap corruptor.
If it tries to call a method on the B object, p_ being null will cause a crash immediately. If p_ still holds the old value, the results are undefined and it may be hard to track down the cause of the resulting crash.