Never using:
- new
- delete
- release
and preferring to use:
- std::make_unique
- std::unique_ptr
- std::move
- reset (redundant)
should morally result in no memory leaks: new'ed pointers are only ever created inside smart pointers, from which they can never escape, because we have disallowed use of release.
One may therefore be tempted to adopt this coding style and never bother checking for memory leaks again: no matter where exceptions are thrown from, the RAII semantics of the smart pointers should always clean up any owned pointers as the stack is unwound.
Except C++ is full of nasty surprises. From the experience of having my assumptions repeatedly smashed by GotW, I can't help but think that there might be some corner case which manages to cause a memory leak anyway. Even worse, there might be an obvious way of releasing ownership of the pointer other than release itself. Or another smart pointer class without an explicit constructor which could accidentally ingest the raw pointer obtained via get, leading to double frees...
Are there any loopholes? If there are, can they be fixed by adding some more simple restrictions? (not allocating any memory doesn't count!) And if a set of coding guidelines that prevents all types of memory errors can be reached, would it be okay to completely forget about the details of memory management?
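For concreteness, a minimal sketch of the style I mean (all names are made up):

#include <memory>
#include <utility>

struct Node { int value = 0; };

std::unique_ptr<Node> make_node(int v) {
    auto n = std::make_unique<Node>(); // never a bare new
    n->value = v;
    return n;                          // ownership only ever moves
}

int main() {
    auto a = make_node(1);
    auto b = std::move(a); // a is now null; b owns the Node
    b.reset();             // redundant: b's destructor would do this anyway
}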
I thought cyclic references were only a problem with std::shared_ptr...
struct X
{
    std::unique_ptr<X> x;
};

void leak()
{
    auto x = std::make_unique<X>();
    x->x = std::move(x); // the object now owns itself; the local x is null
}                        // nothing is destroyed here - the X leaks
This can be fixed by ensuring that there is no cycle in the graph of types formed by adding an edge from A to B if and only if A contains a member std::unique_ptr<C> where C is a base of B.
#include <iostream>
#include <memory>

struct evil {
    std::shared_ptr<evil> p; // Alternatively unique_ptr
};

void foo() {
    auto e = std::make_shared<evil>(); // Alternatively make_unique
    e->p = e;                          // Alternatively std::move(e)
}

int main() {
    for (unsigned i = 1; i != 0; ++i) {
        foo();
        if (i % 100000000 == 0)
            std::cout << "I leak\n";
    }
}
The above program obeys your restrictions, and leaks like a sieve.
On top of that, undefined behavior can cause leaks.
would it be okay to completely forget about the details of memory management?
I'd say the answer to this is going to be no in programming for the foreseeable future. Even in garbage collected languages today you can't forget about the details of memory management if you want a performant application.
Memory leaks still happen in garbage-collected languages when programs accidentally hang onto references that are no longer needed. Code following the rules you set out above for C++ is still prone to the same issue, and even more so wherever shared_ptr is used. Common errors of this type are hanging on to objects through a container or through observers: managed references in a garbage-collected language, or shared_ptr in C++.
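For illustration, a minimal sketch (all names are made up) of that kind of leak in C++:

#include <memory>
#include <vector>

struct Image { std::vector<char> pixels = std::vector<char>(1 << 20); };

// A global cache that is never pruned: every shared_ptr in it keeps its
// Image alive for the whole program, exactly like a forgotten reference
// in a garbage-collected language.
std::vector<std::shared_ptr<Image>> g_cache;

std::shared_ptr<Image> load_image() {
    auto img = std::make_shared<Image>();
    g_cache.push_back(img); // silently extends the lifetime
    return img;
}

No rule about new/delete/release is violated, yet every image ever loaded stays resident even after all callers drop their copies.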
There's no guarantee, there's so much that one can screw up ...
Unions ... (Example)

union Devil {
    std::unique_ptr<int> ptr;
    int b;

    Devil () {}
    ~Devil () {
        // no idea what I'm doing: the destructor cannot tell which
        // member is active, so ~unique_ptr is never run here and the
        // pointee leaks whenever ptr was the active member
    }
};
Inheritance ... (Example)

struct Base {};
struct Derived : public Base {
    std::unique_ptr<int> ptr = std::make_unique<int>(42);
};

// later ...
std::unique_ptr<Base> p = std::make_unique<Derived>();
// oops - Base has no virtual destructor, so destroying p never runs
// ~Derived (formally undefined behaviour) and the int leaks
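(The usual fix, for the record: give Base a virtual destructor, so that deleting a Derived through a unique_ptr<Base> runs the right destructor.)

struct Base {
    virtual ~Base() = default; // deletion through Base* is now well-defined
};
struct Derived : public Base {
    std::unique_ptr<int> ptr = std::make_unique<int>(42);
};

std::unique_ptr<Base> p = std::make_unique<Derived>(); // no leak: ~Derived runs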
Playing on the stack ...

int f1[10];
std::unique_ptr<int> p[2];
int f2[10];
// later ...
p[2] = std::make_unique<int>(42);
// oops - p has only two elements, so p[2] is out of bounds: the write
// tramples neighbouring stack memory and the allocated int is lost
... or, more generally, undefined behaviour. The above is pretty surely only the tip of the iceberg ...
Consider a class like this:
class MyClass {
public:
    int* property;

    MyClass() {
        property = (int*)malloc(sizeof(int));
    }

    ~MyClass() {
        free(property);
    }
};
I understand that there are better ways to do this, but I don't think I understand why exactly this is incorrect.
Would there ever be a reason to initialize a pointer in a constructor with malloc?
At least a couple of reasons come to mind:
- you need to work with C code or a C library and pass it a pointer that is expected to have been allocated with malloc()
- you are limited on resources and want to be able to use realloc() with the buffer
There could be more, but you need to be careful when working with raw pointers, whether they come from new or malloc(). For example, your class violates the Rule of 3/5/0. The best way to handle that is to use a smart pointer.
Also remember that with malloc() you need to be sure the memory is properly initialized: for POD types this can be done with memset() or simple assignments, but for non-POD types it must be done through placement new, as sketched below. That usage is not trivial, so only reach for it when you really need it.
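A minimal sketch of that point (Widget is a made-up non-POD type):

#include <cstdlib>
#include <new>
#include <string>

struct Widget {
    std::string name; // non-POD member: memset()/assignment won't do
};

int main() {
    void* raw = std::malloc(sizeof(Widget));
    if (!raw) return 1;
    Widget* w = new (raw) Widget{"hello"}; // construct in place
    // ... use *w ...
    w->~Widget();   // manual destructor call is mandatory here,
    std::free(raw); // and only then may the raw bytes be released
}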
In general you should use new, not malloc, in C++ code. The only time not to do that is in extreme corner cases where you want to control the exact location for some reason, such as building custom memory pools (and even then you should overload operator new rather than calling malloc directly in the class definition). In that case you use 'placement new'.
The main reason to use new is that it will correctly construct the object it just allocated; malloc returns uninitialized memory. That may not matter for ints, but it certainly does for objects.
You have to make sure to disable the copy constructor and copy assignment operator, which are generated by default in C++. If you don't, you get undefined behavior: e.g. the code below destructs twice and frees the same pointer twice.
#include <cstdlib>

static int number_of_constructions = 0;
static int number_of_destructions = 0;

struct S {
    int* p;

    S() {
        p = (int*) malloc(sizeof(int));
        number_of_constructions++;
    }

    ~S() {
        free(p);
        number_of_destructions++;
    }
};

void foo() {
    S s;
    S s2 = s; // default memberwise copy: s2.p == s.p
}             // both destructors run here: the same pointer is freed twice
Link: https://godbolt.org/g/imujg1
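A minimal sketch of a fix (my addition, not part of the linked code): delete the copy operations so the double free cannot even compile:

#include <cstdlib>

struct S {
    int* p;
    S() : p(static_cast<int*>(malloc(sizeof(int)))) {}
    ~S() { free(p); }
    S(const S&) = delete;            // copying would duplicate ownership,
    S& operator=(const S&) = delete; // so forbid it outright
};

void foo() {
    S s;
    // S s2 = s; // error: use of deleted function - double free prevented
}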
What I understand from RAII is that whenever you allocate memory manually with new etc., you need to free it too. So, instead of freeing it manually, you should create classes with a constructor and destructor to do the job.
So, what are the following people talking about?
From: The meaning of the term - Resource Acquisition Is Initialization
The problem is that int * p = malloc(1000); is also initialization of an (integer) object, but it's not the kind of initialization we mean in the context of RAII.
...
@Fred: Indeed. int* is not a RAII type because it doesn't do cleanup. So it's not what RAII means, even though it is what RAII literally says.
Well, I know malloc is used in C, and new is used in C++.
Using malloc per se is not RAII because the resource is not freed when the variable goes out of scope, causing memory leaks. You can make it RAII if you wrap it inside a class and free the resource in the destructor, because local class instances do die when they go out of scope. However, note what is being discussed here: the int * type is not RAII, and if you enclose it in a RAII type it still isn't. The wrapper doesn't make the pointer RAII; the RAII type here is the wrapper, not the pointer itself.
As requested in the comments: RAII stands for Resource Acquisition Is Initialisation, and it's a design paradigm that ties the allocation of resources to the initialisation and destruction of objects. You don't seem far from understanding it: when an object is instantiated, it acquires all the necessary resources (memory, file descriptors, streams, and so on) and frees them when it goes out of scope or is otherwise destructed. This is a common paradigm in C++ because objects with automatic storage are destroyed when they go out of scope, and as such it's easy to guarantee proper cleanup. The obvious upside is that you don't need to worry about manual cleanup and tracking variable lifetimes.
On a related note, notice that this refers to stack allocation, not heap. Whatever means of allocation you use (new/delete vs malloc/free), dynamically allocated memory does not get magically freed, that's a given. It is variables allocated on the stack (local variables) that are destroyed when their scope ends.
Example:
class MyObject
{
public:
    MyObject()
    {
        // At this point resources are allocated (memory, files, and so on).
        // In this case, a simple allocation.
        // malloc would have been just as fine
        this->_ptr = new int;
    }

    ~MyObject()
    {
        // When the object is destructed all resources are freed
        delete this->_ptr;
    }

private:
    int* _ptr;
};
The previous sample code implements a RAII wrapper over a native pointer. Here's how to use it:
void f()
{
    MyObject obj;
    // Do stuff with obj, not including cleanup
}
In the previous example the int pointer is allocated when the variable is instantiated (at declaration time) and freed when the f call terminates, causing the variable to go out of scope and calling its destructor.
Note: As mentioned in the comments by Jarod42, the given example does not conform to the Rule of Three or the Rule of Five, which are common rules of thumb in C++. I would rather not add complexity to the given example, so I'll complete the note here. These rules state that if one method from a given set is implemented, then all methods of the set should be implemented; the set consists of the copy constructor, the move constructor, the copy- and move-assignment operators, and the destructor. Notice that this is a general guideline, not mandatory: for instance, immutable objects need not implement assignment or move operators at all. In this case, if the object were to implement these operators it would probably imply reference counting: when multiple copies of the handle exist, the destructor must not free the resources until all copies are destroyed. I believe that such an implementation would fall out of scope, and as such I'm leaving it out.
By example
NOT RAII:

void foo()
{
    int* p = (int*)malloc(sizeof(int) * N);
    // do stuff
    free(p);
}

also NOT RAII:

void foo()
{
    int* p = new int[N];
    // do stuff
    delete[] p;
}

RAII:

struct MyResourceManager
{
    int* p;
    MyResourceManager(size_t n) : p(static_cast<int*>(malloc(sizeof(int) * n))) { }
    ~MyResourceManager() { free(p); }
};

void foo()
{
    MyResourceManager resource(N);
    // doing stuff with resource.p
}

Also RAII (better):

struct MyResourceManager
{
    int* p;
    MyResourceManager(size_t n) : p(new int[n]) { }
    ~MyResourceManager() { delete[] p; }
};

void foo()
{
    MyResourceManager resource(N);
    // doing stuff with resource.p
}

Also RAII (best for this use case):

void foo()
{
    std::unique_ptr<int[]> p(new int[N]);
    // doing stuff with p
}
RAII is not use of operator new nor is it use of malloc().
It essentially means that, in the process of initialising an object, all resources that object needs to function are allocated. The counterpart requirement is that, in the process of destroying the object, that the resources it has allocated are released.
The concept applies to memory (most commonly), but also to any other resource that needs to be managed - file handles (opened in initialisation, closed when done), mutexes (grabbed in initialisation, released when done), communication ports, etc etc.
In C++, RAII is typically implemented by performing initialisation in the constructors of an object, with the release of resources done in the destructor. There are wrinkles, such as other member functions possibly reallocating (e.g. resizing a dynamically allocated array); in those cases, the member functions must do things in a way that ensures all allocated resources are appropriately released when the destructor runs. If there are multiple constructors, they need to do things consistently. You'll see this described as the constructor establishing a class invariant (i.e. that resources are allocated correctly), member functions maintaining that invariant, and the destructor being able to clean up because the invariant holds; a sketch of this follows.
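As a sketch of that idea (a made-up class, not from this answer), here is a member function that reallocates while keeping the invariant intact even if the allocation throws:

#include <algorithm>
#include <cstddef>

// Invariant: buf_ points to len_ valid ints.
class IntBuffer {
public:
    explicit IntBuffer(std::size_t n) : buf_(new int[n]()), len_(n) {}
    ~IntBuffer() { delete[] buf_; }

    IntBuffer(const IntBuffer&) = delete;            // copying omitted for brevity
    IntBuffer& operator=(const IntBuffer&) = delete;

    void resize(std::size_t n) {
        int* bigger = new int[n]();                  // may throw; invariant untouched
        std::copy(buf_, buf_ + std::min(len_, n), bigger);
        delete[] buf_;                               // release old storage only on success
        buf_ = bigger;
        len_ = n;
    }

private:
    int* buf_;
    std::size_t len_;
};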
The advantage of RAII - if done right - is that non-static variable lifetime is managed by the compiler (when an object goes out of scope, its destructor is invoked). So, the resources will be cleaned up correctly.
However, the requirement is always that the destructor does the cleanup (or that data members of the class have their own destructors that do the required cleanup). If a constructor initialises an int * using malloc(), then it is not enough to assume the destructor will clean up: the destructor must pass that pointer to free(). If you don't do that, the C++ compiler will not magically find some way to release the memory for you (the pointer will no longer exist when the destructor is done, but the allocated memory it pointed to will not be released, so the result is a memory leak). C++ does not inherently use garbage collection, which is a trap for people used to garbage-collected languages who assume garbage collection will occur.
And it is undefined behaviour to use malloc() to allocate memory, and operator delete in any form to release it.
It is generally preferable to not use malloc() and free() in C++, because they do not work well with object construction and destruction (invoking constructors and destructors). Use operator new instead (and for whatever form of operator new you use, use the corresponding operator delete). Or, better yet, use standard C++ containers (std::vector, etc) as much as possible to avoid the need to worry about manually releasing memory you allocate.
Destruction of an int* doesn't release the resource it points to. It isn't safe to just let it go out of scope, so it isn't RAII.
The int * could be a member of a class that deletes the int* in its destructor, which is essentially what a unique_ptr of an int does. You make things like this RAII by wrapping them in code that encapsulates the deletion.
The discussion is about code that literally does initialization upon resource acquisition, but doesn't follow the RAII design.
In the shown example with malloc, 1000 bytes of memory are allocated (resource allocation) and a variable p (pointer to int) is initialized with the result (initialization). However, this is obviously not an example of RAII, because the object (of type int *) doesn't take care of the acquired resource in its destructor.
So no, malloc itself can not be RAII in any situation; it is an example of non-RAII code that nevertheless does "Initialization upon Resource Acquisition", which might be confusing for new C++ programmers at first glance.
In C++, unique_ptr represents a pointer that "owns" the thing it points to. You can supply the release function as the deleter (the second template parameter, passed as the second constructor argument):
std::unique_ptr<int[], std::function<void(void*)>>
p( (int *)malloc(1000 * sizeof(int)), std::free );
Of course, there's not much reason to do this instead of just using new (when the default deleter delete will do the right thing).
Yes, you can deal with int * p = malloc(1000); using the RAII paradigm. Smart pointers and std::vector use a very similar technique, though they probably don't use malloc and prefer new instead.
Here's a very simplistic look at what one can do with malloc. MyPointer is far from being useful in a real application. Its only purpose is to demonstrate the principle of RAII.
#include <cstdlib>
#include <iostream>

class MyPointer
{
public:
    MyPointer(size_t s) : p(static_cast<int*>(malloc(s))) {}
    ~MyPointer() { free(p); }
    int& operator[](size_t index) { return p[index]; }
private:
    int* p;
};

int main()
{
    // By initializing ptr you are acquiring resources.
    // When ptr gets destructed, the resource is released.
    MyPointer ptr(1000);
    ptr[0] = 10;
    std::cout << ptr[0] << std::endl;
}
The core idea behind RAII is:
Treat resource acquisition as though you are initializing an object.
Make sure the acquired resource(s) is(are) released when the object is destructed.
You can read more on RAII at Wikipedia.
So, what are the following people talking about?
What is RAII?
RAII in a nutshell is a very simple idea. It is the idea that no object may exist at all unless it is fully initialised.
Why is that good?
We now have a concrete guarantee that a 'half built' object cannot be accidentally used - because at no point in the logical flow of the program can it possibly exist.
How do we achieve it?
a) Always manage resources (memory, files, mutex locks, database connections) in a class of their own, specifically tailored to managing only that resource.
b) Build complex logic out of collections of objects covered by [a].
c) Always throw if anything in the constructor fails (to guarantee that a failed object cannot exist).
d) If we are managing more than one resource in a class, ensure that a failed construction cleans up the parts that have already been constructed. (NOTE: this is hard [sometimes impossible], which is why at this point you should be referred back to [a].)
Sounds hard?
Initialising your objects completely in the initialiser list, while wrapping all external resources in a manager class (e.g. files, memory) achieves perfect RAII effortlessly.
What's the advantage?
Your program may now contain only logic which makes it easier to reason about and to read. The compiler will take care of all resource management perfectly.
Effortless Compound Resource Management
An example of RAII that's hard without manager classes and easy with them?
struct double_buffer
{
    double_buffer()
        : buffer1(nullptr) // NOTE: redundant zero construction
        , buffer2(nullptr)
    {
        buffer1 = new char[100];     // new can throw!
        try {
            buffer2 = new char[100]; // if this throws we have to clean up buffer1
        }
        catch(...) {
            delete [] buffer1;       // clean up buffer1
            throw;                   // rethrow because failed construction must throw!
        }
    }

    // IMPORTANT!
    // you MUST write or delete the copy constructor, copy assignment,
    // move constructor and move assignment,
    // and you MUST write a destructor!

    char* buffer1;
    char* buffer2;
};
now the RAII version:
struct double_buffer
{
    double_buffer()
        : buffer1(new char[100]) // memory immediately transferred to manager
        , buffer2(new char[100]) // if this throws, the compiler will handle
                                 // the correct cleanup of buffer1
    {
        // nothing to do here
    }

    // no need to write copy constructor, move constructor,
    // copy assignment or move assignment
    // no need to write destructor

    std::unique_ptr<char[]> buffer1;
    std::unique_ptr<char[]> buffer2;
};
How does it improve my code?
some safe code that uses RAII:
auto t = merge(Something(), SomethingElse()); // pretty clear eh?
t.performAction();
the same code that does not use RAII:
TargetType t; // at this point uninitialised.
Something a;
if (a.construct()) {
    SomethingElse b;
    if (b.construct()) {
        bool ok = merge_onto(t, a, b); // t maybe initialised here
        b.destruct();
        a.destruct();
        if (!ok)
            throw std::runtime_error("merge failed");
    }
    else {
        a.destruct();
        throw std::runtime_error("failed to create b");
    }
}
else {
    throw std::runtime_error("failed to create a");
}
// ... finally, we may now use t because we can (just about) prove that it's valid
t.performAction();
The difference
The RAII code is written solely in terms of logic.
The non-RAII code is 40% error handling, 40% lifetime management and only 20% logic. Furthermore, the logic is hidden in amongst all the other garbage, making even this short block of code very hard to reason about.
I want to have a class with a pointer member variable. This pointer should point to an object which may be stack-allocated or heap-allocated. However, this pointer should not have any ownership. In other words, no delete should be called at all when the pointer goes out of scope. I think that a raw pointer could solve the problem... However, I am not sure if there is a better C++11 approach than raw pointers?
Example:
struct bar{}; // some type

class foo{
public:
    bar* pntr;
};

int main(){
    bar a;
    foo b;
    b.pntr = &a;
}
Raw pointers are perfectly fine here. C++11 doesn't have any other "dumb" smart pointer that deals with non-owning objects, so you cannot use C++11 smart pointers. There is a proposal for a "stupid" smart pointer for non-owned objects:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4282.pdf
already implemented experimentally as std::experimental::observer_ptr (thanks @T.C. for the hint).
Another alternative is to use a smart pointer with a custom deleter that doesn't do anything:
#include <memory>

int main()
{
    int a{42};
    auto no_op = [](int*){};
    std::unique_ptr<int, decltype(no_op)> up(&a, no_op);
}
or, as mentioned by @T.C. in the comment, a std::reference_wrapper.
As mentioned by @Lightness Races in Orbit, a std::weak_ptr may also be a solution, as it too is a non-owning smart pointer. However, a std::weak_ptr can only be constructed from a std::shared_ptr or another std::weak_ptr. A serious downside is that std::shared_ptr is a "heavy" object (because of the internal reference-counting mechanism). Note that even in this case the std::shared_ptr must have a no-op custom deleter; otherwise, when the last shared_ptr dies, it would call delete on a pointer to an automatic variable and corrupt the stack.
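A minimal sketch of that combination, assuming you really do want a weak_ptr to an automatic variable:

#include <memory>

int main() {
    int a = 42;
    std::shared_ptr<int> sp(&a, [](int*) {}); // no-op deleter: never deletes &a
    std::weak_ptr<int> wp = sp;
    if (auto locked = wp.lock())
        *locked += 1; // safe while sp (and a) are alive
} // sp dies: the deleter does nothing, so the automatic variable is untouched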
Using a raw pointer is perfectly ok here as you don't intend to let the pointer have ownership of the resource pointed to.
The problem with a raw pointer is that there's no way to tell if it still points to a valid object. Fortunately, std::shared_ptr has an aliasing constructor that you can use to effectively make a std::weak_ptr to a class member with automatic storage duration. Example:
#include <iostream>
#include <memory>

using namespace std;

struct A {
    int x;
};

void PrintValue(weak_ptr<int> wp) {
    if (auto sp = wp.lock())
        cout << *sp << endl;
    else
        cout << "Object is expired." << endl;
}

int main() {
    shared_ptr<A> a(new A);
    a->x = 42;
    weak_ptr<int> wpInt(shared_ptr<int>(a, &a->x));
    PrintValue(wpInt);
    a.reset(); // a->x has been destroyed; wpInt no longer points to a valid int
    PrintValue(wpInt);
    return 0;
}
Prints:
42
Object is expired.
The main benefit to this approach is that the weak_ptr does not prevent the object from going out of scope and being deleted, but at the same time it can safely detect when the object is no longer valid. The downsides are the increased overhead of the smart pointer, and the fact that you ultimately need a shared_ptr to an object. I.e. you can't do this exclusively with objects allocated on the stack.
If by "better approach" you mean "safer approach", then yes, I've implemented a "non-owning" smart pointer here: https://github.com/duneroadrunner/SaferCPlusPlus. (Shameless plug alert, but I think it's relevant here.) So your code would look like this:
#include "mseregistered.h"
...
class foo{
public:
mse::TRegisteredPointer<bar> pntr;
};
int main(){
mse::TRegisteredObj<bar> a;
foo b;
b.pntr=&a;
}
TRegisteredPointer is "smarter" than raw pointers in that it knows when the target gets destroyed. For example:
int main(){
    foo b;
    bar c;
    {
        mse::TRegisteredObj<bar> a;
        b.pntr = &a;
        c = *(b.pntr);
    }
    try {
        c = *(b.pntr);
    } catch(...) {
        // b.pntr "knows" that the object it was pointing to has been
        // deleted, so it throws an exception.
    };
}
TRegisteredPointer generally has a lower performance cost than, say, std::shared_ptr (much lower when you have the opportunity to allocate the target object on the stack). It's still fairly new though, and not well documented yet, but the library includes commented examples of its use (in the file "msetl_example.cpp", the bottom half).
The library also provides TRegisteredPointerForLegacy, which is somewhat slower than TRegisteredPointer but can be used as a drop-in substitute for raw pointers in almost any situation. (In particular it can be used before the target type is completely defined, which is not the case with TRegisteredPointer.)
In terms of the sentiment of your question, I think it's valid. By now C++ programmers should at least have the option of avoiding unnecessary risk of invalid memory access. Raw pointers can be a valid option too, but I think it depends on the context. If it's a complex piece of software where security is more important than performance, a safer alternative might be better.
Simply allocate the object dynamically and use a shared_ptr. Yes, it will actually delete the thing, but only if it's the last one holding a reference. Further, it prevents others from deleting it, too. This is exactly the right thing to do, both to avoid memory leaks and dangling pointers. Also check out the related weak_ptr, which you could perhaps use to your advantage too, if the lifetime requirements for the pointee are different.
My first question is: does the memory allocated with new inside a function get automatically deallocated when the function ends?
int* foo()
{
    int *a = new int; // memory allocated for an int
    *a = 3;
    return (a);
} // function ends -- is memory for the integer still allocated?
If the memory were de-allocated automatically after the function ends, then shouldn't my next code give some error about accessing memory that does not belong to me?
int main()
{
    int *x = foo();
    cout << *x;
}
No it certainly does not. Every new has to be balanced with a delete. (And, to avoid any future doubt, any new[] has to be balanced with a delete[]).
There are constructs in C++ that will allow the effective release of memory once a container object has gone out of scope. Have a look at std::shared_ptr and std::unique_ptr.
No, the memory is not deallocated.
You should deallocate it manually with delete a;
In languages like Java or C# there is a so-called garbage collector that handles memory deallocation when it determines that some data is no longer needed. Garbage collection can be used with C++, but it's not standard and in practice rarely used.
There are, however, other mechanisms you could use to automate deallocation of memory. Shared pointers are one of them. They introduce additional overhead. In regular C++ code usually the programmer is responsible for managing the memory (allocating and deallocating) manually. For beginners it's important to learn the basics before switching to more advanced mechanisms.
It is bad practice to allocate dynamically from within a function and depend on the mercy of some other function to deallocate. Ideally, the caller should allocate space, pass it to the called function, and deallocate it when it is no longer in use.
void foo(int* a)
{
    // a is pre-allocated by the caller
    *a = 3;
} // function ends -- the caller takes care of allocation and deallocation

int main()
{
    int* x = new int; // memory allocated for an int by the caller
    foo(x);           // pass x as argument
    cout << *x;
    delete x;         // deallocate, not required any more
    return 0;
}
No, it is your responsibility to arrange for deallocation:
int *i = new int;
delete i;
However, the above code will sooner or later evolve into something that is almost impossible to make exception-safe. Better not to use pointers at all, or if you really must, use a smart pointer, which will free the memory for you at the right moment:
std::shared_ptr<int> f() // some function handing out ownership
{
    std::shared_ptr<int> i(new int);
    *i = 0xbeef;
    return i;
}
There exist other smart pointers with different ownership semantics.
For most real-world applications, any overhead introduced by smart pointers, whether imposed or assumed, is easily outweighed by more valuable (I really mean money-saving) qualities like maintainability, extensibility and exception safety (each of which intermixes with the other two).
Never forget that there exist alternatives to pointers, depending on the situation:
standard containers: If you need things like arrays (see the sketch after this list)
smart pointers: If you really need a pointer
references: Which you cannot forget to deallocate
plain objects: No pointers at all. Rely on copying and moving, which usually makes for a high degree of maintainability, extensibility, exception safety and performance. In C++, this should be your default choice.
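As the sketch promised in the list above (names are made up): a container gives you an "array" with no new, no delete, and automatic release:

#include <string>
#include <vector>

std::vector<std::string> make_names() {
    std::vector<std::string> v; // owns its storage
    v.push_back("alice");
    v.push_back("bob");
    return v; // cheaply moved out; nothing to deallocate by hand
}             // if an exception is thrown anywhere, v cleans up after itself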
As above: no, raw pointers are never automatically deleted. This is why we don't use raw pointers for controlling lifetimes; we use smart pointers.
Here's your code snippet written correctly in modern C++:
std::unique_ptr<int> foo()
{
    return std::unique_ptr<int>(new int(3));
    // or std::make_unique<int>(3) in C++14
    // the unique_ptr is efficiently moved (or elided) out of the function
}

int main()
{
    // x is either created in place or move-constructed from foo()'s result
    std::unique_ptr<int> x = foo();

    // potential bug! pointers can be null
    if (x) {
        std::cout << *x;
    }
    else {
        std::cout << "x is null\n";
    }
}
When I wrap "raw" resources in a C++ class, in destructor code I usually simply release the allocated resource(s), without paying attention to additional steps like zeroing out pointers, etc.
e.g.:
class File
{
public:
    ...

    ~File()
    {
        if (m_file != NULL)
            fclose(m_file);
    }

private:
    FILE* m_file;
};
I wonder if this code style contains a potential bug: i.e., is it possible that a destructor is called more than once? In that case, the right thing to do in the destructor would be to clear the pointer to avoid double/multiple destruction:
~File()
{
    if (m_file != NULL)
    {
        fclose(m_file);
        m_file = NULL; // avoid double destruction
    }
}
A similar example could be made for heap-allocated memory: if m_ptr is a pointer to memory allocated with new[], is the following destructor code OK?
// In destructor:
delete [] m_ptr;
or should the pointer be cleared, too, to avoid double destruction?
// In destructor:
delete [] m_ptr;
m_ptr = NULL; // avoid double destruction
No. It is useful if you have a Close() function or the like:
void Close()
{
    if (m_file != NULL)
    {
        fclose(m_file);
        m_file = NULL;
    }
}

~File()
{
    Close();
}
This way, the Close() function is idempotent (you can call it as many times as you want), and you avoid one extra test in the destructor.
But since destructors in C++ can only be called once, assigning NULL to pointers there is pointless.
Unless, of course, it is for debugging purposes, particularly if you suspect a double delete.
If a destructor is called more than once, you already have undefined behavior. This will also not affect clients that may have a pointer to the resource themselves, so this is not preventing a double delete. A unique_ptr or scoped_ptr seem to be better solutions to me.
In a buggy application (for example, improper use of std::unique_ptr<> can result in two std::unique_ptr instances holding the same raw pointer), you can end up with a double delete as the second one goes out of scope.
We care about these bad cases; otherwise, what's the point of discussing setting a pointer to nullptr in the destructor? It's going away anyway!
Hence, in this example, at least, it would be better to let the program seg-fault inside a debugger during a unit-test, so you can trace the real cause of the problem.
So, in general, I don't find setting pointers to nullptr to be particularly useful for memory management.
You could do it, but a more robust alternative is to do unit tests and to judiciously use a memory checker like valgrind.
After all, with some memory errors, your program can seemingly run ok many times, until it crashes unexpectedly - much safer to do quality assurance with a memory checker, especially as your program gets larger, and memory errors become less obvious.
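For instance, this deliberately leaky toy program (my example) is flagged immediately:

// leak.cpp - build with "g++ -g leak.cpp", then run
// "valgrind --leak-check=full ./a.out": the 40 bytes below
// are reported as definitely lost.
int main() {
    int* p = new int[10]; // never delete[]'d
    p[0] = 1;
    return 0;
}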