I'll start out by saying, use smart pointers and you'll never have to worry about this.
What are the problems with the following code?
Foo * p = new Foo;
// (use p)
delete p;
p = NULL;
This was sparked by an answer and comments to another question. One comment from Neil Butterworth generated a few upvotes:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when it is a good thing to do, and times when it is pointless and can hide errors.
There are plenty of circumstances where it wouldn't help. But in my experience, it can't hurt. Somebody enlighten me.
Setting a pointer to 0 (which is "null" in standard C++; the NULL define from C is somewhat different) avoids crashes on double deletes.
Consider the following:
Foo* foo = 0; // Sets the pointer to 0 (C++ NULL)
delete foo; // Won't do anything
Whereas:
Foo* foo = new Foo();
delete foo; // Deletes the object
delete foo; // Undefined behavior
In other words, if you don't set deleted pointers to 0, you will get into trouble if you're doing double deletes. An argument against setting pointers to 0 after delete would be that doing so just masks double delete bugs and leaves them unhandled.
It's best to not have double delete bugs, obviously, but depending on ownership semantics and object lifecycles, this can be hard to achieve in practice. I prefer a masked double delete bug over UB.
Finally, as a sidenote on managing object allocation: I suggest you take a look at std::unique_ptr for strict/singular ownership, std::shared_ptr for shared ownership, or another smart pointer implementation, depending on your needs.
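A minimal sketch of both, assuming a C++11 compiler (Foo and its doSomething member are stand-ins of mine):
#include <memory>

struct Foo { void doSomething() {} };   // stand-in for the question's Foo

void example()
{
    std::unique_ptr<Foo> p(new Foo);    // sole owner; deleted when p goes out of scope
    p->doSomething();

    std::shared_ptr<Foo> q = std::make_shared<Foo>();
    std::shared_ptr<Foo> r = q;         // shared owners; deleted when the last one goes
}   // no delete, nothing to set to null, no pointer left to dangle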
Setting a pointer to NULL after you've deleted what it pointed to certainly can't hurt, but it's often a bit of a band-aid over a more fundamental problem: why are you using a pointer in the first place? I can see two typical reasons:
You simply wanted something allocated on the heap. In which case wrapping it in a RAII object would have been much safer and cleaner. End the RAII object's scope when you no longer need the object. That's how std::vector works, and it solves the problem of accidentally leaving pointers to deallocated memory around. There are no pointers.
Or perhaps you wanted some complex shared ownership semantics. The pointer returned from new might not be the same as the one that delete is called on. Multiple objects may have used the object simultaneously in the meantime. In that case, a shared pointer or something similar would have been preferable.
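To illustrate the first point, a minimal sketch of letting a container own the storage (my example, not from the original answer):
#include <vector>

void process()
{
    std::vector<int> data(1000);   // heap storage owned by the vector
    data[0] = 42;
    // ... use data ...
}   // storage released here; no raw pointer ever existed to go stale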
My rule of thumb is that if you leave pointers around in user code, you're Doing It Wrong. The pointer shouldn't be there to point to garbage in the first place. Why isn't there an object taking responsibility for ensuring its validity? Why doesn't its scope end when the pointed-to object does?
I've got an even better best practice: Where possible, end the variable's scope!
{
Foo* pFoo = new Foo;
// use pFoo
delete pFoo;
}
I always set a pointer to NULL (now nullptr) after deleting the object(s) it points to.
It can help catch many references to freed memory (assuming your platform faults on a deref of a null pointer).
It won't catch all references to freed memory if, for example, you have copies of the pointer lying around. But some is better than none.
It will mask a double-delete, but I find those are far less common than accesses to already freed memory.
In many cases the compiler is going to optimize it away. So the argument that it's unnecessary doesn't persuade me.
If you're already using RAII, then there aren't many deletes in your code to begin with, so the argument that the extra assignment causes clutter doesn't persuade me.
It's often convenient, when debugging, to see the null value rather than a stale pointer.
If this still bothers you, use a smart pointer or a reference instead.
I also set other types of resource handles to the no-resource value when the resource is freed (which is typically only in the destructor of an RAII wrapper written to encapsulate the resource).
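As an illustration of that habit, here is a minimal sketch of such a wrapper around a C file handle (the class name and layout are mine):
#include <cstdio>

class File {
    std::FILE* handle_;
public:
    explicit File(const char* path) : handle_(std::fopen(path, "r")) {}
    ~File()
    {
        if (handle_ != NULL)
            std::fclose(handle_);
        handle_ = NULL;   // reset to the no-resource value, as described above
    }
    std::FILE* get() const { return handle_; }
private:
    File(const File&);              // non-copyable (pre-C++11 style)
    File& operator=(const File&);
};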
I worked on a large (9 million statements) commercial product (primarily in C). At one point, we used macro magic to null out the pointer when memory was freed. This immediately exposed lots of lurking bugs that were promptly fixed. As far as I can remember, we never had a double-free bug.
Update: Microsoft believes that it's a good practice for security and recommends the practice in their SDL policies. Apparently MSVC++11 will stomp the deleted pointer automatically (in many circumstances) if you compile with the /SDL option.
Firstly, there are a lot of existing questions on this and closely related topics, for example Why doesn't delete set the pointer to NULL?.
In your code, the issue is what goes on in (use p). For example, if somewhere you have code like this:
Foo * p2 = p;
then setting p to NULL accomplishes very little, as you still have the pointer p2 to worry about.
This is not to say that setting a pointer to NULL is always pointless. For example, if p were a member variable pointing to a resource whose lifetime was not exactly the same as the class containing p, then setting p to NULL could be a useful way of indicating the presence or absence of the resource.
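A small sketch of that presence/absence idea (all names here are mine, purely illustrative):
#include <cstddef>

struct Resource {};   // placeholder for the real resource type

class Owner {
    Resource* p;   // NULL means "no resource currently attached"
public:
    Owner() : p(NULL) {}
    ~Owner() { delete p; }
    void attach() { if (p == NULL) p = new Resource; }
    void detach() { delete p; p = NULL; }   // NULL records the absence
    bool hasResource() const { return p != NULL; }
};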
If there is more code after the delete, yes. When the pointer is deleted in a destructor or at the end of a method or function, no.
The point of this practice is to remind the programmer, during run-time, that the object has already been deleted.
An even better practice is to use Smart Pointers (shared or scoped) which automagically delete their target objects.
As others have said, delete ptr; ptr = 0; is not going to cause demons to fly out of your nose. However, it does encourage the usage of ptr as a flag of sorts. The code becomes littered with delete and setting the pointer to NULL. The next step is to scatter if (arg == NULL) return; through your code to protect against the accidental usage of a NULL pointer. The problem occurs once the checks against NULL become your primary means of checking for the state of an object or program.
I'm sure that there is a code smell about using a pointer as a flag somewhere but I haven't found one.
I'll change your question slightly:
Would you use an uninitialized pointer? You know, one that you didn't set to NULL or allocate the memory it points to?
There are two scenarios where setting the pointer to NULL can be skipped:
the pointer variable goes out of scope immediately
you have overloaded the semantics of the pointer and are using its value not only as a memory pointer, but also as a key or raw value. This approach, however, suffers from other problems.
Meanwhile, arguing that setting the pointer to NULL might hide errors to me sounds like arguing that you shouldn't fix a bug because the fix might hide another bug. The only bugs that might show if the pointer is not set to NULL would be the ones that try to use the pointer. But setting it to NULL would actually cause exactly the same bug as would show if you use it with freed memory, wouldn't it?
If you have no other constraint that forces you to either set or not set the pointer to NULL after you delete it (one such constraint was mentioned by Neil Butterworth), then my personal preference is to leave it be.
For me, the question isn't "is this a good idea?" but "what behavior would I prevent or allow to succeed by doing this?" For example, if this allows other code to see that the pointer is no longer available, why is other code even attempting to look at freed pointers after they are freed? Usually, it's a bug.
It also does more work than necessary as well as hindering post-mortem debugging. The less you touch memory after you don't need it, the easier it is to figure out why something crashed. Many times I have relied on the fact that memory is in a similar state to when a particular bug occurred to diagnose and fix said bug.
Explicitly nulling after delete strongly suggests to a reader that the pointer represents something which is conceptually optional. If I saw that being done, I'd start worrying that everywhere the pointer gets used in the source, it should first be tested against NULL.
If that's what you actually mean, it's better to make that explicit in the source using something like boost::optional
optional<Foo*> p (new Foo);
// (use p.get(), but must test p for truth first!...)
delete p.get();
p = optional<Foo*>();
But if you really wanted people to know the pointer has "gone bad", I'll pitch in 100% agreement with those who say the best thing to do is make it go out of scope. Then you're using the compiler to prevent the possibility of bad dereferences at runtime.
That's the baby in all the C++ bathwater, shouldn't throw it out. :)
In a well structured program with appropriate error checking, there is no reason not to assign it null. 0 stands alone as a universally recognized invalid value in this context. Fail hard and Fail soon.
Many of the arguments against assigning 0 suggest that it could hide a bug or complicate control flow. Fundamentally, that is either an upstream error (not your fault (sorry for the bad pun)) or another error on the programmer's behalf -- perhaps even an indication that program flow has grown too complex.
If the programmer wants to introduce the use of a pointer which may be null as a special value and write all the necessary dodging around that, that's a complication they have deliberately introduced. The better the quarantine, the sooner you find cases of misuse, and the less they are able to spread into other programs.
Well structured programs may be designed using C++ features to avoid these cases. You can use references, or you can just say "passing/using null or invalid arguments is an error" -- an approach which is equally applicable to containers, such as smart pointers. Increasingly consistent and correct behavior keeps these bugs from getting far.
From there, you have only a very limited scope and context where a null pointer may exist (or is permitted).
The same may be applied to pointers which are not const. Following the value of a pointer is trivial because its scope is so small, and improper use is checked and well defined. If your toolset and engineers cannot follow the program after a quick read, or there is inappropriate error checking or inconsistent/lenient program flow, you have other, bigger problems.
Finally, your compiler and environment likely has some guards for the times when you would like to introduce errors (scribbling), detect accesses to freed memory, and catch other related UB. You can also introduce similar diagnostics into your programs, often without affecting existing programs.
Let me expand what you've already put into your question.
Here's what you've put into your question, in bullet-point form:
Setting pointers to NULL following delete is not universal good practice in C++. There are times when:
it is a good thing to do
and times when it is pointless and can hide errors.
However, there are no times when this is bad! You will not introduce more bugs by explicitly nulling it, you will not leak memory, you will not cause undefined behaviour to happen.
So, if in doubt, just null it.
Having said that, if you feel that you have to explicitly null some pointer, then to me this sounds like you haven't split up a method enough, and should look at the refactoring approach called "Extract method" to split up the method into separate parts.
There are always dangling pointers to worry about.
Yes.
The only "harm" it can do is to introduce inefficiency (an unnecessary store operation) into your program - but this overhead will be insignificant in relation to the cost of allocating and freeing the block of memory in most cases.
If you don't do it, you will have some nasty pointer dereference bugs one day.
I always use a macro for delete:
// The do/while wrapper makes the macro expand to a single statement, so it is safe in if/else.
#define SAFEDELETE(ptr) do { delete (ptr); (ptr) = NULL; } while (0)
(and similar for an array, free(), releasing handles)
You can also write "self delete" methods that take a reference to the calling code's pointer, so they force the calling code's pointer to NULL. For example, to delete a subtree of many objects:
// Declared in the class as: static void DeleteSubtree(TreeItem *&rootObject);
void TreeItem::DeleteSubtree(TreeItem *&rootObject)
{
    if (rootObject == NULL)
        return;
    rootObject->UnlinkFromParent();
    for (int i = 0; i < rootObject->numChildren; ++i)   // numChildren and child[] are assumed members
        DeleteSubtree(rootObject->child[i]);
    delete rootObject;
    rootObject = NULL;   // force the caller's pointer to NULL
}
Edit:
Yes, these techniques do violate some rules about use of macros (and yes, these days you could probably achieve the same result with templates), but in many years of use I never once accessed dead memory - one of the nastiest, most difficult, and most time-consuming problems to debug that you can face. In practice they have effectively eliminated a whole class of bugs from every team I have introduced them on.
There are also many ways you could implement the above - I am just trying to illustrate the idea of forcing people to NULL a pointer if they delete an object, rather than providing a means for them to release the memory that does not NULL the caller's pointer.
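For instance, a sketch of the template alternative mentioned above (my version, not the macro actually used on those projects):
#include <cstddef>

template <typename T>
void safeDelete(T*& ptr)
{
    delete ptr;
    ptr = NULL;   // the caller's pointer is nulled, because it was taken by reference
}

template <typename T>
void safeDeleteArray(T*& ptr)
{
    delete[] ptr;
    ptr = NULL;
}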
Of course, the above example is just a step towards an auto-pointer. Which I didn't suggest because the OP was specifically asking about the case of not using an auto pointer.
"There are times when it is a good thing to do, and times when it is pointless and can hide errors"
I can see two problems:
This simple code:
delete myObj;
myObj = 0;
becomes a four-liner in a multithreaded environment:
lock(myObjMutex);
delete myObj;
myObj = 0;
unlock(myObjMutex);
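For what it's worth, a scoped lock restores the exception safety here; a sketch assuming C++11's <mutex>, with myObj standing in for the pointer above:
#include <mutex>

struct Foo {};
Foo* myObj = 0;
std::mutex myObjMutex;

void destroyMyObj()
{
    std::lock_guard<std::mutex> guard(myObjMutex);   // unlocks automatically, even on exceptions
    delete myObj;
    myObj = 0;
}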
The "best practice" of Don Neufeld don't apply always. E.g. in one automotive project we had to set pointers to 0 even in destructors. I can imagine in safety-critical software such rules are not uncommon. It is easier (and wise) to follow them than trying to persuade
the team/code-checker for each pointer use in code, that a line nulling this pointer is redundant.
Another danger is relying on this technique in code that uses exceptions:
try {
    delete myObj;   // exception thrown in the destructor
    myObj = 0;      // never reached if delete throws
}
catch (...)
{
    // myObj = 0;   // <- nulling here risks a resource leak
}
if (myObj)
    // use myObj   <-- undefined behaviour
In such code you either produce a resource leak and postpone the problem, or the process crashes.
So these two problems, which came spontaneously to mind (Herb Sutter could surely name more), make all questions of the kind "how to avoid smart pointers and do the job safely with normal pointers" seem obsolete to me.
If you're going to reallocate the pointer before using it again (dereferencing it, passing it to a function, etc.), making the pointer NULL is just an extra operation. However, if you aren't sure whether it will be reallocated or not before it is used again, setting it to NULL is a good idea.
As many have said, it is of course much easier to just use smart pointers.
Edit: As Thomas Matthews said in this earlier answer, if a pointer is deleted in a destructor, there isn't any need to assign NULL to it since it won't be used again because the object is being destroyed already.
I can imagine setting a pointer to NULL after deleting it being useful in rare cases where there is a legitimate scenario of reusing it in a single function (or object). Otherwise it makes no sense - a pointer needs to point to something meaningful as long as it exists - period.
If the code does not belong to the most performance-critical part of your application, keep it simple and use a shared_ptr:
shared_ptr<Foo> p(new Foo);
//No more need to call delete
It performs reference counting and is thread-safe. You can find it in TR1 (the std::tr1 namespace, #include <memory>), or if your compiler does not provide it, get it from Boost.
I wrote a project using normal pointers and now I'm fed up with manual memory management.
What are the issues that one could anticipate during refactoring?
So far I have spent an hour replacing X* with shared_ptr<X> for the types whose memory I want managed automatically. Then I changed dynamic_cast to dynamic_pointer_cast. I still see many more errors (comparing with NULL, passing this to a function).
I know the question is a bit vague and subjective, but I think I can benefit from experience of someone who has already done this.
Are there some pitfalls?
Although it's easy to just use boost::shared_ptr everywhere, you should use the correct smart pointer as per ownership semantics.
In most cases, you will want to use std::unique_ptr by default, unless the ownership is shared among multiple object instances.
If you run into cyclical ownership problems, you can break up the cycles with boost::weak_ptr.
Also keep in mind that while passing shared_ptr's around, you should always pass them by const reference for performance reasons (avoids an atomic increment) unless you really want to confer ownership to a different entity.
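A sketch of the two kinds of signature (Widget is a placeholder type of mine):
#include <memory>

struct Widget {};

// Observing use: no ownership transferred, no atomic increment.
void inspect(const std::shared_ptr<Widget>& w);

// Ownership-taking use: the copy (and its reference-count bump) is deliberate.
void keep(std::shared_ptr<Widget> w);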
Are there some pitfalls?
Yes; by Murphy's law, if you blindly replace every pointer with shared_ptr, it'll turn out that isn't what you wanted, and you'll spend the next 6 months hunting the bugs you introduced.
What are the issues that one could anticipate during refactoring?
Inefficient memory management, unused resources being stored longer than necessary, memory leaks (circular references), invalid reference counting (same pointer assigned to multiple different shared_pointers).
Do NOT blindly replace everything with shared_ptr. Carefully investigate the program structure and make sure that shared_ptr is NEEDED and represents EXACTLY what you want.
Also, make sure you use version control that supports easy branching (git or mercurial), so when you break something you can revert to previous state or run something similar to "git bisect" to locate problem.
obviously you need to replace X* with shared_ptr
Wrong. It depends on context. If you have a pointer that points into the middle of some array (say, pixel data manipulation), then you won't be able to replace it with shared_ptr (and you won't need to). You need to use shared_ptr only when you need to ensure automatic deallocation of object. Automatic deallocation of object isn't always what you want.
If you wish to stick with boost, you should consider if you want a boost::shared_ptr or a boost::scoped_ptr. A shared_ptr is a resource to be shared between classes, whereas a scoped_ptr sounds more like what you may want (at least in some places). A scoped_ptr will automatically delete the memory when it goes out of scope.
Be wary when passing a shared_ptr to a function. The general rule with shared_ptr is to pass by value so a copy is created. If you pass it by reference then the pointer's reference count will not be incremented. In this case, you might end up deleting a piece of memory that you wanted kept alive.
There is a case, however, when you might want to pass a shared_ptr by reference. That is, if you want the memory to be allocated inside a different function. In this case, just make sure that the caller still holds the pointer for the lifetime of the function it is calling.
#include <boost/shared_ptr.hpp>
#include <iostream>

void allocPtr( boost::shared_ptr<int>& ptrByRef )
{
    ptrByRef.reset( new int );
    *ptrByRef = 3;
}

int main()
{
    boost::shared_ptr<int> myPointer;   // a plain shared_ptr, not a reference
    // I want a function to alloc the memory for this pointer.
    allocPtr( myPointer ); // I must be careful that I still hold the pointer
                           // when the function terminates
    std::cout << *myPointer << std::endl;
}
I'm listing the steps/issues involved. They worked for me, but I can't vouch that they are 100% correct
0) Check if there could be cyclic shared pointers. If so, can this lead to a memory leak? In my case, luckily, cycles did not need to be broken, because if I had a cycle, the objects in it were useful and should not be destroyed. Use weak pointers to break cycles.
1) You need to replace "most" X* with shared_ptr<X>. A shared_ptr is (only?) created immediately after every dynamic allocation of X. At all other times, it is copy-constructed, or constructed with an empty pointer (to signal NULL). To be safe (but a bit inefficient), pass these shared_ptrs only by reference. Anyway, it's likely that you never passed your pointers by reference to begin with => no additional change is required.
2) You might have used dynamic_cast<X*>(y) in some places. Replace that with
dynamic_pointer_cast<X>(y)
3) Wherever you passed NULL (e.g. to signal that a computation failed), pass an empty shared pointer instead.
4) Remove all delete statements for the concerned types.
5) Make your base class B inherit from enable_shared_from_this<B>. Then wherever you passed this, pass shared_from_this(). You might have to do a static cast if the function expected a derived type. Keep in mind that when you call shared_from_this(), some shared_ptr must already own this; in particular, don't call shared_from_this() in the constructor of the class (a sketch follows below).
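A sketch of step 5, written with the std:: spellings (the boost:: ones behave the same way):
#include <memory>

struct B : std::enable_shared_from_this<B> {
    std::shared_ptr<B> self() { return shared_from_this(); }
};

int main()
{
    std::shared_ptr<B> b = std::make_shared<B>();   // a shared_ptr already owns *this
    std::shared_ptr<B> alias = b->self();           // fine: shares b's control block
    // B stackB; stackB.self();                     // wrong: no shared_ptr owns stackB
}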
I'm sure one could semi-automate this process to get semantically equivalent, though not necessarily very efficient, code. The programmer probably only needs to reason about cyclic references (if any).
I used regexes a lot in many of these steps. It took about 3-4 hours. The code compiles and has executed correctly so far.
There is a tool that tries to automatically convert to smart pointers. I've never tried it. Here is a quote from the abstract of the following paper:
http://www.cs.rutgers.edu/~santosh.nagarakatte/papers/ironclad-oopsla2013.pdf
To enforce safety properties that are difficult to check statically, Ironclad C++ applies dynamic checks via templated “smart pointer” classes. Using a semi-automatic refactoring tool, we have ported nearly 50K lines of code to Ironclad C++.
Right now, object ownership/deletion in my C++ project is manually tracked (via comments mostly). Almost every heap allocated object is created using a factory of sorts
e.g.
auto b = a->createInstanceOfB(); //a owns b
auto c = b->createInstanceOfC(); //b owns c
//auto k = new K(); //not in the code
...
//b is no longer used..
a->destroyInstanceOfB(b); //destroyInstanceOf calls delete on it
What benefits, if any, will smart pointers provide in this situation?
It's not the creation you should worry about, it's the deletion.
With smart pointers (the reference-counting kind), objects can be commonly owned by several other objects, and when the last reference goes out of scope, the object is deleted automatically. This way, you won't have to manually delete anything anymore, you can only leak memory when you have circular dependencies, and your objects are never deleted from elsewhere behind your back.
The single-owner-only type (std::auto_ptr) also relieves you of your deleting duty, but it only allows one owner at a time (though ownership can be transferred). This is useful for objects that you pass around as pointers, but that you still want automatically cleaned up when they go out of scope (so that they work well in containers, and the stack unwinding in the case of an exception works as expected).
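A sketch of that single-owner transfer, shown with std::unique_ptr, the C++11 successor to auto_ptr (Part is a placeholder type):
#include <memory>
#include <utility>

struct Part {};

std::unique_ptr<Part> makePart()
{
    return std::unique_ptr<Part>(new Part);   // ownership moves out to the caller
}

int main()
{
    std::unique_ptr<Part> owner = makePart();
    std::unique_ptr<Part> next = std::move(owner);   // explicit transfer; owner is now empty
}   // next deletes the Part here; nothing leaks even during stack unwinding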
In any case, smart pointers make ownership explicit in your code, not only to you and your teammates, but also to the compiler - doing it wrong is likely to produce either a compiler error, or a runtime error that is relatively easy to catch with defensive coding. In manually memory-managed code, it is easy to get the ownership situation wrong somewhere (due to misreading comments, or assuming things the wrong way), and the resulting bug is typically hard to track down - you'll leak memory, overwrite stuff that's not yours, the program crashes at random, etc.; these all have in common that the situation where the bug occurs is unrelated to the offending code section.
Smart pointers enforce ownership semantics; that is, it's guaranteed that the object will be freed correctly even in the case of exceptions. You should always use them purely because of the safety, even if they express only very simple semantics such as std::unique_ptr. Moreover, a pointer that enforces the semantics reduces the need to document it, and less documentation means less documentation to be out of date or incorrect, especially where multiple parts of the same program express the same semantics.
Ultimately, smart pointers reduce many sources of error and there's little reason not to use them.
If an object is only owned by one other object, and dies with it, fine. You still need to make sure there are no dangling references, but this is not the hard case.
The hard case, is where you share ownership. In that case, you will want to have smart-ptrs (or something) to automatically figure out when to actually delete an object.
Note that shared ownership is not necessary everywhere, and avoiding it will likely simplify things down the road when your product goes bloaty. :)
I was recently interviewing for a C++ position, and I was asked how I guard against creating memory leaks. I know I didn't give a satisfactory answer to that question, so I'm throwing it to you guys. What are the best ways to guard against memory leaks?
Thanks!
What all the answers given so far boil down to is this: avoid having to call delete.
Any time the programmer has to call delete, you have a potential memory leak.
Instead, make the delete call happen automatically. C++ guarantees that local objects have their destructors called when they go out of scope. Use that guarantee to ensure your memory allocations are automatically deleted.
At its most general, this technique means that every memory allocation should be wrapped inside a simple class, whose constructor allocates the necessary memory, and destructor releases it.
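A minimal sketch of such a wrapper (scoped_ptr-like; the name Holder is mine):
template <typename T>
class Holder {
    T* p_;
public:
    explicit Holder(T* p) : p_(p) {}   // takes over an allocation already made
    ~Holder() { delete p_; }           // runs on normal exit and during stack unwinding
    T* operator->() const { return p_; }
    T& operator*() const { return *p_; }
private:
    Holder(const Holder&);             // non-copyable: exactly one owner
    Holder& operator=(const Holder&);
};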
Because this is such a commonly-used and widely applicable technique, smart pointer classes have been created that reduce the amount of boilerplate code. Rather than allocating memory, their constructors take a pointer to a memory allocation already made, and store that. When the smart pointer goes out of scope, it is able to delete the allocation.
Of course, depending on usage, different semantics may be called for. Do you just need the simple case, where the allocation should last exactly as long as the wrapper class lives? Then use boost::scoped_ptr or, if you can't use boost, std::auto_ptr. Do you have an unknown number of objects referencing the allocation with no knowledge of how long each of them will live? Then the reference-counted boost::shared_ptr is a good solution.
But you don't have to use smart pointers. The standard library containers do the trick too. They internally allocate the memory required to store copies of the objects you put into them, and they release the memory again when they're deleted. So the user doesn't have to call either new or delete.
There are countless variations of this technique, changing whose responsibility it is to create the initial memory allocation, or when the deallocation should be performed.
But what they all have in common is the answer to your question: the RAII idiom: Resource Acquisition Is Initialization. Memory allocations are a kind of resource. Resources should be acquired when an object is initialized, and released by the object itself, when it is destroyed.
Make the C++ scope and lifetime rules do your work for you. Never ever call delete outside of a RAII object, whether it is a container class, a smart pointer or some ad-hoc wrapper for a single allocation. Let the object handle the resource assigned to it.
If all delete calls happen automatically, there's no way you can forget them. And then there's no way you can leak memory.
Don't allocate memory on the heap if you don't need to. Most work can be done on the stack, so you should only do heap memory allocations when you absolutely need to.
If you need a heap-allocated object that is owned by a single other object then use std::auto_ptr.
Use standard containers, or containers from Boost instead of inventing your own.
If you have an object that is referred to by several other objects and is owned by no single one in particular then use either std::tr1::shared_ptr or std::tr1::weak_ptr -- whichever suits your use case.
If none of these things match your use case then maybe use delete. If you do end up having to manually manage memory then just use memory leak detection tools to make sure that you aren't leaking anything (and of course, just be careful). You shouldn't ever really get to this point though.
You'd do well to read up on RAII.
Replace new with shared_ptrs. Basically RAII. Make code exception-safe. Use the STL everywhere possible. If you use reference-counting pointers, make sure they don't form cycles. BOOST_SCOPE_EXIT from Boost is also very useful.
(Easy) Never ever let a raw pointer own an object (search your code for the regexp "\= *new"). Use shared_ptr or scoped_ptr instead, or even better, use real variables instead of pointers as often as you can.
(Hard) Make sure you don't have any circular references, with shared_ptrs pointing to each other, use weak_ptr to break them.
Done!
Use all kinds of smart pointers.
Use a definite strategy for the creation and deletion of objects, such as: whoever creates an object is responsible for deleting it.
make sure that you understand exactly how an object will be deleted every time you create one
make sure you understand who owns the pointer every time one is returned to you
make sure your error paths dispose of objects you have created appropriately
be paranoid about the above
In addition to the advice about RAII, remember to make your base class destructor virtual if there are any virtual functions.
To avoid memory leaks, what you must do is to have a clear and definite notion of who is responsible for deleting any dynamically allocated object.
C++ allows construction of objects on the stack (i.e. as kind-of local variables). This binds creation and destruction to the control flow: an object is created when program execution reaches its declaration, and the object is destroyed when execution escapes the block in which that declaration was made. Whenever the allocation need matches that pattern, use it. This will save you much of the trouble.
For other usages, if you can define and document a clear notion of responsibility, then this may work fine. For instance, you have a method or a function which returns a pointer to a newly allocated object, and you document that the caller becomes responsible for ultimately deleting that instance. Clear documentation coupled with good programmer discipline (something which is not easily achieved !) can solve many remaining problems of memory management.
In some situations, including undisciplined programmers and complex data structures, you may have to resort to more advanced techniques, such as reference counting. Each object is awarded a "counter" which is the number of other variables which point to it. Whenever a piece of code decides to no longer point to the object, the counter is decreased. When the counter reaches zero, the object is deleted. Reference counting requires strict counter handling. This can be done with so-called "smart pointers": these are object which are functionally pointers, but which automatically adjust the counter upon their own creation and destruction.
Reference counting works quite well in many situations, but it cannot handle cyclic structures. So for the most complex situations, you have to resort to the heavy artillery, i.e. a garbage collector. The one I link to is the GC for C and C++ written by Hans Boehm, and it has been used in some rather big projects (e.g. Inkscape). The point of a garbage collector is to maintain a global view of the complete memory space, to know whether a given instance is still in use or not. This is the right tool when local-view tools, such as reference counting, are not enough. One could argue that, at that point, one should ask oneself whether C++ is the right language for the problem at hand. Garbage collection works best when the language is cooperative (this unlocks a host of optimizations which are not doable when the compiler is unaware of what happens with memory, as is the case for a typical C or C++ compiler).
Note that none of the techniques described above allows the programmer to stop thinking. Even a GC can suffer from memory leaks, because it uses reachability as an approximation of future usage (there are theoretical reasons which imply that it is not possible, in full generality, to accurately detect all objects which will not be used thereafter). You may still have to set some fields to NULL to inform the GC that you will no longer access an object through a given variable.
I start by reading the following: https://stackoverflow.com/search?q=%5Bc%2B%2B%5D+memory+leak
A very good way is using smart pointers, e.g. boost::shared_ptr / std::tr1::shared_ptr. The memory will be freed once the (stack-allocated) smart pointer goes out of scope.
You can use a leak-detection utility.
If you work on Linux, use Valgrind (it's free).
Use Deleaker on Windows.
Smart pointers.
Memory management.
Override 'new' and 'delete' or use your own macros/templates.
On x86 you can regularly use Valgrind to check your code
I use boost::shared_ptr in my application in C++. The memory problem is really serious, and the application takes a large amount of memory.
However, because I put every newed object into a shared_ptr, when the application exits, no memory leaks can be detected.
There must be something like a std::vector<shared_ptr<T> > pool holding the resources. How can I know who holds the shared_ptr when debugging?
It is hard to review code line by line. Too much code...
You can't know, by only looking at a shared_ptr, where the "sibling pointers" are. You can test if one is unique() or get the use_count(), among other methods.
The popular widespread use of shared_ptr will almost inevitably cause unwanted and unseen memory occupation.
Cyclic references are a well-known cause, and some of them can be indirect and difficult to spot, especially in complex code that is worked on by more than one programmer; a programmer may decide that one object needs a reference to another as a quick fix and doesn't have time to examine all the code to see if he is closing a cycle. This hazard is hugely underestimated.
Less well understood is the problem of unreleased references. If an object is shared out to many shared_ptrs then it will not be destroyed until every one of them is zeroed or goes out of scope. It is very easy to overlook one of these references and end up with objects lurking unseen in memory that you thought you had finished with.
Although strictly speaking these are not memory leaks (it will all be released before the program exits) they are just as harmful and harder to detect.
These problems are the consequences of expedient false declarations:
Declaring what you really want to be single ownership as shared_ptr. scoped_ptr would be correct but then any other reference to that object will have to be a raw pointer, which could be left dangling.
Declaring what you really want to be a passive observing reference as shared_ptr. weak_ptr would be correct, but then you have the hassle of converting it to shared_ptr every time you want to use it.
I suspect that your project is a fine example of the kind of trouble that this practice can get you into.
If you have a memory intensive application you really need single ownership so that your design can explicitly control object lifetimes.
With single ownership opObject=NULL; will definitely delete the object and it will do it now.
With shared ownership spObject=NULL; ........who knows?......
One solution to dangling or circular smart pointer references we've done is customize the smart pointer class to add a debug-only bookkeeping function. Whenever a smartpointer adds a reference to an object, it takes a stack trace and puts it in a map whose each entry keeps track of
The address of the object being allocated (what the pointer points to)
The addresses of each smartpointer object holding a reference to the object
The corresponding stacktraces of when each smartpointer was constructed
When a smartpointer goes out of scope, its entry in the map gets deleted. When the last smartpointer to an object gets destroyed, the pointee object gets its entry in the map removed.
Then we have a "track leaks" command with two functions: '[re]start leak tracking' (which clears the whole map and enabled tracking if its not already), and 'print open references', which shows all outstanding smartpointer references created since the 'start leak tracking' command was issued. Since you can see the stack traces of where those smart pointers came into being, you can easily know exactly who's keeping from your object being freed. It slows things down when its on, so we don't leave it on all the time.
It's a fair amount of work to implement, but definitely worth it if you've got a codebase where this happens a lot.
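A much-reduced sketch of the bookkeeping idea (no stack traces; a std::shared_ptr with a custom deleter stands in for the instrumented smart pointer class, and all names are mine):
#include <cstdio>
#include <map>
#include <memory>
#include <string>

static std::map<void*, std::string> g_live;   // object address -> creation-site label

template <typename T>
std::shared_ptr<T> tracked(T* p, const std::string& site)
{
    g_live[p] = site;                          // record the open reference
    return std::shared_ptr<T>(p, [](T* q) {   // custom deleter unregisters the object
        g_live.erase(q);
        delete q;
    });
}

void printOpenReferences()                     // the "print open references" command
{
    for (const auto& entry : g_live)
        std::printf("%p still alive, created at %s\n", entry.first, entry.second.c_str());
}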
You may be experiencing a shared pointer memory leak via cycles. What happens is your shared objects may hold references to other shared objects which eventually lead back to the original. When this happens the cycle keeps all reference counts at 1 even though no one else can access the objects. The solution is weak pointers.
Try refactoring some of your code so that ownership is more explicitly expressed by the use of weak pointers instead of shared pointers in some places.
When looking at your class hierarchy it's possible to determine which class really should hold a shared pointer and which merely needs only the weak one, so you can avoid cycles if there are any and if the "real" owner object is destructed, "non-owner" objects should have already been gone. If it turns out that some objects lose pointers too early, you have to look into object destruction sequence in your app and fix it.
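A minimal sketch of breaking such a cycle (names are mine):
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;     // the parent owns the child
};

struct Child {
    std::weak_ptr<Parent> parent;     // back-reference does not own: no cycle
};

int main()
{
    std::shared_ptr<Parent> p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;
}   // both objects destroyed here; with shared_ptr in both directions they would leak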
You're obviously holding onto references to your objects within your application. This means that you are, on purpose, keeping things in memory. That means, you don't have a memory leak. A memory leak is when memory is allocated, and then you do not keep a reference to its address.
Basically, you need to look at your design and figure out why you are keeping so many objects and data in memory, and how can you minimize it.
The one possibility that you have a pseudo-memory leak is that you are creating more objects than you think you are. Try putting breakpoints on all statements containing a 'new'. See if your application is constructing more objects than you thought it should, and then read through that code.
The problem is really not so much a memory-leak as it is an issue of your application's design.
I was going to suggest using UMDH if you are on Windows. It is a very powerful tool. Use it to find allocations per transaction/time-period that you expect to be freed, then find who is holding them.
There is more information on this SO answer
Find memory leaks caused by smart pointers
It is not possible to tell which objects hold a shared_ptr from within the program. If you are on Linux, one sure way to debug memory leaks is the Valgrind tool; while it won't directly answer your question, it will tell you where the memory was allocated, which is usually sufficient for fixing the problem. I imagine Windows has comparable tools, but I do not know which one is best.