memory management issues in C++ - c++

I would like to know the common memory management issues associated with C and C++, and how to debug these errors.
Here are a few I know:
1) uninitialized variable use
2) delete a pointer two times
3) writing array out of bounds
4) failing to deallocate memory
5) race conditions
1) malloc can hand back a NULL pointer; check for it. In C++ you also need to cast the returned pointer to the type you want.
2) for strings, you need to allocate an extra byte for the terminating null character.
3) double pointers.
4) (delete and malloc) and (free and new) don't go together; match new with delete and malloc with free.
5) see what the function actually returns (its return code) on failure, and free the memory if it fails.
6) check the size you pass when allocating memory (e.g. the + 1 in malloc(len + 1)).
7) check how you pass a double pointer (**ptr) to a function.
8) check data sizes to avoid calls with undefined behaviour.
9) failure to allocate memory (allocation itself can fail).

Use RAII (Resource Acquisition Is Initialization). You should almost never be using new and delete directly in your code.

Preemptively preventing these errors in the first place:
1) Turn warnings up to the maximum level and treat them as errors to catch uninitialized-variable use. Compilers will frequently issue such warnings, and by having them treated as errors you'll be forced to fix the problem.
2) Use smart pointers. You can find good versions of these in Boost.
3) Use vectors or other STL containers. Don't use arrays unless you're using one of the Boost variety.
4) Again, use a container object or smart pointer to handle this issue for you.
5) Use immutable data structures everywhere you can and place locks around modification points for shared mutable objects.
Dealing with legacy applications
1) Same as above.
2) Use integration tests to see how different components of your application play out. This should find many cases of such errors. Seriously consider having a formal peer review done by another group writing a different segment of the application who would come into contact with your naked pointers.
3) You can overload the new operator so that it allocates a few extra bytes before and after an object. These bytes should then be filled with some easily identifiable value such as 0xDEADBEEF. All you then have to do is check the bytes before and after the object to see if and when your memory is being corrupted by such errors (see the sketch after this list).
4) Track your memory usage by running various components of your application many times. If your memory grows, check for missing deallocations.
5) Good luck. Sorry, but this is one of those things that can work 99.9% of the time and then, boom! The customer complains.
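Here is a minimal sketch of the guard-byte overload from item 3 above (debug-only, C++11; the header layout, constant names, and the use of 32-bit guard words rather than single bytes are my own choices, and the array and sized/aligned overloads are left out):

#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <new>

namespace {
    const unsigned kGuard = 0xDEADBEEF;                    // easily identifiable fill value
    const std::size_t kAlign = alignof(std::max_align_t);  // keep user blocks max-aligned
    const std::size_t kHeader = ((sizeof(std::size_t) + sizeof(unsigned) + kAlign - 1) / kAlign) * kAlign;
}

void* operator new(std::size_t size) {
    // layout: [ size | head guard | padding ][ user block ][ tail guard ]
    unsigned char* raw = static_cast<unsigned char*>(std::malloc(kHeader + size + sizeof(unsigned)));
    if (!raw) throw std::bad_alloc();
    std::memcpy(raw, &size, sizeof size);
    std::memcpy(raw + sizeof size, &kGuard, sizeof kGuard);
    std::memcpy(raw + kHeader + size, &kGuard, sizeof kGuard);  // tail guard (memcpy copes with misalignment)
    return raw + kHeader;
}

void operator delete(void* p) noexcept {
    if (!p) return;
    unsigned char* raw = static_cast<unsigned char*>(p) - kHeader;
    std::size_t size; unsigned head, tail;
    std::memcpy(&size, raw, sizeof size);
    std::memcpy(&head, raw + sizeof size, sizeof head);
    std::memcpy(&tail, raw + kHeader + size, sizeof tail);
    assert(head == kGuard && tail == kGuard && "memory corrupted around this allocation");
    std::free(raw);
}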

In addition to all already said, use valgrind or Bounds Checker to detect all of these errors in your program (except race conditions).

The best technique I know of is to avoid doing pointer operations and dynamic allocation directly. In C++, use reference parameters in preference to pointers. Use STL containers rather than rolling your own lists and containers. Use std::string instead of char*. Failing all that, take Rob K's advice and use RAII wherever you need to do allocations.
For C, there are some similar things you can try to do, but you are pretty much doomed. Get a copy of Lint and pray for mercy.

1) Use a good compiler and set the warning level to max.
2) Wrap new/malloc and delete/free and bookkeep all allocations/deallocations.
3) Replace raw arrays with an array class that does bounds checking (or use std::vector); this is harder to do in C.
4) See 2.
5) This is hard; there are some specialized debuggers such as Jinx that target race conditions, but I don't know how good they are.

Make sure you understand when to place objects on the heap and when on the stack. As a general rule, only put objects on the heap if you must; this will save you lots of trouble. Learn the STL and use the containers provided by the standard library.

Take a look at my earlier answer to "Any reason to overload global new and delete?" You'll find a number of things here that will help with early detection and diagnosis, as well as a list of helpful tools. Most of the tools and techniques can be applied to either C or C++.
It's worth noting that valgrind's memcheck will spot 4 of your items, and helgrind may help spot the last (data races).

One common pattern I use is the following.
I keep the following three private variables in all allocator classes:
size_t news_;
size_t deletes_;
size_t in_use_;
In the allocator constructor, all of these three are initialized to 0.
Then on,
whenever allocator does a new, it increments news_, and
whenever allocator does a delete, it increments deletes_
Based on that, I put lot of asserts in the allocator code as:
assert( news_ - deletes_ == in_use_ );
This works very well for me.
Addition: I place the assert as a precondition and postcondition on all non-trivial methods of the allocator. If the assert blows, then I know I am doing something wrong. If the assert does not blow, with all the testing I can do, then I get reasonably sufficient confidence in the memory management correctness of my program.
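A minimal sketch of the counting pattern described above (the class name, interface, and malloc/free backing store are illustrative, not the original poster's code; in_use_ is derived from an independent piece of bookkeeping, the set of live blocks, so the assert is a genuine cross-check rather than a tautology):

#include <cassert>
#include <cstdlib>
#include <set>

class CountedAllocator {
public:
    CountedAllocator() : news_(0), deletes_(0) {}

    void* allocate(std::size_t bytes) {
        checkInvariant();                        // precondition
        void* p = std::malloc(bytes);
        if (p) { ++news_; live_.insert(p); }
        checkInvariant();                        // postcondition
        return p;
    }

    void deallocate(void* p) {
        checkInvariant();
        if (!p) return;
        assert(live_.count(p) != 0 && "double free or foreign pointer");
        live_.erase(p);
        std::free(p);
        ++deletes_;
        checkInvariant();
    }

private:
    void checkInvariant() const {
        std::size_t in_use_ = live_.size();      // independent bookkeeping
        assert(news_ - deletes_ == in_use_);
    }

    std::size_t news_;
    std::size_t deletes_;
    std::set<void*> live_;
};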

1) uninitialized variable use
Automatically detected by the compiler (turn warnings to full and treat warnings as errors).
2) delete a pointer two times
Don't use raw pointers. All pointers should be inside either a smart pointer or some other form of RAII object that manages the lifetime of the pointer.
3) writing array out of bounds
Don't do it. That's a logic bug.
You can mitigate it by using a container and a method that throws on out-of-bounds access (vector::at()).
4) failing to deallocate memory
Don't use raw pointers. See (2) above.
5) race conditions
Don't allow them. Acquire resources in a consistent priority order to avoid conflicting locks, then lock objects whenever there is a potential for multiple write access (or read access when it is important).
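A minimal sketch of that idea (C++11; Account and transfer are illustrative names, not from the answer above): std::lock acquires both mutexes with a deadlock-avoidance algorithm, so two threads locking the same pair in opposite orders cannot deadlock on each other.

#include <mutex>

struct Account {
    std::mutex m;
    int balance = 0;
};

void transfer(Account& from, Account& to, int amount) {
    if (&from == &to) return;                                  // would otherwise lock the same mutex twice
    std::lock(from.m, to.m);                                   // deadlock-free acquisition of both locks
    std::lock_guard<std::mutex> g1(from.m, std::adopt_lock);   // adopt the already-held locks
    std::lock_guard<std::mutex> g2(to.m, std::adopt_lock);
    from.balance -= amount;                                    // shared state touched only under both locks
    to.balance += amount;
}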

One you forgot:
6) dereferencing pointer after it has been freed.
So far everyone seems to be answering "how to prevent", not "how to debug".
Assuming you're working with code which already has some of these issues, here are some ideas on debugging.
uninitialized variable use
The compiler can detect a lot of this. Initializing RAM to a known value helps in debugging those that escape. In our embedded system, we do a memory test before we leave the bootloader, which leaves all the RAM set to 0x5555. This turns out to be quite useful for debugging: when an integer == 21845, we know it was never initialized.
delete a pointer two times
Visual Studio should detect this at runtime. If you suspect this is happening in other systems, you can debug by wrapping the delete call with custom code, something like the following (pseudocode: it writes a poison marker into the freed block so a second delete trips the assert):
void debug_delete(void* p) { assert(*(void**)p != p); _system_delete(p); *(void**)p = p; }
writing array out of bounds
Visual Studio should detect this at runtime. In other systems, add your own sentinels:
int headZONE = 0xDEAD;
int array[whatever];
int tailZONE = 0xDEAD;
//add this line to check for overruns
//- place it using binary search to zero in on the trouble spot
assert(headZONE == tailZONE && tailZONE == 0xDEAD);
failing to deallocate memory
Watch the heap growth. Record the free heap size before and after points which create and destroy objects; look for unexpected changes. Possibly write your own wrapper around the memory functions to track blocks.
race conditions
aaargh. Make sure you have a logging system with accurate timestamping.

Related

Realloc creates dangling pointers?

I wrote some array code that allocates memory, and each value in the array is of some type. I then have another array that consists of pointers into the first array, used as references.
Both arrays can grow. It uses realloc. Because the second array contains pointers into the first, they are surely not updated when the first array changes (I don't do it manually and there is no GC). Surely all the pointers in the second array are invalid! (They point to memory that was freed by realloc.)
This is the case, right?
This seems like it would make persistent pointers to blocks of memory that may move very dangerous?
What is the standard solution? Don't use "global" pointers? Use pointers to pointers to pointers? I think I could make the second array use **'s and could probably get things to work.
In an MT environment, things are even worse. The block a local pointer refers to may be moved in the middle of an access, the memory changed, and the local pointer is now wrong. (Which, of course, might be solved by preventing the moves with a lock, etc...)
Go with functional programming?
Yes, realloc can invalidate your references. If there is no contiguous space to grow in place, your array will be moved.
Consider using a container such as std::deque.
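A quick sketch of why std::deque helps here: unlike a realloc'd array (or a std::vector), std::deque::push_back never relocates existing elements, so pointers and references to them stay valid as the container grows (iterators are still invalidated, though).

#include <cassert>
#include <deque>

int main() {
    std::deque<int> values;
    values.push_back(1);
    int* first = &values.front();        // a "reference" into the container

    for (int i = 0; i < 100000; ++i)
        values.push_back(i);             // growth does not move element 0

    assert(first == &values.front());    // still points at the same object
    assert(*first == 1);
}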
1) This is the case right?
Yes.
2) This seems like it would make persistent pointers to blocks of memory that may move very dangerous?
Yes.
3) What is the standard solution?
You design your application such that the lifetimes of your objects are well defined, so that you do not refer to them after they are no longer required.
4) In an MT environment, things are even worse. The block a local pointer refers to may be moved in the middle of an access, the memory changed, and the local pointer is now wrong. (Which, of course, might be solved by preventing the moves with a lock, etc...)
Obviously you should never use a pointer that no longer points to its resource. Managing shared resources in an MT environment is non-trivial, and there are a whole bunch of tools and techniques to achieve it.
5) Go with functional programming?
It is always advisable to avoid pointers if you can.
Without a specific problem it is hard to give a specific solution. But in order to achieve "not pointing at disappeared resources" we have various tools to employ: smart pointers, containers, and value semantics. We need to understand how to use all of those, but we also need to design with object lifetime in mind as a major consideration.
Object lifetime should always be an important factor. However, some languages (like Java, for instance) mitigate bad design by providing a "safer" environment. C++, on the other hand, is rather less forgiving. However, it does have a whole bunch of sophisticated tools for the task. That means a steeper learning curve but more efficiency and better control.

Why should C++ programmers minimize use of 'new'?

I stumbled upon Stack Overflow question Memory leak with std::string when using std::list<std::string>, and one of the comments says this:
Stop using new so much. I can't see any reason you used new anywhere you did. You can create objects by value in C++ and it's one of the huge advantages to using the language. You do not have to allocate everything on the heap. Stop thinking like a Java programmer.
I'm not really sure what he means by that.
Why should objects be created by value in C++ as often as possible, and what difference does it make internally? Did I misinterpret the answer?
There are two widely-used memory allocation techniques: automatic allocation and dynamic allocation. Commonly, there is a corresponding region of memory for each: the stack and the heap.
Stack
The stack always allocates memory in a sequential fashion. It can do so because it requires you to release the memory in the reverse order (First-In, Last-Out: FILO). This is the memory allocation technique for local variables in many programming languages. It is very, very fast because it requires minimal bookkeeping and the next address to allocate is implicit.
In C++, this is called automatic storage because the storage is claimed automatically at the end of scope. As soon as execution of current code block (delimited using {}) is completed, memory for all variables in that block is automatically collected. This is also the moment where destructors are invoked to clean up resources.
Heap
The heap allows for a more flexible memory allocation mode. Bookkeeping is more complex and allocation is slower. Because there is no implicit release point, you must release the memory manually, using delete or delete[] (free in C). However, the absence of an implicit release point is the key to the heap's flexibility.
Reasons to use dynamic allocation
Even if using the heap is slower and potentially leads to memory leaks or memory fragmentation, there are perfectly good use cases for dynamic allocation, as it's less limited.
Two key reasons to use dynamic allocation:
You don't know how much memory you need at compile time. For instance, when reading a text file into a string, you usually don't know what size the file has, so you can't decide how much memory to allocate until you run the program.
You want to allocate memory which will persist after leaving the current block. For instance, you may want to write a function string readfile(string path) that returns the contents of a file. In this case, even if the stack could hold the entire file contents, you could not return from a function and keep the allocated memory block.
Why dynamic allocation is often unnecessary
In C++ there's a neat construct called a destructor. This mechanism allows you to manage resources by aligning the lifetime of the resource with the lifetime of a variable. This technique is called RAII and is the distinguishing point of C++. It "wraps" resources into objects. std::string is a perfect example. This snippet:
int main(int argc, char* argv[])
{
    std::string program(argv[0]);
}
actually allocates a variable amount of memory. The std::string object allocates memory using the heap and releases it in its destructor. In this case, you did not need to manually manage any resources and still got the benefits of dynamic memory allocation.
In particular, it implies that in this snippet:
int main(int argc, char* argv[])
{
    std::string* program = new std::string(argv[0]); // Bad!
    delete program;
}
there is unneeded dynamic memory allocation. The program requires more typing (!) and introduces the risk of forgetting to deallocate the memory. It does this with no apparent benefit.
Why you should use automatic storage as often as possible
Basically, the last paragraph sums it up. Using automatic storage as often as possible makes your programs:
faster to type;
faster when run;
less prone to memory/resource leaks.
Bonus points
In the referenced question, there are additional concerns. In particular, the following class:
class Line {
public:
    Line();
    ~Line();
    std::string* mString;
};
Line::Line() {
    mString = new std::string("foo_bar");
}
Line::~Line() {
    delete mString;
}
Is actually a lot more risky to use than the following one:
class Line {
public:
    Line();
    std::string mString;
};
Line::Line() {
    mString = "foo_bar";
    // note: there is a cleaner way to write this.
}
The reason is that std::string properly defines a copy constructor. Consider the following program:
int main()
{
    Line l1;
    Line l2 = l1;
}
Using the original version, this program will likely crash, as it uses delete on the same string twice. Using the modified version, each Line instance will own its own string instance, each with its own memory and both will be released at the end of the program.
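Incidentally, the "cleaner way" hinted at in the comment above is a member initializer list, which constructs the member directly instead of default-constructing it and then assigning:

Line::Line() : mString("foo_bar") {}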
Other notes
Extensive use of RAII is considered a best practice in C++ because of all the reasons above. However, there is an additional benefit which is not immediately obvious. Basically, it's better than the sum of its parts. The whole mechanism composes. It scales.
If you use the Line class as a building block:
class Table
{
    Line borders[4];
};
Then
int main()
{
    Table table;
}
allocates four std::string instances, four Line instances, one Table instance and all the string's contents and everything is freed automagically.
Because the stack is faster and leak-proof
In C++, it takes but a single instruction to allocate space—on the stack—for every local scope object in a given function, and it's impossible to leak any of that memory. That comment intended (or should have intended) to say something like "use the stack and not the heap".
The reason why is complicated.
First, C++ is not garbage collected. Therefore, for every new, there must be a corresponding delete. If you fail to put this delete in, then you have a memory leak. Now, for a simple case like this:
std::string *someString = new std::string(...);
//Do stuff
delete someString;
This is simple. But what happens if "Do stuff" throws an exception? Oops: memory leak. What happens if "Do stuff" issues return early? Oops: memory leak.
And this is for the simplest case. If you happen to return that string to someone, now they have to delete it. And if they pass it as an argument, does the person receiving it need to delete it? When should they delete it?
Or, you can just do this:
std::string someString(...);
//Do stuff
No delete. The object was created on the "stack", and it will be destroyed once it goes out of scope. You can even return the object, thus transferring its contents to the calling function. You can pass the object to functions (typically as a reference or const reference: void SomeFunc(std::string &iCanModifyThis, const std::string &iCantModifyThis)). And so forth.
All without new and delete. There's no question of who owns the memory or who's responsible for deleting it. If you do:
std::string someString(...);
std::string otherString;
otherString = someString;
It is understood that otherString has a copy of the data of someString. It isn't a pointer; it is a separate object. They may happen to have the same contents, but you can change one without affecting the other:
someString += "More text.";
if(otherString == someString) { /*Will never get here */ }
See the idea?
Objects created by new must be eventually deleted lest they leak. The destructor won't be called, memory won't be freed, the whole bit. Since C++ has no garbage collection, it's a problem.
Objects created by value (i.e. on the stack) automatically die when they go out of scope. The destructor call is inserted by the compiler, and the memory is auto-freed upon function return.
Smart pointers like unique_ptr, shared_ptr solve the dangling reference problem, but they require coding discipline and have other potential issues (copyability, reference loops, etc.).
Also, in heavily multithreaded scenarios, new is a point of contention between threads; there can be a performance impact for overusing new. Stack object creation is by definition thread-local, since each thread has its own stack.
The downside of value objects is that they die once the host function returns - you cannot pass a reference to those back to the caller, only by copying, returning or moving by value.
C++ doesn't employ any memory manager of its own. Other languages like C# and Java have a garbage collector to handle the memory.
C++ implementations typically use operating system routines to allocate the memory, and too much new/delete can fragment the available memory.
With any application, if the memory is frequently being used, it's advisable to preallocate it and release it when no longer required.
Improper memory management can lead to memory leaks, and those are really hard to track down. So using stack objects within the scope of a function is a proven technique.
The downside of using stack objects is that it creates multiple copies of objects on returning, passing to functions, etc. However, smart compilers are well aware of these situations and optimize them well for performance.
It's really tedious in C++ if memory is allocated and released in two different places. The responsibility for release is always a question, and mostly we rely on some commonly accessible pointers, stack objects (as much as possible) and techniques like auto_ptr (RAII objects).
The best thing is that you have control over the memory, and the worst thing is that you will not have any control over the memory if the application uses improper memory management. The crashes caused by memory corruption are the nastiest and hardest to trace.
I see that a few important reasons for doing as few news as possible have been missed:
Operator new has a non-deterministic execution time
Calling new may or may not cause the OS to allocate a new physical page to your process. This can be quite slow if you do it often. Or it may already have a suitable memory location ready; we don't know. If your program needs to have consistent and predictable execution time (like in a real-time system or game/physics simulation), you need to avoid new in your time-critical loops.
Operator new is an implicit thread synchronization
Yes, you heard me. Your OS needs to make sure your page tables are consistent and as such calling new will cause your thread to acquire an implicit mutex lock. If you are consistently calling new from many threads you are actually serialising your threads (I've done this with 32 CPUs, each hitting on new to get a few hundred bytes each, ouch! That was a royal p.i.t.a. to debug.)
The rest, such as slow, fragmentation, error prone, etc., have already been mentioned by other answers.
Pre-C++17:
Because it is prone to subtle leaks even if you wrap the result in a smart pointer.
Consider a "careful" user who remembers to wrap objects in smart pointers:
foo(shared_ptr<T1>(new T1()), shared_ptr<T2>(new T2()));
This code is dangerous because there is no guarantee that either shared_ptr is constructed before either T1 or T2. Hence, if one of new T1() or new T2() fails after the other succeeds, then the first object will be leaked because no shared_ptr exists to destroy and deallocate it.
Solution: use make_shared.
Post-C++17:
This is no longer a problem: C++17 imposes a constraint on the order of these operations, in this case ensuring that each call to new() must be immediately followed by the construction of the corresponding smart pointer, with no other operation in between. This implies that, by the time the second new() is called, it is guaranteed that the first object has already been wrapped in its smart pointer, thus preventing any leaks in case an exception is thrown.
A more detailed explanation of the new evaluation order introduced by C++17 was provided by Barry in another answer.
Thanks to @Remy Lebeau for pointing out that this is still a problem under C++17 (although less so): the shared_ptr constructor can fail to allocate its control block and throw, in which case the pointer passed to it is not deleted.
Solution: use make_shared.
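For illustration, the make_shared form would look something like this (assuming foo takes two shared_ptrs as in the snippet above); each object is allocated and wrapped in a single step, and the control block is allocated together with the object, so there is no window in which a raw pointer is left unowned:

foo(std::make_shared<T1>(), std::make_shared<T2>());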
To a great extent, that's someone elevating their own weaknesses to a general rule. There's nothing wrong per se with creating objects using the new operator. What there is some argument for is that you have to do so with some discipline: if you create an object you need to make sure it's going to be destroyed.
The easiest way of doing that is to create the object in automatic storage, so C++ knows to destroy it when it goes out of scope:
{
    File foo = File("foo.dat");
    // Do things
}
Now, observe that when you fall off that block after the end-brace, foo is out of scope. C++ will call its destructor automatically for you. Unlike Java, you don't need to wait for the garbage collection to find it.
Had you written
{
    File* foo = new File("foo.dat");
you would want to match it explicitly with
    delete foo;
}
or even better, allocate your File * as a "smart pointer". If you aren't careful about that it can lead to leaks.
The answer itself makes the mistaken assumption that if you don't use new you don't allocate on the heap; in fact, in C++ you don't know that. At most, you know that a small amount of memory, say one pointer, is certainly allocated on the stack. However, consider if the implementation of File is something like:
class File {
private:
    FileImpl* fd;
public:
    File(String fn) { fd = new FileImpl(fn); }
};
Then FileImpl will still be allocated on the heap.
And yes, you'd better be sure to have
~File(){ delete fd ; }
in the class as well; without it, you'll leak memory from the heap even if you didn't apparently allocate on the heap at all.
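A sketch of the same class using a smart pointer member (C++11; FileImpl here is a stand-in for whatever the real implementation class is): the heap allocation is still there, but the hand-written destructor is no longer needed and cannot be forgotten.

#include <memory>
#include <string>

class FileImpl {
public:
    explicit FileImpl(const std::string& /*fn*/) { /* open the file, etc. */ }
};

class File {
public:
    explicit File(const std::string& fn) : fd(new FileImpl(fn)) {}
    // No hand-written ~File: the unique_ptr deletes the FileImpl automatically.
private:
    std::unique_ptr<FileImpl> fd;
};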
new() shouldn't be used as little as possible. It should be used as carefully as possible. And it should be used as often as necessary as dictated by pragmatism.
Allocation of objects on the stack, relying on their implicit destruction, is a simple model. If the required scope of an object fits that model then there's no need to use new(), with the associated delete() and checking of NULL pointers.
In the case where you have lots of short-lived objects allocation on the stack should reduce the problems of heap fragmentation.
However, if the lifetime of your object needs to extend beyond the current scope then new() is the right answer. Just make sure that you pay attention to when and how you call delete() and the possibilities of NULL pointers, using deleted objects and all of the other gotchas that come with the use of pointers.
When you use new, objects are allocated to the heap. It is generally used when you anticipate expansion. When you declare an object such as,
Class var;
it is placed on the stack.
You will always have to call delete on an object that you placed on the heap with new. This opens up the potential for memory leaks. Objects placed on the stack are not prone to memory leaks!
One notable reason to avoid overusing the heap is for performance -- specifically involving the performance of the default memory management mechanism used by C++. While allocation can be quite quick in the trivial case, doing a lot of new and delete on objects of non-uniform size without strict order leads not only to memory fragmentation, but it also complicates the allocation algorithm and can absolutely destroy performance in certain cases.
That's the problem that memory pools were created to solve, allowing you to mitigate the inherent disadvantages of traditional heap implementations while still allowing you to use the heap as necessary.
Better still, though, to avoid the problem altogether. If you can put it on the stack, then do so.
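A toy sketch of the memory-pool idea mentioned above (the class name and interface are mine, and it ignores alignment beyond requiring blockSize to be a multiple of whatever alignment the stored objects need): all blocks are the same size, so allocation and deallocation are O(1) list operations and the pool itself cannot fragment.

#include <cstddef>
#include <vector>

class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount) {
        for (std::size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(&storage_[i * blockSize]);
    }

    void* allocate() {
        if (freeList_.empty()) return 0;          // pool exhausted
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }

    void deallocate(void* p) {
        freeList_.push_back(static_cast<char*>(p));
    }

private:
    std::vector<char> storage_;                   // one contiguous slab
    std::vector<char*> freeList_;                 // addresses of currently free blocks
};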
I tend to disagree with the idea of using new "too much". Though the original poster's use of new with system classes is a bit ridiculous. (int *i; i = new int[9999];? really? int i[9999]; is much clearer.) I think that is what was getting the commenter's goat.
When you're working with system objects, it's very rare that you'd need more than one reference to the exact same object. As long as the value is the same, that's all that matters. And system objects don't typically take up much space in memory (one byte per character in a string, for example). And if they do, the libraries should be designed to take that memory management into account (if they're written well). In these cases (all but one or two of the news in his code), new is practically pointless and only serves to introduce confusion and potential for bugs.
When you're working with your own classes/objects, however (e.g. the original poster's Line class), then you have to begin thinking about the issues like memory footprint, persistence of data, etc. yourself. At this point, allowing multiple references to the same value is invaluable - it allows for constructs like linked lists, dictionaries, and graphs, where multiple variables need to not only have the same value, but reference the exact same object in memory. However, the Line class doesn't have any of those requirements. So the original poster's code actually has absolutely no needs for new.
I think the poster meant to say you do not have to allocate everything on the heap rather than on the stack.
Basically, objects are allocated on the stack (if the object size allows, of course) because of the cheap cost of stack allocation, rather than on the heap, which involves quite some work by the allocator and adds verbosity because then you have to manage the data allocated on the heap.
Two reasons:
It's unnecessary in this case. You're making your code needlessly more complicated.
It allocates space on the heap, and it means that you have to remember to delete it later, or it will cause a memory leak.
Many answers have gone into various performance considerations. I want to address the comment which puzzled OP:
Stop thinking like a Java programmer.
Indeed, in Java, as explained in the answer to this question,
You use the new keyword when an object is being explicitly created for the first time.
but in C++, objects of type T are created like so: T{} (or T{ctor_argument1,ctor_arg2} for a constructor with arguments). That's why usually you just have no reason to want to use new.
So, why is it ever used at all? Well, for two reasons:
You need to create many values the number of which is not known at compile time.
Due to limitations of the C++ implementation on common machines - to prevent a stack overflow caused by allocating too much space when creating values the regular way.
Now, beyond what the comment you quoted implied, you should note that even those two cases above are covered well enough without you having to "resort" to using new yourself:
You can use container types from the standard libraries which can hold a runtime-variable number of elements (like std::vector).
You can use smart pointers, which give you a pointer similar to new, but ensure that memory gets released where the "pointer" goes out of scope.
and for this reason, it is an official item in the C++ community Coding Guidelines to avoid explicit new and delete: Guideline R.11.
The core reason is that objects on the heap are always more difficult to use and manage than simple values. Writing code that is easy to read and maintain is always the first priority of any serious programmer.
Another scenario is when the library we are using provides value semantics and makes dynamic allocation unnecessary. std::string is a good example.
For object-oriented code, however, using a pointer - which means using new to create it beforehand - is a must. In order to simplify the complexity of resource management, we have dozens of tools to make it as simple as possible, such as smart pointers. The object-based paradigm or the generic paradigm assumes value semantics and requires less or no new, just as the posters elsewhere have stated.
Traditional design patterns, especially those mentioned in GoF book, use new a lot, as they are typical OO code.
new is the new goto.
Recall why goto is so reviled: while it is a powerful, low-level tool for flow control, people often used it in unnecessarily complicated ways that made code difficult to follow. Furthermore, the most useful and easiest-to-read patterns were encoded in structured programming statements (e.g. for or while); the ultimate effect is that code where goto is the appropriate tool is rather rare. If you are tempted to write goto, you're probably doing things badly (unless you really know what you're doing).
new is similar: it is often used to make things unnecessarily complicated and harder to read, and the most useful usage patterns have been encoded into various classes. Furthermore, if you need a usage pattern for which there isn't already a standard class, you can write your own classes that encode it!
I would even argue that new is worse than goto, due to the need to pair new and delete statements.
Like goto, if you ever think you need to use new, you are probably doing things badly — especially if you are doing so outside of the implementation of a class whose purpose in life is to encapsulate whatever dynamic allocations you need to do.
One more point to all the above correct answers, it depends on what sort of programming you are doing. Kernel developing in Windows for example -> The stack is severely limited and you might not be able to take page faults like in user mode.
In such environments, new or C-like API calls are preferred and even required.
Of course, this is merely an exception to the rule.
new allocates objects on the heap. Otherwise, objects are allocated on the stack. Look up the difference between the two.
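In code, the difference in a nutshell (a trivial illustration, not taken from the answer above):

void example() {
    int a = 42;             // automatic storage ("the stack"): destroyed at end of scope
    int* b = new int(42);   // dynamic storage ("the heap"): lives until you delete it
    delete b;               // forget this and the int leaks
}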

Compacting garbage collector implementation in C++0x

I'm implementing a compacting garbage collector for my own personal use in C++0x, and I've got a question. Obviously the mechanics of the collector depend upon moving objects, and I've been wondering how to implement this in terms of the smart pointer types that point to it. I've been thinking about either pointer-to-pointer in the pointer type itself, or, the collector maintains a list of pointers that point to each object so that they can be modified, removing the need for a double de-ref when accessing the pointer but adding some extra overhead during collection and additional memory overhead. What's the best way to go here?
Edit: My primary concern is for speedy allocation and access. I'm not concerned with particularly efficient collections or other maintenance, because that's not really what the GC is intended for.
There's nothing straightforward about grafting extra GC onto C++, let alone a compacting algorithm. It isn't clear exactly what you're trying to do and how it will interact with the rest of the C++ code.
I have actually written a GC in C++ which works with existing C++ code, and it had a compactor at one stage (though I dropped it because it was too slow). But there are many nasty semantic problems. I mentioned to Bjarne only a few weeks ago that C++ lacks the operator required to do it properly, and the situation is that it is unlikely to ever exist because it has limited utility.
What you actually need is a "re-address-me" operator. What happens is that you do not actually move objects around; you just use mmap to change the object's address. This is much faster, and, in effect, it is using the VM features to provide handles.
Without this facility you have to have a way to perform an overlapping move of an object, which you cannot do in C++ efficiently: you'd have to move to a temporary first. In C, it is much easier, you can use memmove. At some stage all the pointers to or into the moved objects have to be adjusted.
Using handles does not solve this problem, it just reduces the problem from arbitrary-sized objects to constant-sized ones: these are easier to manage in an array, but the same problem exists: you have to manage the storage. If you remove lots of handles from the array randomly... you still have a problem with fragmentation.
So don't bother with handles, they don't work.
This is what I did in Felix: you call new(shape, collector) T(args). Here the shape is a descriptor of the type, including a list of offsets which contain (GC) pointers, and the address of a routine to finalise the object (by default, it calls the destructor).
It also contains a flag saying if the object can be moved with memmove. If the object is big or immobile, it is allocated by malloc. If the object is small and mobile, it is allocated in an arena, provided there is space in the arena.
The arena is compacted by moving all the objects in it, and using the shape information to globally adjust all the pointers to or into these objects. Compaction can be done incrementally.
The downside for a C++ programmer is the need to construct a correct shape object to pass. This doesn't bother me because I'm implementing a language which can generate the shape information automatically.
Now: the key point is: to do compaction, you must use a precise collector. Compaction cannot work with a conservative collector. This is very important. It is fine to allow some leakage if you see a value that looks like a pointer but happens to be an integer: some object won't be collected, but this is usually no big deal. But for compaction you have to adjust the pointers, and you'd better not change that integer: so you have to know for sure when something is a pointer, so your collector has to be precise: the shape must be known.
In Ocaml this is relatively simple: everything is either a pointer or integer and the low bit is used at run time to tell. Objects pointed at have a code telling the type, and there are only a few types: either a scalar (don't scan it) or an aggregate (scan it, it only contains integers or pointers).
This is a pretty straight-forward question so here's a straight-forward answer:
Mark-and-sweep (and occasionally mark-and-compact to avoid heap fragmentation) is the fastest when it comes to allocation and access (avoiding double de-refs). It's also very easy to implement. Since you're not worried about collection performance impact (mark-and-sweep tends to freeze up the process nondeterministically), this should be the way to go.
Implementation details found at:
http://www.brpreiss.com/books/opus5/html/page424.html#secgarbagemarksweep
http://www.brpreiss.com/books/opus5/html/page428.html
A nursery generation will give you the best possible allocation performance because it is just a pointer bump.
You could implement pointer updates without using double indirection by using techniques like a shadow stack, but this will be slow and very error-prone if you're writing this C++ code by hand.

Force garbage collection/compaction with malloc()

I have a C++ program that benchmarks various algorithms on input arrays of different length. It looks more or less like this:
# (1)
for k in range(4..20):
    # (2)
    input = generate 2**k random points
    for variant in variants:
        benchmark the following call
            run variant on input array
# (3)
Is it possible to reset the whole heap management at (2) to the state it had at (1)? All memory allocated on the heap that was allocated during the program is guaranteed to be freed at (3).
I am using g++ 4.3 on Linux.
Edit: I understand that there is no real garbage collection in C/C++. I want to force the memory allocator to join adjacent empty chunks of memory in its free list at (2).
If you want the test runs to start from the same heap state, you can run them in their own processes, created by fork().
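A rough sketch of that fork() approach (POSIX only; run_isolated, run_benchmark and k are placeholders for the questioner's own code): each run executes in a freshly forked child, so every run starts from an identical heap state, and the child's heap vanishes when it exits.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

void run_isolated(int k) {
    (void)k;                           // k would be forwarded to the real benchmark call below
    pid_t pid = fork();
    if (pid == 0) {                    // child: starts from the parent's heap state at (1)
        // run_benchmark(k);           // placeholder for the actual benchmarked call
        _exit(EXIT_SUCCESS);           // the child's entire heap disappears with the process
    } else if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);      // parent simply waits for the run to finish
    } else {
        std::perror("fork");
    }
}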
I think there's a simple solution to your problem - you could move the outer loop out of your application and into a shell script or another application, and pass the k (and any other) parameters through the command line to the benchmarked app. This way you'll be sure all executions have similar starting conditions.
There is no way of doing this using standard C++ short of implementing your own versions of new and delete with their own heap management. An alternative would be to not use arrays but std::vector instead - you can then use a custom allocator to do the heap management.
What do you mean? There is no garbage collection in C, and certainly no compaction.
To "reset the state of the heap", you have to call free() for every malloc() call. And as I understand your code, you do that already.
Compaction is pretty much impossible. Unlike higher-level languages like Java or C#, you can not change the address of an object, because any pointers to it would be invalidated.
There's no automatic way, you have to manually delete whatever is on the heap to get back to the state of (1).
There are a few pieces of garbage collection code out there. Look at perl/python/lua/ruby/mono/parrot/boehm/pike/slate/self/io etc.
Also look at alloca() and dynamic arrays. Also consider using structs to implement your own destructor, or using GCC attributes to call free when a variable goes out of scope.

General guidelines to avoid memory leaks in C++ [closed]

What are some general tips to make sure I don't leak memory in C++ programs? How do I figure out who should free memory that has been dynamically allocated?
I thoroughly endorse all the advice about RAII and smart pointers, but I'd also like to add a slightly higher-level tip: the easiest memory to manage is the memory you never allocated. Unlike languages like C# and Java, where pretty much everything is a reference, in C++ you should put objects on the stack whenever you can. As I've seen several people (including Dr Stroustrup) point out, the main reason why garbage collection has never been popular in C++ is that well-written C++ doesn't produce much garbage in the first place.
Don't write
Object* x = new Object;
or even
shared_ptr<Object> x(new Object);
when you can just write
Object x;
Use RAII
Forget garbage collection (use RAII instead). Note that even a garbage collector can leak (if you forget to "null" some references in Java/C#), and a garbage collector won't help you dispose of non-memory resources (if you have an object which acquired a handle to a file, the file won't be freed automatically when the object goes out of scope unless you do it manually in Java, or use the "dispose" pattern in C#).
Forget the "one return per function" rule. This is good advice in C to avoid leaks, but it is outdated in C++ because of its use of exceptions (use RAII instead).
And while the "sandwich pattern" is good advice in C, it too is outdated in C++ because of its use of exceptions (use RAII instead).
This post may seem repetitive, but in C++ the most basic pattern to know is RAII.
Learn to use smart pointers, whether from Boost, TR1, or even the lowly (but often efficient enough) auto_ptr (but you must know its limitations).
RAII is the basis of both exception safety and resource disposal in C++, and no other pattern (sandwich, etc.) will give you both (and most of the time, it will give you none).
See below a comparison of RAII and non RAII code:
void doSandwich()
{
    T* p = new T();
    // do something with p
    delete p; // leak if the processing of p throws or returns early
}
void doRAIIDynamic()
{
    std::auto_ptr<T> p(new T()); // you can use other smart pointers, too
    // do something with p
    // WON'T EVER LEAK, even in case of exceptions, returns, breaks, etc.
}
void doRAIIStatic()
{
    T p;
    // do something with p
    // WON'T EVER LEAK, even in case of exceptions, returns, breaks, etc.
}
About RAII
To summarize (after the comment from Ogre Psalm33), RAII relies on three concepts:
Once the object is constructed, it just works! Do acquire resources in the constructor.
Object destruction is enough! Do free resources in the destructor.
It's all about scopes! Scoped objects (see the doRAIIStatic example above) will be constructed at their declaration, and will be destroyed the moment execution exits the scope, no matter how it exits (return, break, exception, etc.).
This means that in correct C++ code, most objects won't be constructed with new, and will be declared on the stack instead. And for those constructed using new, all will be somehow scoped (e.g. attached to a smart pointer).
As a developer, this is very powerful indeed as you won't need to care about manual resource handling (as done in C, or for some objects in Java which makes intensive use of try/finally for that case)...
Edit (2012-02-12)
"scoped objects ... will be destructed ... no matter the exit" that's not entirely true. there are ways to cheat RAII. any flavour of terminate() will bypass cleanup. exit(EXIT_SUCCESS) is an oxymoron in this regard.
– wilhelmtell
wilhelmtell is quite right about that: There are exceptional ways to cheat RAII, all leading to the process abrupt stop.
Those are exceptional ways because C++ code is not littered with terminate, exit, etc., and in the case of exceptions, we do want an unhandled exception to crash the process and core dump its memory image as is, not after cleaning up.
But we must still know about those cases because, while they rarely happen, they can still happen.
(who calls terminate or exit in casual C++ code?... I remember having to deal with that problem when playing with GLUT: This library is very C-oriented, going as far as actively designing it to make things difficult for C++ developers like not caring about stack allocated data, or having "interesting" decisions about never returning from their main loop... I won't comment about that).
Instead of managing memory manually, try to use smart pointers where applicable.
Take a look at the Boost lib, TR1, and smart pointers.
Also, smart pointers are now part of the C++ standard, as of C++11.
You'll want to look at smart pointers, such as boost's smart pointers.
Instead of
int main()
{
    Object* obj = new Object();
    //...
    delete obj;
}
boost::shared_ptr will automatically delete once the reference count is zero:
int main()
{
    boost::shared_ptr<Object> obj(new Object());
    //...
    // destructor destroys when reference count is zero
}
Note my last comment, "when reference count is zero", which is the coolest part. So if you have multiple users of your object, you won't have to keep track of whether the object is still in use. Once nobody refers to your shared pointer, it gets destroyed.
This is not a panacea, however. Though you can access the base pointer, you wouldn't want to pass it to a third-party API unless you were confident about what it was doing. Lots of times, you're "posting" stuff to some other thread for work to be done AFTER the creating scope is finished. This is common with PostThreadMessage in Win32:
void foo()
{
    boost::shared_ptr<Object> obj(new Object());
    // Simplified here
    PostThreadMessage(...., (LPARAM)obj.get());
    // Destructor destroys! The pointer sent to PostThreadMessage is invalid! Zohnoes!
}
As always, use your thinking cap with any tool...
Read up on RAII and make sure you understand it.
Bah, you young kids and your new-fangled garbage collectors...
Very strong rules on "ownership": which object or part of the software has the right to delete the object. Clear comments and wise variable names make it obvious whether a pointer "owns" or is "just look, don't touch". To help decide who owns what, follow as much as possible the "sandwich" pattern within every subroutine or method:
create a thing
use that thing
destroy that thing
Sometimes it's necessary to create and destroy in widely different places; I think it's hard to avoid that.
In any program requiring complex data structures, I create a strict, clear-cut tree of objects containing other objects, using "owner" pointers. This tree models the basic hierarchy of application-domain concepts. For example, a 3D scene owns objects, lights, textures. At the end of the rendering, when the program quits, there's a clear way to destroy everything.
Many other pointers are defined as needed whenever one entity needs to access another, to scan over arrays or whatever; these are the "just looking" pointers. For the 3D scene example: an object uses a texture but does not own it; other objects may use that same texture. The destruction of an object does not invoke destruction of any textures.
Yes, it's time consuming, but that's what I do. I rarely have memory leaks or other problems. But then I work in the limited arena of high-performance scientific, data-acquisition and graphics software. I don't often deal with transactions like in banking and e-commerce, event-driven GUIs or highly networked asynchronous chaos. Maybe the new-fangled ways have an advantage there!
Most memory leaks are the result of not being clear about object ownership and lifetime.
The first thing to do is to allocate on the Stack whenever you can. This deals with most of the cases where you need to allocate a single object for some purpose.
If you do need to 'new' an object then most of the time it will have a single obvious owner for the rest of its lifetime. For this situation I tend to use a bunch of collections templates that are designed for 'owning' objects stored in them by pointer. They are implemented with the STL vector and map containers but have some differences:
These collections can not be copied or assigned to. (once they contain objects.)
Pointers to objects are inserted into them.
When the collection is deleted the destructor is first called on all objects in the collection. (I have another version where it asserts if destructed and not empty.)
Since they store pointers you can also store inherited objects in these containers.
My beef with the STL is that it is so focused on value objects, while in most applications objects are unique entities that do not have the meaningful copy semantics required for use in those containers.
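A minimal sketch of such an owning container (illustrative only, not the poster's actual code; in C++11 and later, std::vector<std::unique_ptr<T>> gives the same behaviour with no custom class):

#include <cstddef>
#include <vector>

template <typename T>
class OwningVector {
public:
    OwningVector() {}
    ~OwningVector() {
        // deletes every stored object; T needs a virtual destructor if
        // derived objects are stored through a base-class pointer
        for (std::size_t i = 0; i < items_.size(); ++i)
            delete items_[i];
    }

    void push_back(T* p) { items_.push_back(p); }           // takes ownership
    T* operator[](std::size_t i) const { return items_[i]; }
    std::size_t size() const { return items_.size(); }

private:
    OwningVector(const OwningVector&);                       // not copyable once it owns objects
    OwningVector& operator=(const OwningVector&);
    std::vector<T*> items_;
};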
Great question!
If you are using C++ and you are developing real-time, CPU- and memory-bound applications (like games), you need to write your own memory manager.
I think the best you can do is merge some interesting work by various authors. I can give you some hints:
Fixed-size allocators are heavily discussed everywhere on the net.
Small-object allocation was introduced by Alexandrescu in 2001 in his excellent book "Modern C++ Design".
A great advancement (with source code distributed) can be found in an amazing article in Game Programming Gems 7 (2008) named "High Performance Heap Allocator", written by Dimitar Lazarov.
A great list of resources can be found in this article.
Do not start writing a naive, useless allocator by yourself... DOCUMENT YOURSELF first.
One technique that has become popular with memory management in C++ is RAII. Basically you use constructors/destructors to handle resource allocation. Of course there are some other obnoxious details in C++ due to exception safety, but the basic idea is pretty simple.
The issue generally comes down to one of ownership. I highly recommend reading the Effective C++ series by Scott Meyers and Modern C++ Design by Andrei Alexandrescu.
There's already a lot about how to not leak, but if you need a tool to help you track leaks take a look at:
BoundsChecker under VS
MMGR C/C++ lib from FluidStudio
http://www.paulnettle.com/pub/FluidStudios/MemoryManagers/Fluid_Studios_Memory_Manager.zip (it overrides the allocation methods and creates a report of the allocations, leaks, etc.)
Use smart pointers everywhere you can! Whole classes of memory leaks just go away.
Share and know memory ownership rules across your project. Using the COM rules makes for the best consistency ([in] parameters are owned by the caller, callee must copy; [out] params are owned by the caller, callee must make a copy if keeping a reference; etc.)
valgrind is a good tool to check your programs memory leakages at runtime, too.
It is available on most flavors of Linux (including Android) and on Darwin.
If you write unit tests for your programs, you should get in the habit of systematically running valgrind on the tests. It will potentially catch many memory leaks at an early stage. It is also usually easier to pinpoint them in simple tests than in a full piece of software.
Of course this advice stays valid for any other memory checking tool.
Also, don't use manually allocated memory if there's a standard library class that does it for you (e.g. vector). If you violate that rule, make sure you have a virtual destructor.
If you can't/don't use a smart pointer for something (although that should be a huge red flag), type this into your code:
allocate
if allocation succeeded:
{ // scope
    deallocate()
}
That's obvious, but make sure you type it before you type any other code in the scope.
A frequent source of these bugs is when you have a method that accepts a reference or pointer to an object but leaves ownership unclear. Style and commenting conventions can make this less likely.
Let the case where the function takes ownership of the object be the special case. In all situations where this happens, be sure to write a comment next to the function in the header file indicating this. You should strive to make sure that in most cases the module or class which allocates an object is also responsible for deallocating it.
Using const can help a lot in some cases. If a function will not modify an object, and does not store a reference to it that persists after it returns, accept a const reference. From reading the caller's code it will be obvious that your function has not accepted ownership of the object. You could have had the same function accept a non-const pointer, and the caller may or may not have assumed that the callee accepted ownership, but with a const reference there's no question.
Do not use non-const references in argument lists. It is very unclear when reading the caller code that the callee may have kept a reference to the parameter.
I disagree with the comments recommending reference-counted pointers. This usually works fine, but when you have a bug and it doesn't work, it is painful to debug, especially if your destructor does something non-trivial, such as in a multithreaded program. Definitely try to adjust your design to not need reference counting if it's not too hard.
Tips in order of Importance:
-Tip#1 Always remember to declare your destructors "virtual" in classes meant to be used polymorphically as base classes.
-Tip#2 Use RAII
-Tip#3 Use boost's smartpointers
-Tip#4 Don't write your own buggy smart pointers, use Boost (on a project I'm on right now I can't use Boost, and I've suffered having to debug my own smart pointers. I would definitely not take the same route again, but then again right now I can't add Boost to our dependencies)
-Tip#5 If it's casual / non-performance-critical work (as in games with thousands of objects), look at Thorsten Ottosen's Boost pointer containers
-Tip#6 Find a leak detection header for your platform of choice such as Visual Leak Detection's "vld" header
If you can, use boost shared_ptr and standard C++ auto_ptr. Those convey ownership semantics.
When you return an auto_ptr, you are telling the caller that you are giving them ownership of the memory.
When you return a shared_ptr, you are telling the caller that you have a reference to it and they take part of the ownership, but it isn't solely their responsibility.
These semantics also apply to parameters. If the caller passes you an auto_ptr, they are giving you ownership.
Others have mentioned ways of avoiding memory leaks in the first place (like smart pointers). But a profiling and memory-analysis tool is often the only way to track down memory problems once you have them.
Valgrind memcheck is an excellent free one.
For MSVC only, add the following to the top of each .cpp file:
#ifdef _DEBUG
#define new DEBUG_NEW
#endif
Then, when debugging with VS2003 or greater, you will be told of any leaks when your program exits (it tracks new/delete). It's basic, but it has helped me in the past.
valgrind (only available for *nix platforms) is a very nice memory checker
If you are going to manage your memory manually, you have two cases:
I created the object (perhaps indirectly, by calling a function that allocates a new object), I use it (or a function I call uses it), then I free it.
Somebody gave me the reference, so I should not free it.
If you need to break any of these rules, please document it.
It is all about pointer ownership.
Try to avoid allocating objects dynamically. As long as classes have appropriate constructors and destructors, use a variable of the class type, not a pointer to it, and you avoid dynamic allocation and deallocation because the compiler will do it for you.
Actually, that's also the mechanism used by "smart pointers" and referred to as RAII by some of the other writers ;-).
When you pass objects to other functions, prefer reference parameters over pointers. This avoids some possible errors.
Declare parameters const, where possible, especially pointers to objects. That way objects can't be freed "accidentially" (except if you cast the const away ;-))).
Minimize the number of places in the program where you do memory allocation and deallocation. E.g. if you allocate or free the same type several times, write a function for it (or a factory method ;-)).
This way you can create debug output (which addresses are allocated and deallocated, ...) easily, if required.
Use a factory function to allocate objects of several related classes from a single function.
If your classes have a common base class with a virtual destructor, you can free all of them using the same function (or static method).
Check your program with tools like purify (unfortunately many $/€/...).
You can intercept the memory allocation functions and see if there are some memory zones not freed upon program exit (though it is not suitable for all the applications).
It can also be done at compile time by replacing operators new and delete and other memory allocation functions.
For example check in this site [Debugging memory allocation in C++]
Note: there is a trick for the delete operator too, something like this:
#define DEBUG_DELETE PrepareDelete(__LINE__,__FILE__); delete
#define delete DEBUG_DELETE
You can store the file name and line number in some variables, so the overloaded delete operator will know the place it was called from. This way you can have a trace of every delete and malloc in your program. At the end of the memory checking sequence you should be able to report which allocated blocks of memory were not 'deleted', identifying them by filename and line number, which is I guess what you want.
You could also try something like BoundsChecker under Visual Studio which is pretty interesting and easy to use.
We wrap all our allocation functions with a layer that appends a brief string at the front and a sentinel flag at the end. So for example you'd have a call to myalloc(pszSomeString, iSize, iAlignment); or new("description", iSize) MyObject(); which internally allocates the specified size plus enough space for your header and sentinel. Of course, don't forget to comment this out for non-debug builds! It takes a little more memory to do this, but the benefits far outweigh the costs.
This has three benefits - first, it allows you to easily and quickly track what code is leaking, by doing quick searches for code allocated in certain 'zones' that is not cleaned up when those zones should have been freed. It can also be useful to detect when a boundary has been overwritten, by checking that all sentinels are intact. This has saved us numerous times when trying to find those well-hidden crashes or array missteps. The third benefit is in tracking the use of memory to see who the big players are - a collation of certain descriptions in a MemDump tells you when 'sound' is taking up way more space than you anticipated, for example.
C++ is designed with RAII in mind. There is really no better way to manage memory in C++, I think.
But be careful not to allocate very big chunks (like buffer objects) on local scope. It can cause stack overflows and, if there is a flaw in bounds checking while using that chunk, you can overwrite other variables or return addresses, which leads to all kinds security holes.
One of the only examples of allocating and destroying in different places is thread creation (the parameter you pass).
But even this case is easy.
Here is the function/method creating a thread:
struct myparams {
    int x;
    std::vector<double> z;
};
std::auto_ptr<myparams> param(new myparams(x, ...));
// Release the ownership in case thread creation is successful
if (0 == pthread_create(&th, NULL, th_func, param.get())) param.release();
...
Here instead is the thread function:
extern "C" void* th_func(void* p) {
try {
std::auto_ptr<myparams> param((myparams*)p);
...
} catch(...) {
}
return 0;
}
Pretty easy, isn't it? In case the thread creation fails, the resource will be freed (deleted) by the auto_ptr; otherwise the ownership will be passed to the thread.
What if the thread is so fast that after creation it releases the resource before the
param.release();
gets called in the main function/method? Nothing! Because release() merely tells the auto_ptr to forget about the deallocation; it never touches the object itself.
C++ memory management is easy, isn't it?
Cheers,
Ema!
Manage memory the same way you manage other resources (handles, files, db connections, sockets...). GC would not help you with them either.
Exactly one return from any function. That way you can do deallocation there and never miss it.
It's too easy to make a mistake otherwise:
A* a = new A();
if (Bad()) { delete a; return; }
B* b = new B();
if (Bad()) { delete a; delete b; return; }
... // etc.