C++ memory management paradigms

I'm moving from C to C++11 and trying to figure out the memory management paradigm for C++11 programs (or any modern languages with built-in exceptions). Specifically, I'm having a crack at game development where running out of memory is a real concern.
In C, I'm used to checking the return value of malloc, and I generally use custom allocators.
With C++, I'm quite confused; though I like how the STL containers were built allowing custom allocators. Since the STL containers all manage their own memory, simply adding an element to a vector could throw a std::bad_alloc. How do I guard against such things? I have heard wrapping all throwing calls in try/catch blocks can be prohibitively expensive.
However, allowing the exception to travel up the callstack would leave a bunch of functions that never fully executed, which would lead to some really tricky code. I.e., if A->B->C->D is a callstack, D throws, and A catches, then B, C, and D could potentially have created some weird problems by not being able to finish execution normally.
Additionally, the nothrow form of new seems to allow very C-like code, though then I don't see the benefit over a plain malloc.
What are some best practices out there for writing exception-safe C++ code that guards against out-of-memory issues?
Edit: a relevant answer on programmers.stackexchange argues for exception-less C++ design on consoles. I'm not sure whether those arguments still apply to the 8th-generation consoles.

My answer will be geared more towards game development since that is my background and that is part of what you are interested in. Different types of applications will have different requirements.
Games generally allocate all dynamic memory up front and stick within that budget. Consoles in particular have hard memory limits and most games will want to use all of it.
There are a few reasons for allocating everything up front.
One, performance. Memory allocation is slow. You want to avoid it at all costs. If you allocate everything up front, you can then write custom, high performance memory allocators like Pool Allocators, Stack Allocators, etc, that just grab memory from your pre-allocated buffer. It's important to choose the best allocator for the task at hand.
Two, you'll know quickly if there isn't enough memory for your game. In development, you'll crash if you run out of memory and will need to adjust usage, but the final release shouldn't crash because you've allocated up front and stuck within your memory budgets.
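To make point one concrete, here is a rough sketch of the kind of fixed-size pool allocator the answer mentions (my own illustration, not the answerer's code; names and sizes are made up): a free list threaded through a buffer that is allocated exactly once up front.

#include <cstddef>

// Minimal fixed-size pool: hands out blocks of BlockSize bytes from one buffer
// owned by the pool, using an intrusive free list. Sketch only: no alignment
// tuning per block, not thread-safe.
template <std::size_t BlockSize, std::size_t BlockCount>
class PoolAllocator {
public:
    PoolAllocator() {
        // Thread a free list through the raw buffer.
        for (std::size_t i = 0; i < BlockCount; ++i) {
            void* block = buffer_ + i * BlockSize;
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    }
    void* allocate() {
        if (!free_list_) return nullptr;          // out of budget: caller decides what to do
        void* block = free_list_;
        free_list_ = *static_cast<void**>(block); // pop the head of the free list
        return block;
    }
    void deallocate(void* block) {
        *static_cast<void**>(block) = free_list_; // push the block back
        free_list_ = block;
    }
private:
    static_assert(BlockSize >= sizeof(void*), "block must be able to hold a free-list pointer");
    alignas(std::max_align_t) unsigned char buffer_[BlockSize * BlockCount];
    void* free_list_ = nullptr;
};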
For exceptions, many (but not all) games disable exceptions, again for performance reasons. In fact some console compilers don't even support exceptions. Then you will either need to use an STL library with no exceptions, or implement your own containers. Many game teams choose to implement their own for performance reasons as well as to better integrate them with custom memory allocators.
That said, dynamic memory allocation, STL, and exceptions are probably perfectly fine for smaller personal projects/games, but keep in mind what will be necessary for large, high performance, real-time games.
For exception safety, I would definitely use RAII. That is its purpose. Also I would recommend using smart pointers like std::unique_ptr and std::shared_ptr for memory management. Coupled with RAII, if your constructors throw, memory will be freed.
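A minimal sketch of that last point (hypothetical Level class, C++11): if a constructor throws partway through, members that were already constructed are destroyed automatically, so holding allocations in std::unique_ptr means nothing leaks.

#include <memory>
#include <stdexcept>
#include <vector>

class Level {
public:
    Level()
        : geometry_(new std::vector<float>(1024)) {
        loadTextures();  // may throw; geometry_ is still released on the way out
    }
private:
    void loadTextures() { throw std::runtime_error("texture missing"); }
    std::unique_ptr<std::vector<float>> geometry_;
};

int main() {
    try {
        Level level;                      // construction fails...
    } catch (const std::exception&) {
        // ...but the memory owned by geometry_ was freed automatically.
    }
}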

Use destructors to clean up automatically when scopes are exited. That's called RAII, Resource Acquisition Is Initialization, although the acronym is not exactly the best one could devise. All the standard containers etc. clean up automatically.
In languages like C# and Java which are based on garbage collection, you instead pepper the code with try blocks and “using” statements. Java just got that (part of the try syntax IIRC); it's been in C# from the beginning (keyword using); in Python it's called with; and C++ doesn't have it and doesn't need it. I once created a WITH macro for C++, a clever little hack, thinking I would be using it all the time, but I haven't used it once except just to try it out right after creating it: in C++ RAII does it all.
Summing up: use RAII, i.e., use destructors, and just let those exceptions propagate.
Regarding memory exhaustion, ordinarily that's just regarded as “we're done for”, nothing to do except terminate as orderly as possible.
But it doesn't hurt to maybe set aside a little buffer which can be deallocated when memory is exhausted, so as to have some working memory for cleanup work.
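One hedged way to express that idea in standard C++ is std::set_new_handler: install a handler that releases an emergency reserve so the cleanup path still has memory to work with. The buffer size and names below are purely illustrative.

#include <new>

namespace {
    // Emergency reserve, released the first time allocation fails.
    char* g_reserve = new char[64 * 1024];

    void onOutOfMemory() {
        if (g_reserve) {
            delete[] g_reserve;             // give the heap some room back for cleanup
            g_reserve = nullptr;
            return;                         // operator new retries the allocation
        }
        std::set_new_handler(nullptr);      // nothing left: let new throw bad_alloc
    }
}

int main() {
    std::set_new_handler(onOutOfMemory);
    // ... run the program; the first allocation failure frees the reserve ...
}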
C++ doesn't differentiate between hard exceptions (fatal like e.g. memory exhaustion) and soft exceptions (general failures that are not fatal).

Related

RAII vs. Garbage Collector

I recently watched a great talk by Herb Sutter about "Leak Free C++..." at CppCon 2016 where he talked about using smart pointers to implement RAII (Resource acquisition is initialization) - Concepts and how they solve most of the memory leaks issues.
Now I was wondering. If I strictly follow RAII rules, which seems to be a good thing, why would that be any different from having a garbage collector in C++? I know that with RAII the programmer is in full control of when the resources are freed again, but is that in any way beneficial compared to just having a garbage collector? Would it really be less efficient? I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
If I strictly follow RAII rules, which seems to be a good thing, why would that be any different from having a garbage collector in C++?
While both deal with allocations, they do so in completely different manners. If you are referring to a GC like the one in Java: it adds its own overhead, removes some of the determinism from the resource-release process, and handles circular references.
You can implement GC though for particular cases, with much different performance characteristics. I implemented one once for closing socket connections, in a high-performance/high-throughput server (just calling the socket close API took too long and borked the throughput performance). This involved no memory, but network connections, and no cyclic dependency handling.
I know that with RAII the programmer is in full control of when the resources are freed again, but is that in any way beneficial compared to just having a garbage collector?
This determinism is a feature that GC simply doesn't allow. Sometimes you want to be able to know that after some point, a cleanup operation has been performed (deleting a temporary file, closing a network connection, etc).
In such cases GC doesn't cut it which is the reason in C# (for example) you have the IDisposable interface.
I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
Can be ... depends on the implementation.
Garbage collection solves certain classes of resource problems that RAII cannot solve. Basically, it boils down to circular dependencies where you do not identify the cycle beforehand.
This gives it two advantages. First, there are going to be certain types of problem that RAII cannot solve. These are, in my experience, rare.
The bigger one is that it lets the programmer be lazy and not care about memory resource lifetimes and certain other resources you don't mind delayed cleanup on. When you don't have to care about certain kinds of problems, you can care more about other problems. This lets you focus on the parts of your problem you want to focus on.
The downside is that without RAII, managing resources whose lifetime you want constrained is hard. GC languages basically reduce you to either having extremely simple scope-bound lifetimes or require you to do resource management manually, like in C, with manually stating you are done with a resource. Their object lifetime system is strongly tied to GC, and doesn't work well for tight lifetime management of large complex (yet cycle-free) systems.
To be fair, resource management in C++ takes a lot of work to do properly in such large complex (yet cycle-free) systems. C# and similar languages just make it a touch harder, in exchange they make the easy case easy.
Most GC implementations also force non-locality on full-fledged classes; creating contiguous buffers of general objects, or composing general objects into one larger object, is not something that most GC implementations make easy. On the other hand, C# permits you to create value-type structs with somewhat limited capabilities. In the current era of CPU architecture, cache friendliness is key, and the lack of locality GC forces is a heavy burden. As these languages have a bytecode runtime for the most part, in theory the JIT environment could move commonly used data together, but more often than not you just get a uniform performance loss due to frequent cache misses compared to C++.
The last problem with GC is that deallocation is indeterminate, and can sometimes cause performance problems. Modern GCs make this less of a problem than it has been in the past.
RAII and GC solve problems in completely different directions. They are completely different, despite what some would say.
Both address the issue that managing resources is hard. Garbage Collection solves it by making it so that the developer doesn't need to pay as much attention to managing those resources. RAII solves it by making it easier for developers to pay attention to their resource management. Anyone who says they do the same thing has something to sell you.
If you look at recent trends in languages, you're seeing both approaches being used in the same language because, frankly, you really need both sides of the puzzle. You're seeing lots of languages which use garbage collection of sorts so that you don't have to pay attention to most objects, and those languages also offer RAII solutions (such as Python's with statement) for the times you really want to pay attention to them.
C++ offers RAII through constructors/destructors and GC through shared_ptr (If I may make the argument that refcounting and GC are in the same class of solutions because they're both designed to help you not need to pay attention to lifespan)
Python offers RAII through with and GC through a refcounting system plus a garbage collector
C# offers RAII through IDisposable and using and GC through a generational garbage collector
The patterns are cropping up in every language.
Notice that RAII is a programming idiom, while GC is a memory management technique. So we are comparing apples with oranges.
But we can restrict RAII to its memory management aspects only and compare that to GC techniques.
The main difference between so called RAII based memory management techniques (which really means reference counting, at least when you consider memory resources and ignore the other ones such as files) and genuine garbage collection techniques is the handling of circular references (for cyclic graphs).
With reference counting, you need to code specially for them (using weak references or other stuff).
In many useful cases (think of std::vector<std::map<std::string,int>>) the reference counting is implicit (since it can only be 0 or 1) and is practically omitted, but the constructor and destructor functions (essential to RAII) behave as if there were a reference-count bit (which is practically absent). In std::shared_ptr there is a genuine reference counter. Memory is still implicitly manually managed (with new and delete triggered inside constructors and destructors), but that "implicit" delete (in destructors) gives the illusion of automatic memory management. However, calls to new and delete still happen (and they cost time).
BTW the GC implementation may (and often does) handle circularity in some special way, but you leave that burden to the GC (e.g. read about Cheney's algorithm).
Some GC algorithms (notably generational copying garbage collectors) don't bother releasing memory for individual objects; it is released en masse after the copy. In practice the OCaml GC (or the SBCL one) can be faster than a genuine C++ RAII programming style (for some, not all, kinds of algorithms).
Some GCs provide finalization (mostly used to manage non-memory external resources like files), but you'll rarely use it (since most values consume only memory resources). The disadvantage is that finalization does not offer any timing guarantee. Practically speaking, a program using finalization treats it as a last resort (e.g. closing files should still happen more or less explicitly outside of finalization, with finalization only as a backup).
You still can have memory leaks with GC (and also with RAII, at least when used improperly), e.g. when a value is kept in some variable or some field but will never be used in the future. They just happen less often.
I recommend reading the garbage collection handbook.
In your C++ code, you might use Boehm's GC or Ravenbrook's MPS or code your own tracing garbage collector. Of course using a GC is a tradeoff (there are some inconvenience, e.g. non-determinism, lack of timing guarantees, etc...).
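For what it's worth, a minimal use of Boehm's collector looks roughly like this (a hedged sketch: it assumes the libgc development package is installed and the program is linked with -lgc):

#include <gc.h>      // Boehm-Demers-Weiser conservative collector (libgc)
#include <cstdio>

int main() {
    GC_INIT();                                     // initialise the collector
    for (int i = 0; i < 100000; ++i) {
        // Memory from GC_MALLOC is reclaimed by tracing; no free/delete needed.
        int* p = static_cast<int*>(GC_MALLOC(16 * sizeof(int)));
        p[0] = i;
    }
    GC_gcollect();                                 // force a collection (optional)
    std::printf("done\n");
}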
I don't think that RAII is the ultimate way of dealing with memory in all cases. On several occasions, coding your program against a genuinely efficient GC implementation (think of OCaml or SBCL) can be simpler (to develop) and faster (to execute) than coding it in fancy RAII style in C++17. In other cases it is not. YMMV.
As an example, if you code a Scheme interpreter in C++17 with the fanciest RAII style, you would still need to code (or use) an explicit GC inside it (because a Scheme heap has circularities). And most proof assistants are coded in GC-ed languages, often functional ones (the only one I know of which is coded in C++ is Lean), for good reasons.
BTW, I'm interested in finding such a C++17 implementation of Scheme (but less interested in coding it myself), preferably with some multi-threading ability.
One of the problems with garbage collectors is that it is hard to predict program performance.
With RAII you know the exact time a resource will go out of scope; you will clear some memory at that point, and it will take some time. But unless you are a master of your garbage collector's settings, you cannot predict when cleanup will happen.
For example: cleaning up a bunch of small objects can be done more effectively with a GC because it can free a large chunk at once, but it will not be a fast operation, it is hard to predict when it will occur, and because of that "large chunk cleanup" it will take some processor time and can affect your program's performance.
Roughly speaking: the RAII idiom may be better for latency and jitter; a garbage collector may be better for the system's throughput.
"Efficient" is a very broad term, in sense of development efforts RAII is typically less efficient than GC, but in terms of performance GC is typically less efficient than RAII. However it is possible to provide contr-examples for both cases. Dealing with generic GC when you have very clear resource (de)allocation patters in managed languages can be rather troublesome, just like the code using RAII can be surprisingly inefficient when shared_ptr is used for everything for no reason.
Garbage collection and RAII each support one common construct for which the other is not really suitable.
In a garbage-collected system, code may efficiently treat references to immutable objects (such as strings) as proxies for the data contained therein; passing around such references is almost as cheap as passing around "dumb" pointers, and is faster than making a separate copy of the data for each owner, or trying to track ownership of a shared copy of the data. In addition, garbage-collected systems make it easy to create immutable object types by writing a class which creates a mutable object, populating it as desired, and providing accessor methods, all while refraining from leaking references to anything that might mutate it once the constructor finishes. In cases where references to immutable objects need to be widely copied but the objects themselves don't, GC beats RAII hands down.
On the other hand, RAII is excellent at handling situations where an object needs to acquire exclusive services from outside entities. While many GC systems allow objects to define "Finalize" methods and request notification when they are found to be abandoned, and such methods may sometimes manage to release outside services that are no longer needed, they are seldom reliable enough to provide a satisfactory way of ensuring timely release of outside services. For management of non-fungible outside resources, RAII beats GC hands down.
The key difference between the cases where GC wins versus those where RAII wins is that GC is good at managing fungible memory that can be freed on an as-needed basis, but poor at handling non-fungible resources. RAII is good at handling objects with clear ownership, but bad at handling ownerless immutable data holders which have no real identity apart from the data they contain.
Because neither GC nor RAII handles all scenarios well, it would be helpful for languages to provide good support for both of them. Unfortunately, languages which focus on one tend to treat the other as an afterthought.
The main part of the question about whether one or the other is "beneficial" or more "efficient" cannot be answered without giving lots of context and arguing about the definitions of these terms.
Beyond that, you can basically feel the tension of the ancient "Is Java or C++ the better language?" flamewar crackling in the comments. I wonder what an "acceptable" answer to this question could look like, and am curious to see it eventually.
But one point about a possibly important conceptual difference has not yet been pointed out: With RAII, you are tied to the thread that calls the destructor. If your application is single threaded (and even though it was Herb Sutter who stated that The Free Lunch Is Over: Most software today effectively still is single-threaded), then a single core may be busy with handling the cleanups of objects that are no longer relevant for the actual program...
In contrast to that, the garbage collector usually runs in its own thread, or even multiple threads, and is thus (to some extent) decoupled from the execution of the other parts.
(Note: Some answers already tried to point out application patterns with different characteristics, mentioned efficiency, performance, latency and throughput - but this specific point was not mentioned yet)
RAII uniformly deals with anything that is describable as a resource. Dynamic allocations are one such resource, but they are by no means the only one, and arguably not the most important one. Files, sockets, database connections, gui feedback and more are all things that can be managed deterministically with RAII.
GCs only deal with dynamic allocations, relieving the programmer of worrying about the total volume of allocated objects over the lifetime of the program (they only have to care about the peak concurrent allocation volume fitting in memory).
RAII and garbage collection are intended to solve different problems.
When you use RAII you leave an object on the stack which sole purpose is to clean up whatever it is you want managed (sockets, memory, files, etc.) on leaving the scope of the method. This is for exception-safety, not just garbage collection, which is why you get responses about closing sockets and freeing mutexes and the like. (Okay, so no one mentioned mutexes besides me.) If an exception is thrown, stack-unwinding naturally cleans up the resources used by a method.
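For instance (a standard-library sketch, not taken from the answer): std::lock_guard is exactly this pattern applied to a mutex; the lock is released whether the function returns normally or throws.

#include <mutex>
#include <stdexcept>

std::mutex g_mutex;
int g_sharedCounter = 0;

void update(bool fail) {
    std::lock_guard<std::mutex> lock(g_mutex);   // acquired here
    ++g_sharedCounter;
    if (fail)
        throw std::runtime_error("something went wrong");
    // the mutex is released at scope exit, thrown or not
}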
Garbage collection is the programmatic management of memory, though you could "garbage-collect" other scarce resources if you'd like. Explicitly freeing them makes more sense 99% of the time. The only reason to use RAII for something like a file or socket is you expect the use of the resource to be complete when the method returns.
Garbage collection also deals with objects that are heap-allocated, when for instance a factory constructs an instance of an object and returns it. Having persistent objects in situations where control must leave a scope is what makes garbage collection attractive. But you could use RAII in the factory so if an exception is thrown before you return, you don't leak resources.
I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
That's perfectly doable - and, in fact, is actually done - with RAII (or with plain malloc/free). You see, you don't necessarily always use the default allocator, which deallocates piecemeal only. In certain contexts you use custom allocators with different kinds of functionality. Some allocators have the in-built ability of freeing everything in some allocator region, all at once, without having to iterate individual allocated elements.
Of course, you then get into the question of when to deallocate everything - whether the use of those allocators (or the slab of memory with which they're associated) has to be RAIIed or not, and how.
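A hedged sketch of such an allocator (names are illustrative, not from the answer): a linear arena hands out memory with a bump pointer and releases the whole region in one shot, with no per-object bookkeeping.

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <new>

// Linear (bump) arena: allocation is a pointer increment, and everything is
// released at once by reset() or the destructor. Sketch only: not thread-safe.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : begin_(static_cast<char*>(std::malloc(bytes))), cur_(begin_), end_(begin_ + bytes) {
        if (!begin_) throw std::bad_alloc();
    }
    ~Arena() { std::free(begin_); }

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(cur_);
        std::uintptr_t aligned = (raw + align - 1) & ~(static_cast<std::uintptr_t>(align) - 1);
        char* p = reinterpret_cast<char*>(aligned);
        if (p + size > end_) return nullptr;     // arena exhausted
        cur_ = p + size;
        return p;
    }
    void reset() { cur_ = begin_; }              // "free" everything in O(1)

private:
    char* begin_;
    char* cur_;
    char* end_;
};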

C++ classes with dynamic allocation in cuda?

I have a basic question about porting C++ classes to CUDA, and I cannot find a direct, clear answer about what seems to be a pain point in the end.
I think one would agree that C++ code for the host will very often use new/delete operators in constructors and destructors. Thinking about easily porting C++ code to CUDA, there are a few postings claiming that it is 'easy', or at least getting easier, and the main reason given is examples with __host__ __device__ decorators. It is also not difficult to find postings pointing out that dynamic allocation on the device usually implies a serious penalty in performance. So, what is one supposed to do with C++ classes in CUDA?
Adding decorators is not going to change the dynamic memory allocation that happens in the core of the constructors and destructors. It seems one does need to rewrite the C++ classes without new/delete. In my experience it was really impressive how badly a new/delete class behaves compared with static allocation, for obvious reasons, but it is really bad, like going back to a processor 20 years old... So, what do people who have ported C++ applications with dynamic allocation do (for more than the handful of doubles in an array that can be counted on your fingers)?
The standard approach is to change the scope and life cycle of objects within the code so that it isn't necessary to continuously create and destroy objects as part of computations on the device. Memory allocation in most distributed memory architectures (CUDA, HPC clusters, etc) is expensive, and the usual solution is to use it as sparingly as possible and amortise the cost of the operation by extending the lifetime of objects.
Ideally, create all the objects you need at the beginning of the program, even if that means pre-allocating a pool of objects which will be consumed as the program runs. That is more efficient than ad-hoc memory allocation and deallocation. It also avoids problems with memory fragmentation, which can become an issue on GPU hardware where page sizes are rather large.
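A hedged host-side sketch of that idea using the CUDA runtime API (assuming the CUDA toolkit is available and the program links against cudart; names and sizes are illustrative): allocate one device buffer up front and sub-allocate from it with a bump pointer, so no cudaMalloc happens inside the computation loop.

#include <cuda_runtime.h>
#include <cstddef>

// One big device allocation at startup; kernels get slices of it.
struct DevicePool {
    char* base = nullptr;
    std::size_t capacity = 0;
    std::size_t offset = 0;

    bool init(std::size_t bytes) {
        capacity = bytes;
        return cudaMalloc(reinterpret_cast<void**>(&base), bytes) == cudaSuccess;
    }
    void* grab(std::size_t bytes) {
        bytes = (bytes + 255) & ~std::size_t(255);   // keep sub-allocations 256-byte aligned
        if (offset + bytes > capacity) return nullptr;
        void* p = base + offset;                      // bump-pointer sub-allocation
        offset += bytes;
        return p;
    }
    void reset() { offset = 0; }                      // reuse the pool for the next step/frame
    void destroy() { cudaFree(base); base = nullptr; }
};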

Implementing a memory manager in multithreaded C/C++ with dynamically sized memory pool?

Background: I'm developing a multiplatform framework of sorts that will be used as a base for both game and util/tool creation. The basic idea is to have a pool of workers, each executing in its own thread. (Furthermore, workers will also be able to spawn at runtime.) Each thread will have its own memory manager.
I have long thought about creating my own memory management system, and I think this project will be perfect to finally give it a try. I find such a system fitting because the typical uses of this framework will often require real-time memory allocation (games and texture-editing tools).
Problems:
No generally applicable solution(?) - The framework will be used for both games/visualization (not AAA, but indie/play) and tool/application creation. My understanding is that for game development it is usual (at least for console games) to allocate a big chunk of memory only once in the initialization, and then use this memory internally in the memory manager. But is this technique applicable in a more general application?
In a game you could theoretically know how much memory your scenes and resources will need, but for example, a photo editing application will load resources of all different sizes... So in the latter case a more dynamic memory "chunk size" would be needed? Which leads me to the next problem:
Moving already allocated data and keeping valid pointers - Normally when allocating on the heap, you will acquire a simple pointer to the memory chunk. In a custom memory manager, as far as I understand it, a similar approach is to return a pointer to somewhere free in the pre-allocated chunk. But what happens if the pre-allocated chunk is too small and needs to be resized or even defragmented? The data would need to be moved around in memory and the old pointers would become invalid. Is there a way to transparently wrap these pointers in some way, but still use them normally "outside" the memory management as if they were usual C++ pointers?
Third party libraries - If there is no way to transparently use a custom memory management system for all memory allocation in the application, every third-party library I'm linking with will still use the "old" OS memory allocations internally. I have learned that it is common for libraries to expose functions to set custom allocation functions that the library will use, but it is not guaranteed that every library I use will have this ability.
Questions: Is it possible and feasible to implement a memory manager that can use a dynamically sized memory chunk pool? If so, how would defragmentation and memory resize work, without breaking currently in-use pointers? And finally, how is such a system best implemented to work with third party libraries?
I'm also thankful for any related reading material, papers, articles and whatnot! :-)
As someone who has previously written many memory managers and heap implementations for AAA games over the last few generations of consoles, let me tell you it's simply not worth it anymore.
Your information is old - back in the GameCube era [circa 2003] we used to do what you said: allocate a large chunk and carve it up manually using custom algorithms tweaked for each game.
Once virtual memory came along (Xbox era) and games got more complicated [and so made more allocations and became multithreaded], address fragmentation made this untenable. So we switched to custom allocators that handle certain types of requests only - for instance physical memory, lock-free small-block low-fragmentation heaps, or thread-local caches of recently used blocks.
As built-in memory managers get better it becomes harder to beat them - certainly in the general case, and it's a close thing even for specific use cases. The Doug Lea allocator [or whatever the mainstream C++ Linux compilers come with now] and the latest Windows low-fragmentation heaps are really very good, and you'd do far better investing your time elsewhere.
I've got spreadsheets at work measuring all kinds of metrics for a whole load of allocators - all the big-name ones and a fair few I've collected over the years. And basically, whilst the specialist allocators can win on a few metrics [lowest overhead per alloc, spatial proximity, lowest fragmentation, etc.], for overall metrics the mainstream ones are simply the best.
As a user of your library, my personal preferred option is that you just allocate memory when you need it. Use operator new/the new operator, and I can use the standard C++ mechanisms to replace those and use my custom heap (if I indeed have one), or alternatively I can use platform-specific ways of replacing your allocations (e.g. XMemAlloc on Xbox). I don't need tagging [capturing callstacks, which I can do if I want, is far superior]. Lower down the list comes you giving me an interface that you'll call when you need to allocate memory - this is just a pain for you to implement and I'll probably just pass it on to operator new anyway. The worst thing you can do is 'know best' and create your own custom heaps. If memory allocation performance is a problem, I'd much rather you share the solution the whole game uses than roll your own.
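As an illustration of that mechanism (a sketch, not the answerer's code): the global allocation functions are replaceable, so a library that simply calls new can be redirected to whatever heap the game ships with. myHeapAlloc/myHeapFree below stand in for a real custom heap.

#include <cstdlib>
#include <new>

// Hypothetical custom heap entry points; a real game would route these to its
// own allocator or a platform API such as XMemAlloc.
void* myHeapAlloc(std::size_t size) { return std::malloc(size); }
void  myHeapFree(void* p)           { std::free(p); }

// Replaceable global allocation functions (C++11 signatures).
void* operator new(std::size_t size) {
    if (size == 0) size = 1;                  // new must return a unique pointer even for 0 bytes
    if (void* p = myHeapAlloc(size)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept {
    myHeapFree(p);
}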
If you're looking to write your own malloc()/free(), etc., you probably should start by checking out the source code for existing systems such as dlmalloc. This is a hard problem, though, for what it's worth. Writing your own malloc library is Hard. Beating existing general purpose malloc libraries will be Even Harder.
And now, here is the correct answer: DON'T IMPLEMENT YET ANOTHER MEMORY MANAGER.
It is incredibly hard to implement a memory manager that does not fail under different kinds of usage patterns and events. You may be able to build a specific manager that works well under YOUR usage patterns, but to write one which works well for MANY users is a full-time job that almost no one has really done well. Worse, it is fantastically easy to implement a memory manager that works great 99% of the time and then 1% of the time crash or suddenly consume most or all available memory on your system due to unexpected heap fragmentation.
I say this as someone who has written multiple memory managers, watched multiple people write their own memory managers, and watched even more people attempt to write memory managers and fail. This problem is deceptively difficult, not because it's hard to write templated allocators and generic types with inheritance and such, but because the other solutions given in this thread tend to fail under corner-case load behavior. Once you start supporting byte alignments (as all real-world allocators must), heap fragmentation rears its ugly head. Cute heuristics that work great for small test programs fail miserably when subjected to large, real-world programs.
And once you get it working, someone else will need: cookies to verify against memory stomps; heap usage reporting; memory pools; pools of pools; memory leak tracking and reporting; heap auditing; chunk splitting and coalescing; thread-local storage; lookasides; CPU and process-level page faulting and protection; setting and checking and clearing "free-memory" patterns aka 0xdeadbeef; and whatever else I can't think of off the top of my head.
Writing yet another memory manager falls squarely under the heading of Premature Optimization. Since there are multiple free, good, memory managers with thousands of hours of development and testing behind them, you have to justify spending the cost of your own time in such a way that the result would provide some sort of measurable improvement over what other people have done, and you can use, for free.
If you are SURE you want to implement your own memory manager (and hopefully you are NOT sure after reading this message), read through the dlmalloc sources in detail, then read through the tcmalloc sources in detail as well, THEN make sure you understand the performance trade-offs in implementing a thread-safe versus a thread-unsafe memory manager, and why the naive implementations tend to give poor performance results.
Prepare more than one solution and let the user of the framework adopt any particular one. Policy classes to the generic allocator you develop would do this nicely.
A nice way to get around this is to wrap up pointers in a class with overloaded * operator. Make the internal data of that class only an index to the memory pool. Now, you can just change the index quickly after a background thread copies the data over.
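A rough sketch of that idea (hypothetical names, not from the answer): the "pointer" stores only an index, and the pool is free to move or compact its storage behind the scenes because every dereference goes through the pool.

#include <cstddef>
#include <vector>

// Pool that owns the storage; objects may be moved/compacted internally as
// long as index-to-slot resolution is kept up to date.
template <typename T>
class Pool {
public:
    std::size_t create(const T& value) {
        slots_.push_back(value);
        return slots_.size() - 1;
    }
    T& get(std::size_t index) { return slots_[index]; }
private:
    std::vector<T> slots_;   // storage can be defragmented; indices stay valid
};

// "Smart handle" that behaves like a pointer but only stores an index.
template <typename T>
class Handle {
public:
    Handle(Pool<T>& pool, std::size_t index) : pool_(&pool), index_(index) {}
    T& operator*()  const { return pool_->get(index_); }
    T* operator->() const { return &pool_->get(index_); }
private:
    Pool<T>* pool_;
    std::size_t index_;
};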
Most good C++ libraries support allocators and you should implement one. You can also overload the global new so your version gets used. And keep in mind that you generally won't need to think about a library allocating or deallocating a large amount of data, which is generally a responsibility of client code.
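A minimal C++11 allocator that standard containers can pick up through std::allocator_traits looks roughly like this (a sketch: here it just forwards to malloc/free, where a real one would forward to your pool):

#include <cstdlib>
#include <new>
#include <vector>

template <typename T>
struct FrameworkAllocator {
    using value_type = T;

    FrameworkAllocator() = default;
    template <typename U>
    FrameworkAllocator(const FrameworkAllocator<U>&) {}

    T* allocate(std::size_t n) {
        if (void* p = std::malloc(n * sizeof(T)))   // forward to your memory manager here
            return static_cast<T*>(p);
        throw std::bad_alloc();
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const FrameworkAllocator<T>&, const FrameworkAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const FrameworkAllocator<T>&, const FrameworkAllocator<U>&) { return false; }

// Containers then use it like so:
using FrameworkVector = std::vector<int, FrameworkAllocator<int>>;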

What is the philosophy of managing memory in C++? [closed]

What is the design factor in managing memory in C++?
For example: why is there a memory leak when a program does not release a memory object before it exits? Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation? I know I am being a bit naive, but what is the design philosophy of memory management in C++ with respect to classes, structs, methods, interfaces, abstract classes?
Certainly one cannot humanly remember every spec of C++. What is the core driving design of memory management?
What is the core driving design of memory management?
In almost all cases, you should use automatic resource management. Basically:
Wherever it is practical to do so, prefer creating objects with automatic storage duration (that is, on the stack, or function-local)
Whenever you must use dynamic allocation, use Scope-Bound Resource Management (SBRM; more commonly called Resource Acquisition is Initialization or RAII).
Rarely do you have to write your own RAII container: the C++ standard library provides a whole set of containers (e.g., vector and map) and smart pointers like shared_ptr (from C++ TR1, C++0x, and Boost) work very well for most common situations.
Basically, in really good C++ code, you should never call delete yourself [1] to clean up memory that you've allocated: memory management and resource cleanup should always be encapsulated in a container of some kind.
[1] Obviously, the exception here is when you implement an RAII container yourself, since that container must be responsible for cleaning up whatever it owns.
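For example (a sketch of the same advice, not from the answer): prefer containers and smart pointers that own the allocation, rather than pairing new with a delete you must remember.

#include <memory>
#include <string>
#include <vector>

void manual() {
    int* data = new int[256];
    // ... if anything throws here, the delete[] below never runs ...
    delete[] data;
}

void encapsulated() {
    std::vector<int> data(256);                                     // container owns the memory
    std::unique_ptr<std::string> name(new std::string("player"));   // so does the smart pointer
    // no delete anywhere: both are released on scope exit, even if an exception propagates
}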
It's not entirely clear whether you're asking about the philosophy of what's built into C++, or how to use it in a way that prevents memory leaks.
The primary way to prevent memory leaks (and other resource leaks) is known as either RAII (Resource Acquisition Is Initialization) or SBRM (Scope Bound Resource Management). Either way, the basic idea is pretty simple: since objects with auto storage duration are automatically destroyed on exit from their scope, you allocate memory in the ctor of such an object, and free the memory in its dtor.
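The pattern itself is just a constructor/destructor pair. A hedged sketch of a hand-rolled RAII buffer (in practice you would usually reach for std::vector or std::unique_ptr instead):

#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}  // acquire in the ctor
    ~Buffer() { delete[] data_; }                                     // release in the dtor

    Buffer(const Buffer&) = delete;              // forbid copies so ownership stays unique
    Buffer& operator=(const Buffer&) = delete;

    char* data() { return data_; }
    std::size_t size() const { return size_; }

private:
    char* data_;
    std::size_t size_;
};

void useBuffer() {
    Buffer scratch(4096);
    // ... work with scratch.data(); the memory is freed when scratch goes out
    // of scope, whether we return normally or an exception propagates ...
}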
As far as C++ itself goes, it doesn't really have a philosophy. It provides mechanisms, but leaves it up to the programmer to decide which mechanism is appropriate for the situation at hand. That's often RAII. Sometimes it might be a garbage collector. Still other times it might be various sorts of custom memory managers. Of course, sometimes it's a combination of two or all three of those, or something else entirely.
Edit: As to why C++ does things this way, it's fairly simple: almost any other choice will render the language unsuited to at least some kinds of problems -- including a number for which C++ was quite clearly intended to be suitable. One of the most obvious of these was being able to run on a "bare" machine with a minimum of support structure (e.g., no OS)
C and C++ take the position that you, the programmer, know when you are done with memory you have allocated. This avoids the need for the language runtime to know much of anything about what has been allocated, and the associated tasks (reference counting, garbage collection, etc.) needed to "clean up" when necessary.
At the crux is the idea that: if you allocate it, you must free it. (malloc/free, new/delete)
There are several methods of helping manage this so that you don't have to remember explicitly. RAII, and the smart pointer and container implementations that provide it, are extremely useful and powerful for managing memory based on object creation and destruction. They will save you hours of time.
why is there a memory leak when a program does not release a memory object before it exits?
Well, the OS typically cleans up your mess for you. However, what happens when your program runs for an arbitrary amount of time and you have leaked so much memory that you can't allocate any more? You crash, and that's not good.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
No. Some programming languages have automated memory management, some do not. There are benefits and drawbacks to both models. Languages with manual memory management allow you to say when and where resources are allocated and released, i.e., it is very deterministic. A relative beginner will however inevitably write code that leaks while they are getting used to dealing with memory management.
Automated schemes are great for the programmer, but you don't get the same level of determinism. If I am writing a hardware driver, this may not be a good model for me. If I were writing a simple GUI, then I probably don't care about some objects persisting for a bit longer than they need to, so I will take an automated management scheme every time. That's not to say that GC'd languages are only for 'simple' tasks; some tasks just require tighter control over your resources (not all platforms have 4GB+ of memory for you to play around in).
There are patterns that you can use to help you with memory management. The canonical example would be RAII (Resource Acquisition Is Initialization).
Philosophy-wise, I think there are two things that lead to C++ not having a garbage collector (which seems to be what you're getting at):
Compatibility with C. C++ tries to be very compatible with C, for better or worse. C didn't have garbage collection, so C++ doesn't, at least not by default. I guess you could sum this up as "historical reasons".
The "you only pay for what you use" philosophy. C++ tries to avoid imposing any overhead above C unless you explicitly ask for it. So you only pay the price of exceptions if you actually throw one, etc. There's an argument that garbage collection would impose a cost whenever an object is allocated on the heap so it couldn't be the default behavior in C++.
Note that there is actually quite a bit of debate about whether garbage collection is actually more or less efficient than manual memory management. The better garbage collectors generally want to be able to move stuff around though, and C++ has pointer arithmetic (again, inherited from C) that makes it very hard to make such a collector work with C++.
Here's Stroustrup's (not really direct) answer to "Why doesn't C++ have garbage collection?":
If you want automatic garbage collection, there are good commercial and public-domain garbage collectors for C++. For applications where garbage collection is suitable, C++ is an excellent garbage collected language with a performance that compares favorably with other garbage collected languages. See The C++ Programming Language (3rd Edition) for a discussion of automatic garbage collection in C++. See also, Hans-J. Boehm's site for C and C++ garbage collection.
Also, C++ supports programming techniques that allows memory management to be safe and implicit without a garbage collector.
C++0x offers a GC ABI.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
Is it? Why? A good programming language is one that lets you solve problems, no more, no less.
A garbage collector certainly lowers the barrier of entry, but it also takes control away from the programmer, which might be a problem in some cases. True, on a modern 2.5GHz quad-core computer, and with today's advanced and efficient garbage collectors, we can live with that. But C++ had to work with much more limited hardware, ranging from desktop computers with a whopping 16MB of RAM down to embedded platforms with 16KB, and everything in between. It has to be usable in realtime code, where you can not just pause the program for 0.5 seconds to run a garbage collection.
C++ isn't just designed to be the language used on desktop computers. It's meant to be usable everywhere, on memory-limited systems, in hard realtime scenarios, on large supercomputers and everywhere else.
C++'s guiding principle is "you don't pay for what you don't use". If you don't want a garbage collector, you shouldn't have to pay the (steep) price of one.
There are very powerful techniques to manage memory in C++ and avoid memory leaks, even without a garbage collector. If a garbage collector was the only way to avoid memory leaks, then there'd be a strong argument in favor of adding one to the language. But it isn't. You just have to learn how to properly manage memory yourself in C++.
What is the core driving design of memory management?
The driving design (no pun intended) is a bit like that of stick-shift transmission cars, as opposed to automatic transmission cars. Like a stick-shift car, C++ gives you freedom and control over the machine, but it's not as easy to use as automatic ones that take care of many things for you.
The following could easily have been written about C++ versus Java:
People who drive stick shift cars know the difference and the advantages of
having total control of your car engine; people who drive cars with automatic
transmissions do not. (...) Race cars, for example, do not use automatic
transmissions. (...) People who are used to shifting gears will focus more
on their driving making it more efficient and safe.
http://www.eslbee.com/contrast_stick_shift_or_automatic.htm
I should add, though, that C++ does have some mechanisms that handle memory for you, like others have mentioned, e.g. RAII, smart pointers etc.
C++ has no design philosophy with respect to memory. All it has are two functions for allocating memory (new, malloc), two functions for freeing memory (delete, free), and a few related functions. Even those can be replaced by the programmer.
This is because C++ aims to run on generic computers. Generic computers are a CPU, memory (RAM, varieties of ROM) and a bus/busses for peripherals. There is no built in memory management on generic computers.
Now most computers come with memory (typically a ROM variant) which contains a BIOS/monitor. There you might see some rudimentary form of memory management -- probably not.
Some computers come with OSes which will have memory management, but even there it is often primitive, and it is easy for me to claim that most computers running a C++ program have no OS at all.
If you expect C++ to run on any computer, it cannot have a memory management philosophy.

Garbage Collection in C++ -- why?

I keep hearing people complaining that C++ doesn't have garbage collection. I also hear that the C++ Standards Committee is looking at adding it to the language. I'm afraid I just don't see the point to it... using RAII with smart pointers eliminates the need for it, right?
My only experience with garbage collection was on a couple of cheap eighties home computers, where it meant that the system would freeze up for a few seconds every so often. I'm sure it has improved since then, but as you can guess, that didn't leave me with a high opinion of it.
What advantages could garbage collection offer an experienced C++ developer?
I keep hearing people complaining that C++ doesn't have garbage collection.
I am so sorry for them. Seriously.
C++ has RAII, and I always complain to find no RAII (or a castrated RAII) in Garbage Collected languages.
What advantages could garbage collection offer an experienced C++ developer?
Another tool.
Matt J wrote it quite right in his post (Garbage Collection in C++ -- why?): we don't need C++ features, as most of them could be coded in C, and we don't need C features, as most of them could be coded in Assembly, etc. C++ must evolve.
As a developer: I don't care about GC. I tried both RAII and GC, and I find RAII vastly superior. As said by Greg Rogers in his post (Garbage Collection in C++ -- why?), memory leaks are not so terrible (at least in C++, where they are rare if C++ is really used) as to justify GC instead of RAII. GC has non-deterministic deallocation/finalization and is just a way to write code that simply doesn't care about specific memory choices.
This last sentence is important: it is important to write code that "just doesn't care". In the same way that with C++ RAII we don't care about resource freeing because RAII does it for us, or about object initialization because constructors do it for us, it is sometimes important to just code without caring about who owns what memory, and what kind of pointer (shared, weak, etc.) we need for this or that piece of code. There seems to be a need for GC in C++ (even if I personally fail to see it).
An example of good GC use in C++
Sometimes, in an app, you have "floating data". Imagine a tree-like structure of data, but no one is really "owner" of the data (and no one really cares about when exactly it will be destroyed). Multiple objects can use it, and then, discard it. You want it to be freed when no one is using it anymore.
The C++ approach is using a smart pointer. The boost::shared_ptr comes to mind. So each piece of data is owned by its own shared pointer. Cool. The problem arises when each piece of data can refer to another piece of data: you cannot use shared pointers alone because they use a reference counter, which won't support circular references (A points to B, and B points to A). So you must now think a lot about where to use weak pointers (boost::weak_ptr) and when to use shared pointers.
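A small sketch of the cycle problem (hypothetical Node type, shown with the standard smart pointers): with two shared_ptrs the nodes keep each other alive forever, and breaking the cycle means deciding, by hand, which link becomes weak.

#include <memory>

struct Node {
    std::shared_ptr<Node> next;    // A -> B and B -> A: use_count never drops to 0
    std::weak_ptr<Node>   parent;  // making one direction weak breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->parent = a;   // had this been shared_ptr too, neither node would ever be freed
}                    // with the weak back-pointer, both nodes are destroyed here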
With a GC, you just use the tree structured data.
The downside being that you must not care when the "floating data" will really be destroyed. Only that it will be destroyed.
Conclusion
So in the end, if done properly, and compatible with the current idioms of C++, GC would be a Yet Another Good Tool for C++.
C++ is a multiparadigm language: adding a GC will perhaps make some C++ fanboys cry treason, but in the end it could be a good idea, and I guess the C++ Standards Committee won't let this kind of major feature break the language, so we can trust them to do the necessary work to enable a correct C++ GC that won't interfere with C++. As always in C++, if you don't need a feature, don't use it and it will cost you nothing.
The short answer is that garbage collection is very similar in principle to RAII with smart pointers. If every piece of memory you ever allocate lies within an object, and that object is only referred to by smart pointers, you have something close to garbage collection (potentially better). The advantage comes from not having to be so judicious about scoping and smart-pointering every object, and letting the runtime do the work for you.
This question seems analogous to "what does C++ have to offer the experienced assembly developer? instructions and subroutines eliminate the need for it, right?"
With the advent of good memory checkers like Valgrind, I don't see much use for garbage collection as a safety net "in case" we forgot to deallocate something - especially since it doesn't help much in managing the more general case of resources other than memory (although these are much less common). Besides, explicitly allocating and deallocating memory (even with smart pointers) is fairly rare in the code I've seen, since containers are usually a much simpler and better way.
But garbage collection can potentially offer performance benefits, especially if a lot of short-lived objects are being heap-allocated. GC also potentially offers better locality of reference for newly created objects (comparable to objects on the stack).
The motivating factor for GC support in C++ appears to be lambda programming, anonymous functions etc. It turns out that lambda libraries benefit from the ability to allocate memory without caring about cleanup. The benefit for ordinary developers would be simpler, more reliable and faster compiling lambda libraries.
GC also helps simulate infinite memory; the only reason you need to delete PODs is that you need to recycle memory. If you have either GC or infinite memory, there is no need to delete PODs anymore.
I don't understand how one can argue that RAII replaces GC, or is vastly superior. There are many cases handled by a GC that RAII simply cannot deal with at all. They are different beasts.
First, RAII is not bulletproof: it works against some common failures which are pervasive in C++, but there are many cases where RAII does not help at all; it is fragile to asynchronous events (like signals under UNIX). Fundamentally, RAII relies on scoping: when a variable goes out of scope, it is automatically freed (assuming the destructor is correctly implemented, of course).
Here is a simple example where neither auto_ptr nor RAII can help you:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <memory>

using namespace std;

volatile sig_atomic_t got_sigint = 0;

class A {
public:
    A() { printf("ctor\n"); }
    ~A() { printf("dtor\n"); }
};

void catch_sigint(int sig)
{
    got_sigint = 1;
}

/* Emulate expensive computation */
void do_something()
{
    sleep(3);
}

void handle_sigint()
{
    printf("Caught SIGINT\n");
    exit(EXIT_FAILURE);
}

int main(void)
{
    A a;
    auto_ptr<A> aa(new A);
    signal(SIGINT, catch_sigint);
    while (1) {
        if (got_sigint == 0) {
            do_something();
        } else {
            handle_sigint();
            return -1;
        }
    }
}
The destructor of A will never be called. Of course, it is an artificial and somewhat contrived example, but a similar situation can actually happen; for example when your code is called by other code which handles SIGINT and which you have no control over at all (concrete example: MEX extensions in MATLAB). It is the same reason why finally in Python does not guarantee that something executes. GC can help you in this case.
Other idioms do not play well with this: in any non-trivial program, you will need stateful objects (I am using the word object in a very broad sense here; it can be any construction allowed by the language); if you need to control the state outside one function, you can't easily do that with RAII (which is why RAII is not that helpful for asynchronous programming). OTOH, a GC has a view of the whole memory of your process, that is, it knows about all the objects it allocated, and can clean up asynchronously.
It can also be much faster to use a GC, for the same reasons: if you need to allocate/deallocate many objects (in particular small objects), a GC will vastly outperform RAII, unless you write a custom allocator, since the GC can allocate/clean many objects in one pass. Some well-known C++ projects use GC, even where performance matters (see for example Tim Sweeney on the use of GC in Unreal Tournament: http://lambda-the-ultimate.org/node/1277). GC basically increases throughput at the cost of latency.
Of course, there are cases where RAII is better than GC; in particular, the GC concept is mostly concerned with memory, and that's not the only resource. Things like files, etc., can be handled well with RAII. Languages without manual memory handling, like Python or Ruby, do have something like RAII for those cases, BTW (the with statement in Python). RAII is very useful when you precisely need to control when the resource is freed, and that's quite often the case for files or locks, for example.
The committee isn't adding garbage-collection, they are adding a couple of features that allow garbage collection to be more safely implemented. Only time will tell whether they actually have any effect whatsoever on future compilers. The specific implementations could vary widely, but will most likely involve reachability-based collection, which could involve a slight hang, depending on how it's done.
One thing is, though, no standards-conformant garbage collector will be able to call destructors - only to silently reuse lost memory.
What advantages could garbage collection offer an experienced C++ developer?
Not having to chase down resource leaks in your less-experienced colleagues' code.
It's an all-too-common error to assume that because C++ does not have garbage collection baked into the language, you can't use garbage collection in C++, period. This is nonsense. I know of elite C++ programmers who use the Boehm collector as a matter of course in their work.
Garbage collection allows to postpone the decision about who owns an object.
C++ uses value semantics, so with RAII, indeed, objects are collected when going out of scope. This is sometimes referred to as "immediate GC".
When your program starts using reference semantics (through smart pointers etc.), the language no longer supports you; you're left to the wits of your smart pointer library.
The tricky thing about GC is deciding upon when an object is no longer needed.
Garbage collection makes RCU lockless synchronization much easier to implement correctly and efficiently.
Easier thread safety and scalability
There is one property of GC which may be very important in some scenarios. Assignment of a pointer is naturally atomic on most platforms, while creating thread-safe reference-counted ("smart") pointers is quite hard and introduces significant synchronization overhead. As a result, smart pointers are often said "not to scale well" on multi-core architectures.
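To make the contrast concrete (my own sketch, not from the answer): publishing a raw pointer needs only an atomic store, whereas doing the same with shared_ptr goes through the heavier atomic free functions from <memory>, which are typically implemented with a lock or extra synchronized reference-count traffic.

#include <atomic>
#include <memory>

struct Config { int value = 0; };

std::atomic<Config*> g_rawConfig{nullptr};   // publish: one atomic pointer store
std::shared_ptr<Config> g_sharedConfig;      // publish: atomic_store on the shared_ptr

void publishRaw(Config* fresh) {
    g_rawConfig.store(fresh, std::memory_order_release);
    // readers can load this cheaply, but someone must still decide when the old
    // Config can be freed (this is where GC or RCU-style deferral helps)
}

void publishShared(std::shared_ptr<Config> fresh) {
    std::atomic_store(&g_sharedConfig, std::move(fresh));  // C++11 free function; heavier
}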
Garbage collection is really the basis for automatic resource management. And having GC changes the way you tackle problems in a way that is hard to quantify. For example when you are doing manual resource management you need to:
Consider when an item can be freed (are all modules/classes finished with it?)
Consider whose responsibility it is to free a resource when it is ready to be freed (which class/module should free this item?)
In the trivial case there is no complexity. E.g. you open a file at the start of a method and close it at the end. Or the caller must free this returned block of memory.
Things start to get complicated quickly when you have multiple modules that interact with a resource and it is not as clear who needs to clean up. The end result is that the whole approach to tackling a problem includes certain programming and design patterns which are a compromise.
In languages that have garbage collection you can use a disposable pattern where you can free resources you know you've finished with but if you fail to free them the GC is there to save the day.
Smart pointers are actually a perfect example of the compromises I mentioned. They can't save you from leaking cyclic data structures unless you have a backup mechanism. To avoid this problem you often compromise and avoid using a cyclic structure even though it may otherwise be the best fit.
I, too, have doubts that the C++ committee is adding full-fledged garbage collection to the standard.
But I would say that the main reason for adding/having garbage collection in a modern language is that there are too few good reasons against garbage collection. Since the eighties there have been several huge advances in the field of memory management and garbage collection, and I believe there are even garbage collection strategies that could give you soft-real-time-like guarantees (like, "GC won't take more than .... in the worst case").
using RAII with smart pointers eliminates the need for it, right?
Smart pointers can be used to implement reference counting in C++ which is a form of garbage collection (automatic memory management) but production GCs no longer use reference counting because it has some important deficiencies:
Reference counting leaks cycles. Consider A↔B, both objects A and B refer to each other so they both have a reference count of 1 and neither is collected but they should both be reclaimed. Advanced algorithms like trial deletion solve this problem but add a lot of complexity. Using weak_ptr as a workaround is falling back to manual memory management.
Naive reference counting is slow for several reasons. Firstly, it requires out-of-cache reference counts to be bumped often (see Boost's shared_ptr up to 10× slower than OCaml's garbage collection). Secondly, destructors injected at the end of scope can incur unnecessary-and-expensive virtual function calls and inhibit optimizations such as tail call elimination.
Scope-based reference counting keeps floating garbage around as objects are not recycled until the end of scope whereas tracing GCs can reclaim them as soon as they become unreachable, e.g. can a local allocated before a loop be reclaimed during the loop?
What advantages could garbage collection offer an experienced C++ developer?
Productivity and reliability are the main benefits. For many applications, manual memory management requires significant programmer effort. By simulating an infinite-memory machine, garbage collection liberates the programmer from this burden, which allows them to focus on problem solving and avoids some important classes of bugs (dangling pointers, missing free, double free). Furthermore, garbage collection facilitates other forms of programming, e.g. by solving the upwards funarg problem (1970).
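The upwards funarg problem in C++ terms (make_counter is a hypothetical example): returning a closure that outlives the scope of the variable it captured. A GC keeps the captured environment alive automatically; in C++ you must either accept a dangling reference or put the environment on the heap yourself.

#include <functional>
#include <memory>

std::function<int()> make_counter_dangling() {
    int count = 0;
    return [&count] { return ++count; };    // count dies at return: dangling
}

std::function<int()> make_counter_safe() {
    auto count = std::make_shared<int>(0);  // heap-allocate the environment
    return [count] { return ++*count; };    // the closure keeps it alive
}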
In a framework that supports GC, a reference to an immutable object such as a string may be passed around in the same way as a primitive. Consider the class (C# or Java):
public class MaximumItemFinder
{
    String maxItemName = "";
    int maxItemValue = -2147483647 - 1;

    public void AddAnother(int itemValue, String itemName)
    {
        if (itemValue >= maxItemValue)
        {
            maxItemValue = itemValue;
            maxItemName = itemName;
        }
    }

    public String getMaxItemName() { return maxItemName; }
    public int getMaxItemValue() { return maxItemValue; }
}
Note that this code never has to do anything with the contents of any of the strings, and can simply treat them as primitives. A statement like maxItemName = itemName; will likely generate two instructions: a register load followed by a register store. The MaximumItemFinder will have no way of knowing whether callers of AddAnother are going to retain any reference to the passed-in strings, and callers will have no way of knowing how long MaximumItemFinder will retain references to them. Callers of getMaxItemName will have no way of knowing if and when MaximumItemFinder and the original supplier of the returned string have abandoned all references to it. Because code can simply pass string references around like primitive values, however, none of those things matter.
Note also that while the class above would not be thread-safe in the presence of simultaneous calls to AddAnother, any call to getMaxItemName would be guaranteed to return a valid reference to either an empty string or one of the strings that had been passed to AddAnother. Thread synchronization would be required if one wanted to ensure any relationship between the maximum-item name and its value, but memory safety is assured even in its absence.
I don't think there's any way to write a method like the above in C++ which would uphold memory safety in the presence of arbitrary multi-threaded usage without either using thread synchronization or else requiring that every string variable have its own copy of its contents, held in its own storage space, which may not be released or relocated during the lifetime of the variable in question. It would certainly not be possible to define a string-reference type which could be declared, assigned, and passed around as cheaply as an int.
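For comparison, here is a sketch of the closest C++ analogue under the constraint that string contents are shared rather than copied, using shared_ptr<const std::string> (a rough translation of the class above, not thread-safe as written): every hand-off now costs atomic reference-count traffic instead of a plain register copy, and making the member replacement thread-safe would still need std::atomic_load/std::atomic_store or a mutex.

#include <climits>
#include <memory>
#include <string>
#include <utility>

class MaximumItemFinder {
    std::shared_ptr<const std::string> maxItemName =
        std::make_shared<const std::string>("");
    int maxItemValue = INT_MIN;
public:
    void addAnother(int itemValue, std::shared_ptr<const std::string> itemName) {
        if (itemValue >= maxItemValue) {
            maxItemValue = itemValue;
            maxItemName = std::move(itemName);  // refcount update, not a plain store
        }
    }
    std::shared_ptr<const std::string> getMaxItemName() const { return maxItemName; }
    int getMaxItemValue() const { return maxItemValue; }
};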
Garbage Collection Can Make Leaks Your Worst Nightmare
Full-fledged GC that handles things like cyclic references would be somewhat of an upgrade over a ref-counted shared_ptr. I would somewhat welcome it in C++, but not at the language level.
One of the beauties about C++ is that it doesn't force garbage collection on you.
I want to correct a common misconception: the myth that garbage collection somehow eliminates leaks. In my experience, the worst nightmares of debugging code written by others and trying to spot the most expensive logical leaks involved garbage-collected languages, such as Python embedded in a resource-intensive host application.
When talking about subjects like GC, there's theory and then there's practice. In theory it's wonderful and prevents leaks. Yet at that same theoretical level every language is wonderful and leak-free, since in theory everyone writes perfectly correct code and tests every single case where a piece of code could go wrong.
Garbage collection combined with less-than-ideal team collaboration caused the worst, hardest-to-debug leaks in our case.
The problem still has to do with ownership of resources. You have to make clear design decisions here when persistent objects are involved, and garbage collection makes it all too easy to think that you don't need to.
Given some resource, R, in a team environment where the developers aren't constantly communicating and reviewing each other's code carefully at all times (something a little too common in my experience), it becomes quite easy for developer A to store a handle to that resource. Developer B does as well, perhaps in an obscure way that indirectly adds R to some data structure. So does C. In a garbage-collected system, this has created 3 owners of R.
Because developer A was the one that created the resource originally and thinks he's the owner of it, he remembers to release the reference to R when the user indicates that he no longer wants to use it. After all, if he failed to do so, nothing would happen and it would be obvious from testing that the user-end removal logic did nothing. So he remembers to release it, as any reasonably competent developer would do. This triggers an event which B handles, and B also remembers to release his reference to R.
However, C forgets. He's not one of the stronger developers on the team: a somewhat fresh recruit who has only worked in the system for a year. Or maybe he's not even on the team, just a popular third-party developer writing plugins for our product that many users add to the software. With garbage collection, this is when we get those silent logical resource leaks. They're the worst kind: they don't necessarily manifest as an obvious user-visible bug, except that the longer the program runs, the more memory usage rises for some mysterious reason. Trying to narrow down these issues with a debugger can be about as fun as debugging a time-sensitive race condition.
Without garbage collection, developer C would have created a dangling pointer. He may try to access it at some point and cause the software to crash. Now that's a testing/user-visible bug. C gets a bit embarrassed and corrects his bug. In the GC scenario, just trying to figure out where the system is leaking may be so difficult that some of the leaks are never corrected. These are not Valgrind-type physical leaks that can be detected easily and pinpointed to a specific line of code.
With garbage collection, developer C has created a very mysterious leak. His code may continue to access R, which is now just some invisible entity in the software, irrelevant to the user at this point but still in a valid state. And as C's code creates more leaks, he's creating more hidden processing on irrelevant resources, and the software is not only leaking memory but also getting slower and slower the longer it runs.
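A sketch of how such a "third owner" comes about (g_pluginCache and onResourceAdded are hypothetical names): a plugin stashes handles in a long-lived container and never removes them, so even after A and B dutifully drop their references, R stays alive and keeps being processed, with no crash to point anyone at the bug.

#include <memory>
#include <vector>

struct Resource { /* a scene node, video clip, etc. */ };

// Developer C's plugin: it caches every resource it sees and never lets go.
std::vector<std::shared_ptr<Resource>> g_pluginCache;

void onResourceAdded(const std::shared_ptr<Resource>& r) {
    g_pluginCache.push_back(r);   // a third "owner" of R, forgotten forever
}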
So garbage collection does not necessarily mitigate logical resource leaks. It can, in less-than-ideal scenarios, make it far easier for leaks to go silently unnoticed and remain in the software. The developers might get so frustrated trying to trace down their GC logical leaks that they simply tell their users to restart the software periodically as a workaround. GC does eliminate dangling pointers, and in safety-critical software where crashing is completely unacceptable under any scenario, I would prefer it. But I'm often working on less safety-critical but resource-intensive, performance-critical products where a crash that can be fixed promptly is preferable to a really obscure and mysterious silent bug, and resource leaks are not trivial bugs there.
In both of these cases, we're talking about persistent objects not residing on the stack, like a scene graph in a 3D application, the video clips available in a compositor, or the enemies in a game world. When resources tie their lifetimes to the stack, both C++ and GC languages tend to make it trivial to manage them properly. The real difficulty lies in persistent resources referencing other resources.
In C or C++, you can get dangling pointers and segfault crashes if you fail to clearly designate who owns a resource and when handles to it should be released (ex: set to null in response to an event). Yet with GC, that loud and obnoxious but often easy-to-spot crash is exchanged for a silent resource leak that may never be detected.