What is the design factor in managing memory in C++?
For example: why is there a memory leak when a program does not release a memory object before it exits? Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation? I know I am being a bit naive, but what is the design philosophy of memory management in C++ with respect to classes, structs, methods, interfaces, abstract classes?
Certainly one cannot humanly remember every detail of the C++ spec. What is the core driving design of memory management?
What is the core driving design of memory management?
In almost all cases, you should use automatic resource management. Basically:
Wherever it is practical to do so, prefer creating objects with automatic storage duration (that is, on the stack, or function-local)
Whenever you must use dynamic allocation, use Scope-Bound Resource Management (SBRM; more commonly called Resource Acquisition is Initialization or RAII).
Rarely do you have to write your own RAII container: the C++ standard library provides a whole set of containers (e.g., vector and map) and smart pointers like shared_ptr (from C++ TR1, C++0x, and Boost) work very well for most common situations.
Basically, in really good C++ code, you should never call delete yourself[1] to clean up memory that you've allocated: memory management and resource cleanup should always be encapsulated in a container of some kind.
[1] Obviously, the exception here is when you implement an RAII container yourself, since that container must be responsible for cleaning up whatever it owns.
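As a minimal sketch of what that looks like in practice (Widget is just an illustrative type; std::shared_ptr is spelled here as in C++11, with TR1 or Boost only the namespace changes):

#include <memory>
#include <string>
#include <vector>

struct Widget {
    std::string name;
};

int main() {
    // Automatic storage: lives on the stack, destroyed at end of scope.
    std::vector<Widget> widgets;
    widgets.push_back(Widget());

    // Dynamic allocation wrapped in a smart pointer: no explicit delete anywhere.
    std::shared_ptr<Widget> w(new Widget());
    w->name = "example";

    return 0;
}   // widgets and w release their memory here automatically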
It's not entirely clear whether you're asking about the philosophy of what's built into C++, or how to use it in a way that prevents memory leaks.
The primary way to prevent memory leaks (and other resource leaks) is known as either RAII (Resource Acquisition Is Initialization) or SBRM (Scope Bound Resource Management). Either way, the basic idea is pretty simple: since objects with auto storage duration are automatically destroyed on exit from their scope, you allocate memory in the ctor of such an object, and free the memory in its dtor.
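A bare-bones illustration of that shape (Buffer is a made-up name, and a real class would also have to handle copying or moving; this is only a sketch of the ctor/dtor pairing):

#include <cstddef>

// The constructor acquires the memory, the destructor releases it when
// the object leaves its scope -- even if it leaves via an exception.
class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new char[n]) {}
    ~Buffer() { delete[] data_; }

    char* data() { return data_; }

private:
    // Non-copyable: two Buffers must never delete the same memory.
    Buffer(const Buffer&);
    Buffer& operator=(const Buffer&);

    char* data_;
};

void useBuffer() {
    Buffer buf(1024);             // memory allocated here
    // ... work with buf.data() ...
}                                 // memory freed here automatically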
As far as C++ itself goes, it doesn't really have a philosophy. It provides mechanisms, but leaves it up to the programmer to decide which mechanism is appropriate for the situation at hand. That's often RAII. Sometimes it might be a garbage collector. Still other times it might be various sorts of custom memory managers. Of course, sometimes it's a combination of two or all three of those, or something else entirely.
Edit: As to why C++ does things this way, it's fairly simple: almost any other choice will render the language unsuited to at least some kinds of problems -- including a number for which C++ was quite clearly intended to be suitable. One of the most obvious of these was being able to run on a "bare" machine with a minimum of support structure (e.g., no OS).
C and C++ take the position that you, the programmer, know when you are done with memory you have allocated. This avoids the need for the language runtime to know much of anything about what has been allocated, and the associated tasks (reference counting, garbage collection, etc.) needed to "clean up" when necessary.
At the crux is the idea that: if you allocate it, you must free it. (malloc/free, new/delete)
There are several methods of helping manage this so that you don't have to explicitly remember. RAII and the smart pointer and container implementations that handle it for you are extremely useful and powerful for managing memory based on object creation and destruction. They will save you hours of time.
why is there a memory leak when a program does not release a memory object before it exits?
Well, the OS typically cleans up your mess for you. However, what happens when your program is running for an arbitrary amount of time and you have leaked so much memory that you can't allocate any more? You crash, and that's not good.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
No. Some programming languages have automated memory management, some do not. There are benefits and drawbacks to both models. Languages with manual memory management allow you to say when and where resources are allocated and released, i.e., it is very deterministic. A relative beginner will however inevitably write code that leaks while they are getting used to dealing with memory management.
Automated schemes are great for the programmer, but you don't get the same level of determinism. If I am writing a hardware driver, this may not be a good model for me. If I were writing a simple GUI, then I probably don't care about some objects persisting for a bit longer than they need to, so I will take an automated management scheme every time. That's not to say that GC'd languages are only for 'simple' tasks; some tasks just require tighter control over your resources. (Not all platforms have 4GB+ memory for you to play around in.)
There are patterns that you can use to help you with memory management. The canonical example would be RAII (Resource Acquisition Is Initialization).
Philosophy-wise, I think there are two things that lead to C++ not having a garbage collector (which seems to be what you're getting at):
Compatibility with C. C++ tries to be very compatible with C, for better or worse. C didn't have garbage collection, so C++ doesn't, at least not by default. I guess you could sum this up as "historical reasons".
The "you only pay for what you use" philosophy. C++ tries to avoid imposing any overhead above C unless you explicitly ask for it. So you only pay the price of exceptions if you actually throw one, etc. There's an argument that garbage collection would impose a cost whenever an object is allocated on the heap so it couldn't be the default behavior in C++.
Note that there is actually quite a bit of debate about whether garbage collection is actually more or less efficient than manual memory management. The better garbage collectors generally want to be able to move stuff around though, and C++ has pointer arithmetic (again, inherited from C) that makes it very hard to make such a collector work with C++.
Here's Stroustrup's (not really direct) answer to "Why doesn't C++ have garbage collection?":
If you want automatic garbage collection, there are good commercial and public-domain garbage collectors for C++. For applications where garbage collection is suitable, C++ is an excellent garbage collected language with a performance that compares favorably with other garbage collected languages. See The C++ Programming Language (3rd Edition) for a discussion of automatic garbage collection in C++. See also, Hans-J. Boehm's site for C and C++ garbage collection.
Also, C++ supports programming techniques that allows memory management to be safe and implicit without a garbage collector.
C++0x offers a GC ABI.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
Is it? Why? A good programming language is one that lets you solve problems, no more, no less.
A garbage collector certainly lowers the barrier of entry, but it also takes control away from the programmer, which might be a problem in some cases. True, on a modern 2.5GHz quad-core computer, and with today's advanced and efficient garbage collectors, we can live with that. But C++ had to work with much more limited hardware, ranging from desktop computers with a whopping 16MB of RAM down to embedded platforms with 16KB, and everything in between. It has to be usable in realtime code, where you can not just pause the program for 0.5 seconds to run a garbage collection.
C++ isn't just designed to be the language used on desktop computers. It's meant to be usable everywhere, on memory-limited systems, in hard realtime scenarios, on large supercomputers and everywhere else.
C++'s guiding principle is "you don't pay for what you don't use". If you don't want a garbage collector, you shouldn't have to pay the (steep) price of one.
There are very powerful techniques to manage memory in C++ and avoid memory leaks, even without a garbage collector. If a garbage collector was the only way to avoid memory leaks, then there'd be a strong argument in favor of adding one to the language. But it isn't. You just have to learn how to properly manage memory yourself in C++.
What is the core driving design of memory management?
The driving design (no pun intended) is a bit like that of stick-shift transmission cars, as opposed to automatic transmission cars. Like a stick-shift car, C++ gives you freedom and control over the machine, but it's not as easy to use as automatic ones that take care of many things for you.
The following could easily have been written about C++ versus Java:
People who drive stick shift cars know the difference and the advantages of
having total control of your car engine; people who drive cars with automatic
transmissions do not. (...) Race cars, for example, do not use automatic
transmissions. (...) People who are used to shifting gears will focus more
on their driving making it more efficient and safe.
http://www.eslbee.com/contrast_stick_shift_or_automatic.htm
I should add, though, that C++ does have some mechanisms that handle memory for you, as others have mentioned, e.g. RAII, smart pointers, etc.
C++ has no design philosophy wrt memory. All it has are two functions for allocating memory (new and malloc), two functions for freeing memory (delete and free), and a few related functions. Even those can be replaced by the programmer.
This is because C++ aims to run on generic computers. Generic computers are a CPU, memory (RAM, varieties of ROM) and a bus/busses for peripherals. There is no built in memory management on generic computers.
Now most computers come with memory (typically a ROM variant) which contains a BIOS/monitor. There you might see some rudimentary form of memory management -- probably not.
Some computers come with OSes which will have memory management, but even there it is often primitive, and it is easy for me to claim that most computers running a C++ program have no OS at all.
If you expect C++ to run on any computer, it cannot have a memory management philosophy.
Related
I recently watched a great talk by Herb Sutter about "Leak Free C++..." at CppCon 2016 where he talked about using smart pointers to implement RAII (Resource Acquisition Is Initialization) concepts and how they solve most memory-leak issues.
Now I was wondering. If I strictly follow RAII rules, which seems to be a good thing, why would that be any different from having a garbage collector in C++? I know that with RAII the programmer is in full control of when the resources are freed again, but is that in any way beneficial compared to just having a garbage collector? Would it really be less efficient? I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
If I strictly follow RAII rules, which seems to be a good thing, why would that be any different from having a garbage collector in C++?
While both deal with allocations, they do so in completely different manners. If you are referring to a GC like the one in Java, that adds its own overhead, removes some of the determinism from the resource release process and handles circular references.
You can implement GC though for particular cases, with much different performance characteristics. I implemented one once for closing socket connections, in a high-performance/high-throughput server (just calling the socket close API took too long and borked the throughput performance). This involved no memory, but network connections, and no cyclic dependency handling.
I know that with RAII the programmer is in full control of when the resources are freed again, but is that in any way beneficial compared to just having a garbage collector?
This determinism is a feature that GC simply doesn't allow. Sometimes you want to be able to know that after some point, a cleanup operation has been performed (deleting a temporary file, closing a network connection, etc).
In such cases GC doesn't cut it, which is why C# (for example) has the IDisposable interface.
I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
Can be ... depends on the implementation.
Garbage collection solves certain classes of resource problems that RAII cannot solve. Basically, it boils down to circular dependencies where you do not identify the cycle beforehand.
This gives it two advantages. First, there are going to be certain types of problem that RAII cannot solve. These are, in my experience, rare.
The bigger one is that it lets the programmer be lazy and not care about memory resource lifetimes and certain other resources you don't mind delayed cleanup on. When you don't have to care about certain kinds of problems, you can care more about other problems. This lets you focus on the parts of your problem you want to focus on.
The downside is that without RAII, managing resources whose lifetime you want constrained is hard. GC languages basically reduce you to either having extremely simple scope-bound lifetimes or require you to do resource management manually, like in C, with manually stating you are done with a resource. Their object lifetime system is strongly tied to GC, and doesn't work well for tight lifetime management of large complex (yet cycle-free) systems.
To be fair, resource management in C++ takes a lot of work to do properly in such large complex (yet cycle-free) systems. C# and similar languages just make it a touch harder, in exchange they make the easy case easy.
Most GC implementations also force non-locality on full-fledged class objects; creating contiguous buffers of general objects, or composing general objects into one larger object, is not something that most GC implementations make easy. On the other hand, C# permits you to create value-type structs with somewhat limited capabilities. In the current era of CPU architecture, cache friendliness is key, and the lack of locality GC forces is a heavy burden. As these languages have a bytecode runtime for the most part, in theory the JIT environment could move commonly used data together, but more often than not you just get a uniform performance loss due to frequent cache misses compared to C++.
The last problem with GC is that deallocation is indeterminate, and can sometimes cause performance problems. Modern GCs make this less of a problem than it has been in the past.
RAII and GC solve problems in completely different directions. They are completely different, despite what some would say.
Both address the issue that managing resources is hard. Garbage Collection solves it by making it so that the developer doesn't need to pay as much attention to managing those resources. RAII solves it by making it easier for developers to pay attention to their resource management. Anyone who says they do the same thing has something to sell you.
If you look at recent trends in languages, you're seeing both approaches being used in the same language because, frankly, you really need both sides of the puzzle. You're seeing lots of languages which use garbage collection of sorts so that you don't have to pay attention to most objects, and those languages also offer RAII solutions (such as Python's with statement) for the times you really want to pay attention to them.
C++ offers RAII through constructors/destructors and GC through shared_ptr (If I may make the argument that refcounting and GC are in the same class of solutions because they're both designed to help you not need to pay attention to lifespan)
Python offers RAII through with and GC through a refcounting system plus a garbage collector
C# offers RAII through IDisposable and using and GC through a generational garbage collector
The patterns are cropping up in every language.
Notice that RAII is a programming idiom, while GC is a memory management technique. So we are comparing apples with oranges.
But we can restrict RAII to its memory management aspects only and compare that to GC techniques.
The main difference between so called RAII based memory management techniques (which really means reference counting, at least when you consider memory resources and ignore the other ones such as files) and genuine garbage collection techniques is the handling of circular references (for cyclic graphs).
With reference counting, you need to code specially for them (using weak references or other stuff).
In many useful cases (think of std::vector<std::map<std::string,int>>) the reference counting is implicit (since it can only be 0 or 1) and is practically omitted, but the constructor and destructor functions (essential to RAII) behave as if there were a reference-counting bit (which is practically absent). In std::shared_ptr there is a genuine reference counter. But memory is still implicitly manually managed (with new and delete triggered inside constructors and destructors); that "implicit" delete (in destructors) gives the illusion of automatic memory management. However, calls to new and delete still happen (and they cost time).
BTW the GC implementation may (and often does) handle circularity in some special way, but you leave that burden to the GC (e.g. read about the Cheney's algorithm).
Some GC algorithms (notably generational copying garbage collectors) don't bother releasing memory for individual objects; it is released en masse after the copy. In practice the Ocaml GC (or the SBCL one) can be faster than a genuine C++ RAII programming style (for some, not all, kinds of algorithms).
Some GC provide finalization (mostly used to manage non-memory external resources like files), but you'll rarely use it (since most values consume only memory resources). The disadvantage is that finalization does not offer any timing guarantee. Practically speaking, a program using finalization is using it as a last resort (e.g. closing of files should still happen more or less explicitly outside of finalization, and also with them).
You still can have memory leaks with GC (and also with RAII, at least when used improperly), e.g. when a value is kept in some variable or some field but will never be used in the future. They just happen less often.
I recommend reading the garbage collection handbook.
In your C++ code, you might use Boehm's GC or Ravenbrook's MPS or code your own tracing garbage collector. Of course using a GC is a tradeoff (there are some inconvenience, e.g. non-determinism, lack of timing guarantees, etc...).
I don't think that RAII is the ultimate way of dealing with memory in all cases. On several occasions, coding your program against a genuinely efficient GC implementation (think of Ocaml or SBCL) can be simpler (to develop) and faster (to execute) than coding it with fancy RAII style in C++17. In other cases it is not. YMMV.
As an example, if you code a Scheme interpreter in C++17 with the fanciest RAII style, you would still need to code (or use) an explicit GC inside it (because a Scheme heap has circularities). And most proof assistants are coded in GC-ed languages, often functional ones (the only one I know of which is coded in C++ is Lean), for good reasons.
BTW, I'm interested in finding such a C++17 implementation of Scheme (but less interested in coding it myself), preferably with some multi-threading ability.
One of the problem about garbage collectors is that it's hard to predict program performance.
With RAII you know the exact time a resource will go out of scope; you will clear some memory at that point, and it will take some time. But if you are not a master of your garbage collector's settings, you cannot predict when cleanup will happen.
For example: cleaning up a bunch of small objects can be done more effectively with a GC because it can free a large chunk at once, but it will not be a fast operation, it's hard to predict when it will occur, and because of the "large chunk cleanup" it will take some processor time and can affect your program's performance.
Roughly speaking, the RAII idiom may be better for latency and jitter, while a garbage collector may be better for the system's throughput.
"Efficient" is a very broad term, in sense of development efforts RAII is typically less efficient than GC, but in terms of performance GC is typically less efficient than RAII. However it is possible to provide contr-examples for both cases. Dealing with generic GC when you have very clear resource (de)allocation patters in managed languages can be rather troublesome, just like the code using RAII can be surprisingly inefficient when shared_ptr is used for everything for no reason.
Garbage collection and RAII each support one common construct for which the other is not really suitable.
In a garbage-collected system, code may efficiently treat references to immutable objects (such as strings) as proxies for the data contained therein; passing around such references is almost as cheap as passing around "dumb" pointers, and is faster than making a separate copy of the data for each owner, or trying to track ownership of a shared copy of the data. In addition, garbage-collected systems make it easy to create immutable object types by writing a class which creates a mutable object, populating it as desired, and providing accessor methods, all while refraining from leaking references to anything that might mutate it once the constructor finishes. In cases where references to immutable objects need to be widely copied but the objects themselves don't, GC beats RAII hands down.
On the other hand, RAII is excellent at handling situations where an object needs to acquire exclusive services from outside entities. While many GC systems allow objects to define "Finalize" methods and request notification when they are found to be abandoned, and such methods may sometimes manage to release outside services that are no longer needed, they are seldom reliable enough to provide a satisfactory way of ensuring timely release of outside services. For management of non-fungible outside resources, RAII beats GC hands down.
The key difference between the cases where GC wins versus those where RAII wins is that GC is good at managing fungible memory that can be freed on an as-needed basis, but poor at handling non-fungible resources. RAII is good at handling objects with clear ownership, but bad at handling ownerless immutable data holders which have no real identity apart from the data they contain.
Because neither GC nor RAII handles all scenarios well, it would be helpful for languages to provide good support for both of them. Unfortunately, languages which focus on one tend to treat the other as an afterthought.
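For what it's worth, the closest C++ approximation to the first case is sharing immutable data through a reference-counted pointer; a small sketch (SharedText and fanOut are made-up names, and the reference-count update on every copy is exactly the overhead the answer says GC avoids):

#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Several owners share one immutable string. Copying the handle is cheap
// compared with copying the text, and the last owner to go away frees it.
typedef std::shared_ptr<const std::string> SharedText;

std::vector<SharedText> fanOut(const SharedText& text, std::size_t copies) {
    return std::vector<SharedText>(copies, text);   // many handles, one payload
}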
The main part of the question about whether one or the other is "beneficial" or more "efficient" cannot be answered without giving lots of context and arguing about the definitions of these terms.
Beyond that, you can basically feel the tension of the ancient "Is Java or C++ the better language?" flamewar crackling in the comments. I wonder what an "acceptable" answer to this question could look like, and am curious to see it eventually.
But one point about a possibly important conceptual difference has not yet been pointed out: With RAII, you are tied to the thread that calls the destructor. If your application is single threaded (and even though it was Herb Sutter who stated that The Free Lunch Is Over: Most software today effectively still is single-threaded), then a single core may be busy with handling the cleanups of objects that are no longer relevant for the actual program...
In contrast to that, the garbage collector usually runs in its own thread, or even multiple threads, and is thus (to some extent) decoupled from the execution of the other parts.
(Note: Some answers already tried to point out application patterns with different characteristics, mentioned efficiency, performance, latency and throughput - but this specific point was not mentioned yet)
RAII uniformly deals with anything that is describable as a resource. Dynamic allocations are one such resource, but they are by no means the only one, and arguably not the most important one. Files, sockets, database connections, gui feedback and more are all things that can be managed deterministically with RAII.
GCs only deal with dynamic allocations, relieving the programmer of worrying about the total volume of allocated objects over the lifetime of the program (they only have to care that the peak concurrent allocation volume fits).
RAII and garbage collection are intended to solve different problems.
When you use RAII you leave an object on the stack whose sole purpose is to clean up whatever it is you want managed (sockets, memory, files, etc.) on leaving the scope of the method. This is for exception-safety, not just garbage collection, which is why you get responses about closing sockets and freeing mutexes and the like. (Okay, so no one mentioned mutexes besides me.) If an exception is thrown, stack-unwinding naturally cleans up the resources used by a method.
Garbage collection is the programmatic management of memory, though you could "garbage-collect" other scarce resources if you'd like. Explicitly freeing them makes more sense 99% of the time. The only reason to use RAII for something like a file or socket is you expect the use of the resource to be complete when the method returns.
Garbage collection also deals with objects that are heap-allocated, when for instance a factory constructs an instance of an object and returns it. Having persistent objects in situations where control must leave a scope is what makes garbage collection attractive. But you could use RAII in the factory so if an exception is thrown before you return, you don't leak resources.
I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.
That's perfectly doable - and, in fact, is actually done - with RAII (or with plain malloc/free). You see, you don't necessarily always use the default allocator, which deallocates piecemeal only. In certain contexts you use custom allocators with different kinds of functionality. Some allocators have the in-built ability of freeing everything in some allocator region, all at once, without having to iterate individual allocated elements.
Of course, you then get into the question of when to deallocate everything - whether the use of those allocators (or the slab of memory with which they're associated) has to be RAII'ed or not, and how.
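A very rough sketch of such a region ("arena") allocator (the names are made up, and a real one would also handle alignment, growth and out-of-memory properly):

#include <cstddef>
#include <vector>

// Hand out pieces of one big buffer with a bump pointer, and free
// everything at once instead of piecemeal.
class Arena {
public:
    explicit Arena(std::size_t capacity) : buffer_(capacity), used_(0) {}

    void* allocate(std::size_t n) {
        if (n == 0 || used_ + n > buffer_.size())
            return 0;                      // out of space (simplified)
        char* p = &buffer_[0] + used_;
        used_ += n;
        return p;
    }

    void releaseAll() { used_ = 0; }       // one call frees every allocation in the region

private:
    std::vector<char> buffer_;
    std::size_t used_;
};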
I'm moving from C to C++11 and trying to figure out the memory management paradigm for C++11 programs (or any modern languages with built-in exceptions). Specifically, I'm having a crack at game development where running out of memory is a real concern.
In C, I'm used to checking the return value of malloc, and I generally use custom allocators.
With C++, I'm quite confused; though I like how the STL containers were built allowing custom allocators. Since the STL containers all manage their own memory, simply adding an element to a vector could throw a std::bad_alloc. How do I guard against such things? I have heard wrapping all throwing calls in try/catch blocks can be prohibitively expensive.
However, allowing the exception to travel up the call stack would leave a bunch of functions that did not fully execute, which would lead to some really tricky code. I.e., if A->B->C->D is a call stack, D throws, and A catches, then B, C, and D could have potentially created some weird problems by not being able to finish execution normally.
Additionally, the nothrow argument seems to allow very C-like code; though I now don't see the benefit over a plain malloc.
What are some best practices out there for writing exception-safe C++ code that guards against out-of-memory issues?
edit: A relevant answer on programmers.stackexchange arguing for exception-less C++ design on consoles. Not sure if these arguments still apply to the 8th-generation consoles.
My answer will be geared more towards game development since that is my background and that is part of what you are interested in. Different types of applications will have different requirements.
Games generally allocate all dynamic memory up front and stick within that budget. Consoles in particular have hard memory limits and most games will want to use all of it.
There are a few reasons for allocating everything up front.
One, performance. Memory allocation is slow. You want to avoid it at all costs. If you allocate everything up front, you can then write custom, high performance memory allocators like Pool Allocators, Stack Allocators, etc, that just grab memory from your pre-allocated buffer. It's important to choose the best allocator for the task at hand.
Two, you'll know quickly if there isn't enough memory for your game. In development, you'll crash if you run out of memory and will need to adjust usage, but the final release shouldn't crash because you've allocated up front and stuck within your memory budgets.
For exceptions, many (but not all) games disable exceptions, again for performance reasons. In fact some console compilers don't even support exceptions. Then you will either need to use an STL library with no exceptions, or implement your own containers. Many game teams choose to implement their own for performance reasons as well as to better integrate them with custom memory allocators.
That said, dynamic memory allocation, STL, and exceptions are probably perfectly fine for smaller personal projects/games, but keep in mind what will be necessary for large, high performance, real-time games.
For exception safety, I would definitely use RAII. That is its purpose. Also I would recommend using smart pointers like std::unique_ptr and std::shared_ptr for memory management. Coupled with RAII, if your constructors throw, memory will be freed.
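For instance, with members held by smart pointers, a constructor that throws partway through does not leak whatever was already allocated. A toy sketch assuming C++11 (Level, Texture and Sound are hypothetical names):

#include <memory>
#include <stdexcept>

struct Texture {};
struct Sound {};

class Level {
public:
    Level()
        : texture_(new Texture)      // allocated first
        , sound_(loadSound())        // if this throws, texture_ is destroyed automatically
    {}

private:
    static Sound* loadSound() { throw std::runtime_error("asset missing"); }

    std::unique_ptr<Texture> texture_;
    std::unique_ptr<Sound>   sound_;
};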
Use destructors to clean up automatically when scopes are exited. That's called RAII, Resource Acquisition Is Initialization, although the acronym is not exactly the best one could devise. All the standard containers etc. clean up automatically.
In languages like C# and Java which are based on garbage collection, you instead pepper the code with try blocks and “using” statements. Java just got that (part of the try syntax IIRC); it's been in C# from the beginning (keyword using); in Python it's called with; and C++ doesn't have it and doesn't need it. I once created a WITH macro for C++, a clever little hack, thinking I would be using it all the time, but I haven't used it once except just to try it out right after creating it: in C++ RAII does it all.
Summing up: use RAII, i.e., use destructors, and just let those exceptions propagate.
Regarding memory exhaustion, ordinarily that's just regarded as “we're done for”, nothing to do except terminate as orderly as possible.
But it doesn't hurt to maybe set aside a little buffer which can be deallocated when memory is exhausted, so as to have some working memory for cleanup work.
C++ doesn't differentiate between hard exceptions (fatal like e.g. memory exhaustion) and soft exceptions (general failures that are not fatal).
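One conventional way to set up such a reserve is std::set_new_handler: keep a spare block around and give it back when operator new first fails. A sketch only -- the reserve size and the fallback policy are application choices, and the names here are made up:

#include <cstdlib>
#include <new>

namespace {
    char* emergencyReserve = 0;        // spare block to release when memory runs out

    void outOfMemoryHandler() {
        if (emergencyReserve) {
            delete[] emergencyReserve; // free the reserve so cleanup code has room to work
            emergencyReserve = 0;
            // operator new will now retry the allocation that failed
        } else {
            std::abort();              // reserve already spent: terminate as orderly as we can
        }
    }
}

int main() {
    emergencyReserve = new char[64 * 1024];   // size is arbitrary
    std::set_new_handler(outOfMemoryHandler);
    // ... rest of the program ...
}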
I've recently seen two really nice and educating languages talks:
This first one, by Herb Sutter, presents all the nice and cool features of C++0x, why C++'s future seems brighter than ever, and how M$ is said to be a good guy in this game. The talk revolves around efficiency and how minimizing heap activity very often improves performance.
This other one, by Andrei Alexandrescu, motivates a transition from C/C++ to his new game-changer D. Most of D's stuff seems really well motivated and designed. One thing, however, surprised me, namely that D pushes for garbage collection and that all classes are created solely by reference. Even more confusing, the book The D Programming Language Ref Manual, specifically in the section about Resource Management, states the following, quote:
Garbage collection eliminates the tedious, error prone memory allocation tracking code
necessary in C and C++. This not only means much faster development time and lower
maintenance costs, but the resulting program frequently runs faster!
This conflicts with Sutter's constant talk about minimizing heap activity. I strongly respect both Sutter's and Alexandrescu's insights, so I feel a bit confused about these two key questions:
Doesn't creating class instances solely by reference result in a lot of unnecessary heap activity?
In which cases can we use Garbage Collection without sacrificing run-time performance?
To directly answer your two questions:
Yes, creating class instances by reference does result in a lot of heap activity, but:
a. In D, you have struct as well as class. A struct has value semantics and can do everything a class can, except polymorphism.
b. Polymorphism and value semantics have never worked well together due to the slicing problem.
c. In D, if you really need to allocate a class instance on the stack in some performance-critical code and don't care about the loss of safety, you can do so without unreasonable hassle via the scoped function.
GC can be comparable to or faster than manual memory management if:
a. You still allocate on the stack where possible (as you typically do in D) instead of relying on the heap for everything (as you often do in other GC'd languages).
b. You have a top-of-the-line garbage collector (D's current GC implementation is admittedly somewhat naive, though it has seen some major optimizations in the past few releases, so it's not as bad as it was).
c. You're allocating mostly small objects. If you allocate mostly large arrays and performance ends up being a problem, you may want to switch a few of these to the C heap (you have access to C's malloc and free in D) or, if it has a scoped lifetime, some other allocator like RegionAllocator. (RegionAllocator is currently being discussed and refined for eventual inclusion in D's standard library).
d. You don't care that much about space efficiency. If you make the GC run too frequently to keep the memory footprint ultra-low, performance will suffer.
The reason creating an object on the heap is slower than creating it on the stack is that the memory allocation methods need to deal with things like heap fragmentation. Allocating memory on the stack is as simple as incrementing the stack pointer (a constant-time operation).
Yet, with a compacting garbage collector, you don't have to worry about heap fragmentation, heap allocations can be as fast as stack allocations. The Garbage Collection page for the D Programming Language explains this in more detail.
The assertion that GC'd languages run faster is probably assuming that many programs allocate memory on the heap much more often than on the stack. Assuming that heap allocation could be faster in a GC'd language, then it follows that you have just optimized a huge part of most programs (heap allocation).
An answer to 1):
As long as your heap is contiguous, allocating on it is just as cheap as allocating on the stack.
On top of that, while you allocate objects that lie next to each other, your memory caching performance will be great.
As long as you don't have to run the garbage collector, no performance is lost, and the heap stays contiguous.
That's the good news :)
Answer to 2):
GC technology has advanced greatly; they even come in real-time flavors nowadays. That means that guaranteeing contiguous memory is a policy-driven, implementation-dependent issue.
So if:
- you can afford a real-time GC,
- there are enough allocation pauses in your application, and
- it can keep your free list as one free block,
you may end up with better performance.
Answer to unasked question:
If developers are freed from memory-management issues, they may have more time to spend on real performance and scalability aspects in their code. That's a non-technical factor coming into play, too.
It's not either "garbage collection" or "tedious error prone" handwritten code. Smart pointers that are truly smart can give you stack semantics and mean you never type "delete" but you aren't paying for garbage collection. Here's another video by Herb that makes the point - safe and fast - that's what we want.
Another point to consider is the 80:20 rule. It is likely that the vast majority of the places you allocate are irrelevant and you won't gain much over a GC even if you could push the cost there to zero. If you accept that, then the simplicity you can gain by using a GC can displace the cost of using it. This is particularly true if you can avoid doing copies. What D provides is a GC for the 80% of cases and access to stack allocation and malloc for the 20%.
Even if you had an ideal garbage collector, it would still be slower than creating things on the stack. So you have to have a language that allows both at the same time. Furthermore, the only way to achieve the same performance with a garbage collector as with manually managed memory allocations (done the right way) is to make it do the same things with memory as an experienced developer would have done, and in many cases that would require the garbage collector's decisions to be made at compile time and executed at run time. Usually, garbage collection makes things slower, languages working only with dynamic memory are slower, and the predictability of execution of programs written in those languages is low while the latency of execution is higher. Frankly, I personally don't see why one would need a garbage collector. Managing memory manually is not hard. At least not in C++. Of course, I wouldn't mind the compiler generating code that cleans up everything for me as I would have done, but this doesn't seem possible at the moment.
In many cases a compiler can optimize heap-allocation back to stack allocation. This is the case if your object doesn't escape the local scope.
A decent compiler will almost certainly make x stack-allocated in the following example:
struct Foo { void doStuff() {} };   // assuming some such type

void f() {
    Foo* x = new Foo();
    x->doStuff();    // assuming doStuff doesn't assign 'this' anywhere
    delete x;        // or assume the GC gets it
}
What the compiler does is called escape analysis.
Also, D could in theory have a moving GC, which means potential performance improvements by improved cache usage when the GC compacts your heap objects together. It also combats heap fragmentation as explained in Jack Edmonds' answer. Similar things can be done with manual memory management, but it's extra work.
An incremental low-priority GC will collect garbage when high-priority tasks are not running. The high-priority threads will run faster, since no memory deallocation will be done on them.
This is the idea behind Henriksson's real-time Java GC; see http://www.oracle.com/technetwork/articles/javase/index-138577.html
Garbage collection does in fact slow code down. It's adding extra functionality to the program that has to run in addition to your code. There are other problems with it as well, for example the GC not running until memory is actually needed. This can result in small memory leaks. Another issue is that if a reference is not removed properly, the GC will not pick it up, and once again this results in a leak. My other issue with GC is that it kind of promotes laziness in programmers. I'm an advocate of learning the low-level concepts of memory management before jumping to a higher level. It's like mathematics. You learn how to solve for the roots of a quadratic, or how to take a derivative by hand first, then you learn how to do it on the calculator. Use these things as tools, not crutches.
If you don't want to hit your performance, be smart about the GC and your heap vs stack usage.
My point is that GC is inferior to malloc when you do normal procedural programming. You just go from procedure to procedure, allocate and free, use global variables, and declare some functions _inline or _register. This is C style.
But once you go higher abstraction layer, you need at least reference counting. So you can pass by reference, count them and free once the counter is zero. This is good, and superior to malloc after the amount and hierarchy of objects become too difficult to manage manually. This is C++ style. You will define constructors and destructors to increment counters, you will copy-on-modify, so the shared object will split in two, once some part of it is modified by one party, but another party still needs the original value. So you can pass huge amount of data from function to function without thinking whether you need to copy data here or just send a pointer there. The ref-counting does those decisions for you.
Then comes the whole new world: closures, functional programming, duck typing, circular references, asynchronous execution. Code and data start mixing, you find yourself passing a function as a parameter more often than normal data. You realize that metaprogramming can be done without macros or templates. Your code starts to soar into the sky and lose its solid ground, because you are executing something inside callbacks of callbacks of callbacks, data becomes unrooted, things become asynchronous, you get addicted to closure variables. So this is where a timer-based, memory-walking GC is the only possible solution, otherwise closures and circular references are not possible at all. This is the JavaScript way.
You mentioned D, but D is still an improved C++, so malloc or ref counting in constructors, stack allocations, global variables (even if they are complicated trees of entities of all kinds) is probably what you choose.
I had been wondering for quite some time on how to manager memory in my next project. Which is writing a DSL in C/C++.
It can be done in any of these three ways:
Reference counted C or C++.
Garbage collected C.
In C++, copying class and structures from stack to stack and managing strings separately with some kind of GC.
The community probably already has a lot of experience on each of these methods. Which one will be faster? What are the pros and cons for each?
A related side question. Will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on OS to do this job better and faster than what we can do ourselves.
It all depends! That's a pretty open question. It needs an essay to answer it!
Hey.. here's one somebody prepared earlier:
http://lambda-the-ultimate.org/node/2552
http://www.hpl.hp.com/personal/Hans_Boehm/gc/issues.html
It depends how big your objects are, how many of them there are, how fast they're being allocated and discarded, and how much time you want to invest in optimizing and tweaking. If you know the limits of how much memory you need, for fast performance, I would think you can't really beat grabbing all the memory you need from the OS up front, and then managing it yourself.
The reason it can be slow to allocate memory from the OS is that it deals with lots of processes and memory on disk and in RAM, so to get memory it's got to decide if there is enough. Possibly, it might have to page another process's memory out from RAM to disk so it can give you enough. There's a lot going on. So managing it yourself (or with a GC-collected heap) can be far quicker than going to the OS for each request. Also, the OS usually deals with bigger chunks of memory, so it might round up the size of the requests you make, meaning you could waste memory.
Have you got a real hard requirement for going super quick? A lot of DSL applications don't need raw performance. I'd suggest going with whatever's simplest to code. You could spend a lifetime writing memory management systems and worrying which is best.
Why would garbage collected C be faster than C++? The only garbage collectors available for C are pretty inefficient things, more designed to plug memory leaks than to actually improve the quality of your code.
In any case, C++ has the potential for reaching better performance with less code (note that it's only a potential. It's also very possible to write C++ code that is far slower than the equivalent C).
Considering the current state of both languages, GC's are not currently going to improve performance in your code. GC's can be made very efficient in languages designed for it. C/C++ are not among those. ;)
Apart from that, it's impossible to say. Languages don't have a speed. It doesn't make sense to ask which language is faster. It depends on 1) the specific code, 2) the compiler that compiles it, and 3) the system it's running on (hardware as well as OS).
malloc is a fairly slow operation, far slower than the .NET equivalents, so yes, if you are performing a lot of small allocations, you may be better off allocating a large pool of memory once, and then using chunks of that.
The reason is that the OS has to find a free chunk of memory, basically by following a linked list of all free memory areas. In .NET, a new() call is basically nothing more than moving the heap pointer as many bytes as required by the allocation.
uh ... It depends how you write the garbage collection system for your DSL. Neither C nor C++ comes with a garbage collection facility built in, but either could be used to write a very efficient or a very inefficient garbage collector. Writing such a thing, by the way, is a non-trivial task.
DSLs are often written in higher level languages such as Ruby or Python specifically because the language writer can leverage the garbage collection and other facilities of the language. C and C++ are great for writing full, industrial strength languages but you certainly need to know what you are doing to use them - knowledge of yacc and lex is especially useful here but a good understanding of dynamic memory management is important also, as you say. You could also check out keykit, an open source music DSL written in C, if you still like the idea of a DSL in C/C++.
With most garbage collection implementations, allocation can see a speed improvement, but then you have the additional cost of the collection phase which can be triggered at any point in your program's execution, leading to a sudden (seemingly random) delay.
As for your second question, it depends on your memory management algorithms. You'd be safe sticking with your library's default malloc implementation, but there are alternatives which boast better performance.
A related side question. Will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on the OS to do this job better and faster than what we can do ourselves.
The problem with letting the OS handle memory allocation is that it introduces indeterministic behaviour. There's no way for the programmer to know how long the OS will take to return a new chunk of memory - an allocation may be quite costly if memory has to be paged out to disk.
Preallocating therefore might be a good idea, especially when using a copying garbage collector. It'll increase memory consumption, but allocation will be fast because in most cases it'll just be a pointer increment.
As people have pointed out - GC is faster to allocate (because it just gives you the next block on its list), but slower overall (because it has to compact the heap regularly, in order for allocs to be fast).
so - go for the compromise solution (which is actually pretty damn good):
You create your own heaps, one for each size of object you generally allocate (or 4-byte, 8-byte, 16-byte, 32-byte, etc.); then, when you want a new piece of memory, you grab the last 'block' on the appropriate heap. Because you pre-allocate from these heaps, all you need to do when allocating is grab the next free block. This works better than the standard allocator because you are happily wasting memory - if you want to allocate 12 bytes, you'll give up a whole 16-byte block from the 16-byte heap. You keep a bitmap of used vs free blocks so you can allocate quickly without wasting loads of memory or needing to compact.
Also, because you're running several heaps, highly parallel systems work much better, as you don't need to lock so often (i.e. you have multiple locks, one for each heap, so you don't get contention nearly as much).
Try it - we used it to replace the standard heap on a very intensive application, performance went up by quite a lot.
BTW, the reason the standard allocators are slow is that they try not to waste memory - so if you allocate 5 bytes, 7 bytes and 32 bytes from the standard heap, it'll keep those 'boundaries'. Next time you need to allocate, it'll walk through those looking for enough space to give you what you asked for. That worked well for low-memory systems, but you only have to look at how much memory most apps use today to see that GC systems go the other way, and try to make allocations as fast as possible while caring nothing for how much memory is wasted.
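A stripped-down sketch of one of those per-size heaps (a free list is used here instead of the bitmap described above, to keep it short; alignment and the per-heap locking are ignored, and the names are made up):

#include <cstddef>
#include <vector>

// One heap of fixed-size blocks: allocation pops a free block, deallocation
// pushes it back. No searching for a fitting hole, no compaction.
class FixedBlockHeap {
public:
    FixedBlockHeap(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount) {
        for (std::size_t i = 0; i < blockCount; ++i)
            freeBlocks_.push_back(&storage_[i * blockSize]);
    }

    void* allocate() {
        if (freeBlocks_.empty()) return 0;   // this heap is exhausted (simplified)
        void* p = freeBlocks_.back();
        freeBlocks_.pop_back();
        return p;
    }

    void deallocate(void* p) { freeBlocks_.push_back(static_cast<char*>(p)); }

private:
    std::vector<char>  storage_;
    std::vector<char*> freeBlocks_;
};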
The problem has a lot of variables, but if your application is written with garbage collection in mind, and if you exploit the special features of the Boehm collector, such as different allocation calls for blocks that don't contain pointers, then as a general rule, compared with a similar application using explicit memory management, your application
- Will have simpler interfaces
- Will run somewhat faster
- Will require from 1.2x to 2x the space
For documentation and evidence supporting these claims, you can see the information on Boehm's web site, and also Ben Zorn's several papers on the measured cost of conservative garbage collection.
Most importantly you'll save a ton of effort and won't have to worry about a significant class of memory-management bugs.
The issue of C vs C++ is orthogonal, but GC will definitely be faster than reference counting, especially when there's no compiler support for reference counting.
Neither C nor C++ will give you garbage collection for free. What they will give you is memory allocation libraries (which provide malloc/free, etc.). There are many online resources on algorithms for writing garbage collection libraries.
Most non-GC languages will allocate and de-allocate memory as it is needed and no longer needed. GC'd languages usually allocate large chunks of memory beforehand and only free the memory when idle, not in the middle of an intensive task, so I am going to say yes, if the GC kicks in at the correct time.
The D programming language is a garbage collected language and ABI compatible with C and partly ABI compatible with C++. This Page shows some benchmarks between string performance in C++ and D.
I suggest that if you have written a program where memory allocation and deallocation (explicitly or GC'ed) is the bottleneck, then you should re-think your architecture, design and implementation.
If you don't want to explicitly manage memory, don't use C/C++. There are plenty of languages with either reference counting or compiler-supported garbage collectors that will probably work much better for you.
C/C++ are designed in an environment where the programmer manages their own memory. Trying to retrofit GC or ref counting onto them may help some, but you'll find that you either have to compromise the performance of the GC (because it doesn't have any compiler hinting as to where pointers might be), or you'll find new and fascinating ways that you can screw up the reference counts or the GC or whatever.
I know it sounds like a good idea, but really, you should just grab a language more suited to the task.
I keep hearing people complaining that C++ doesn't have garbage collection. I also hear that the C++ Standards Committee is looking at adding it to the language. I'm afraid I just don't see the point to it... using RAII with smart pointers eliminates the need for it, right?
My only experience with garbage collection was on a couple of cheap eighties home computers, where it meant that the system would freeze up for a few seconds every so often. I'm sure it has improved since then, but as you can guess, that didn't leave me with a high opinion of it.
What advantages could garbage collection offer an experienced C++ developer?
I keep hearing people complaining that C++ doesn't have garbage collection.
I am so sorry for them. Seriously.
C++ has RAII, and I always complain when I find no RAII (or a castrated RAII) in garbage-collected languages.
What advantages could garbage collection offer an experienced C++ developer?
Another tool.
Matt J wrote it quite right in his post (Garbage Collection in C++ -- why?): we don't need C++ features, as most of them could be coded in C, and we don't need C features, as most of them could be coded in Assembly, etc. C++ must evolve.
As a developer: I don't care about GC. I tried both RAII and GC, and I find RAII vastly superior. As said by Greg Rogers in his post (Garbage Collection in C++ -- why?), memory leaks are not so terrible (at least in C++, where they are rare if C++ is really used) as to justify GC instead of RAII. GC has non-deterministic deallocation/finalization and is just a way to write code that just doesn't care about specific memory choices.
This last sentence is important: it is important to write code that "just doesn't care". In the same way that in C++ RAII we don't care about resource freeing because RAII does it for us, or about object initialization because the constructor does it for us, it is sometimes important to just code without caring about who owns what memory, and what kind of pointer (shared, weak, etc.) we need for this or that piece of code. There seems to be a need for GC in C++ (even if I personally fail to see it).
An example of good GC use in C++
Sometimes, in an app, you have "floating data". Imagine a tree-like structure of data, but no one is really "owner" of the data (and no one really cares about when exactly it will be destroyed). Multiple objects can use it, and then, discard it. You want it to be freed when no one is using it anymore.
The C++ approach is using a smart pointer. The boost::shared_ptr comes to mind. So each piece of data is owned by its own shared pointer. Cool. The problem comes when each piece of data can refer to another piece of data. You cannot use shared pointers alone because they use a reference counter, which won't support circular references (A points to B, and B points to A). So you must now think a lot about where to use weak pointers (boost::weak_ptr), and when to use shared pointers.
With a GC, you just use the tree structured data.
The downside being that you must not care when the "floating data" will really be destroyed. Only that it will be destroyed.
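The circular-reference trap described above looks roughly like this (spelled with std::shared_ptr for brevity; boost::shared_ptr behaves the same way, and Node is just an illustrative type):

#include <memory>

struct Node {
    std::shared_ptr<Node> other;    // strong references in both directions create a cycle
    // std::weak_ptr<Node> other;   // making one direction weak would break the cycle
};

void makeCycle() {
    std::shared_ptr<Node> a(new Node);
    std::shared_ptr<Node> b(new Node);
    a->other = b;
    b->other = a;
}   // a and b go out of scope, but each Node still holds the other alive: neither is freed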
Conclusion
So in the end, if done properly, and compatible with the current idioms of C++, GC would be yet another good tool for C++.
C++ is a multiparadigm language: adding a GC will perhaps make some C++ fanboys cry treason, but in the end it could be a good idea, and I guess the C++ Standards Committee won't let this kind of major feature break the language, so we can trust them to do the necessary work to enable a correct C++ GC that won't interfere with C++. As always in C++, if you don't need a feature, don't use it and it will cost you nothing.
The short answer is that garbage collection is very similar in principle to RAII with smart pointers. If every piece of memory you ever allocate lies within an object, and that object is only referred to by smart pointers, you have something close to garbage collection (potentially better). The advantage comes from not having to be so judicious about scoping and smart-pointering every object, and letting the runtime do the work for you.
This question seems analogous to "what does C++ have to offer the experienced assembly developer? instructions and subroutines eliminate the need for it, right?"
With the advent of good memory checkers like valgrind, I don't see much use to garbage collection as a safety net "in case" we forgot to deallocate something - especially since it doesn't help much in managing the more generic case of resources other than memory (although these are much less common). Besides, explicitly allocating and deallocating memory (even with smart pointers) is fairly rare in the code I've seen, since containers are a much simpler and better way usually.
But garbage collection can potentially offer performance benefits, especially if a lot of short-lived objects are being heap-allocated. GC also potentially offers better locality of reference for newly created objects (comparable to objects on the stack).
The motivating factor for GC support in C++ appears to be lambda programming, anonymous functions etc. It turns out that lambda libraries benefit from the ability to allocate memory without caring about cleanup. The benefit for ordinary developers would be simpler, more reliable and faster compiling lambda libraries.
GC also helps simulate infinite memory; the only reason you need to delete PODs is that you need to recycle memory. If you have either GC or infinite memory, there is no need to delete PODs anymore.
I don't understand how one can argue that RAII replaces GC, or is vastly superior. There are many cases handled by a gc that RAII simply cannot deal with at all. They are different beasts.
First, RAII is not bulletproof: it works against some common failures which are pervasive in C++, but there are many cases where RAII does not help at all; it is fragile in the face of asynchronous events (like signals under UNIX). Fundamentally, RAII relies on scoping: when a variable goes out of scope, it is automatically freed (assuming the destructor is correctly implemented, of course).
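To make the scoping point concrete, here is a minimal sketch of the well-behaved case (Resource is a made-up type for illustration); the signal example that follows shows where this guarantee breaks down.

#include <cstdio>

struct Resource {
    Resource()  { std::puts("acquired"); }
    ~Resource() { std::puts("released"); }
};

void work()
{
    Resource r;          // acquired here
    // ... use r ...
}                        // destructor runs automatically on scope exit,
                         // even if an exception propagates out of work()

int main()
{
    work();              // prints "acquired" then "released"
}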
Here is a simple example where neither auto_ptr nor RAII can help you:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <memory>

using namespace std;

volatile sig_atomic_t got_sigint = 0;

class A {
public:
    A()  { printf("ctor\n"); }
    ~A() { printf("dtor\n"); }
};

void catch_sigint(int sig)
{
    got_sigint = 1;
}

/* Emulate expensive computation */
void do_something()
{
    sleep(3);
}

void handle_sigint()
{
    printf("Caught SIGINT\n");
    exit(EXIT_FAILURE);   /* exit() does not unwind the stack: no destructors run */
}

int main(void)
{
    A a;                       /* automatic storage: cleanup relies on scope exit */
    auto_ptr<A> aa(new A);     /* RAII wrapper: cleanup also relies on scope exit */
    signal(SIGINT, catch_sigint);
    while (1) {
        if (got_sigint == 0) {
            do_something();
        } else {
            handle_sigint();
            return -1;
        }
    }
}
The destructors of A will never be called. Of course, this is an artificial and somewhat contrived example, but a similar situation can actually happen; for example, when your code is called by other code which handles SIGINT and over which you have no control at all (concrete example: mex extensions in matlab). It is for the same reason that finally in Python does not guarantee execution of something. GC can help you in this case.
Other idioms do not play well with this: in any non-trivial program, you will need stateful objects (I am using the word object in a very broad sense here; it can be any construction allowed by the language); if you need to control the state outside one function, you can't easily do that with RAII (which is why RAII is not that helpful for asynchronous programming). On the other hand, a GC has a view of the whole memory of your process; that is, it knows about all the objects it allocated and can clean up asynchronously.
It can also be much faster to use a GC, for the same reasons: if you need to allocate/deallocate many objects (in particular, small objects), a GC will vastly outperform RAII unless you write a custom allocator, since the GC can allocate and clean up many objects in one pass. Some well-known C++ projects use GC even where performance matters (see for example Tim Sweeney on the use of GC in Unreal Tournament: http://lambda-the-ultimate.org/node/1277). GC basically increases throughput at the cost of latency.
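As a rough illustration of what "a custom allocator" can mean here, the simplest case is a bump/arena allocator: allocation is just a pointer increment, and everything is released in one pass, which is comparable to the batching a GC does for many small objects. This is a toy sketch under simplifying assumptions (power-of-two alignment, trivially destructible objects), not a production allocator.

#include <cstddef>
#include <new>
#include <vector>

// Toy arena: hands out memory by bumping an offset into one big block,
// and "frees" everything at once when reset() is called.
class Arena {
    std::vector<unsigned char> buffer_;
    std::size_t offset_ = 0;
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes) {}

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t))
    {
        std::size_t p = (offset_ + align - 1) & ~(align - 1);  // round up to alignment
        if (p + size > buffer_.size()) return nullptr;         // out of space
        offset_ = p + size;
        return buffer_.data() + p;
    }

    void reset() { offset_ = 0; }   // release every allocation in one pass
};

struct Particle { float x, y, z; };   // trivially destructible, so no dtor calls needed

int main()
{
    Arena arena(1 << 20);   // 1 MiB
    for (int frame = 0; frame < 100; ++frame) {
        for (int i = 0; i < 1000; ++i) {
            void* mem = arena.allocate(sizeof(Particle));
            if (!mem) break;
            new (mem) Particle{};   // placement-new into the arena
        }
        arena.reset();              // all 1000 particles reclaimed at once
    }
}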
Of course, there are cases where RAII is better than GC; in particular, the GC concept is mostly concerned with memory, and that's not the only resource. Things like files, locks, etc. can be handled well with RAII. Languages without manual memory handling, like Python or Ruby, do have something like RAII for those cases, by the way (the with statement in Python). RAII is very useful when you need precise control over when the resource is freed, and that's quite often the case for files or locks, for example.
The committee isn't adding garbage collection; they are adding a couple of features that allow garbage collection to be implemented more safely. Only time will tell whether they actually have any effect whatsoever on future compilers. The specific implementations could vary widely, but will most likely involve reachability-based collection, which could involve a slight hang, depending on how it's done.
One thing, though: no standards-conformant garbage collector will be able to call destructors; it can only silently reuse lost memory.
What advantages could garbage collection offer an experienced C++ developer?
Not having to chase down resource leaks in your less-experienced colleagues' code.
It's an all-too-common error to assume that because C++ does not have garbage collection baked into the language, you can't use garbage collection in C++ period. This is nonsense. I know of elite C++ programmers who use the Boehm collector as a matter of course in their work.
Garbage collection allows you to postpone the decision about who owns an object.
C++ uses value semantics, so with RAII, indeed, objects are reclaimed when going out of scope. This is sometimes referred to as "immediate GC".
When your program starts using reference semantics (through smart pointers, etc.), the language no longer supports you; you're left to the wits of your smart pointer library.
The tricky thing about GC is deciding when an object is no longer needed.
Garbage collection makes RCU lockless synchronization much easier to implement correctly and efficiently.
Easier thread safety and scalability
There is one property of GC which may be very important in some scenarios. Assignment of a pointer is naturally atomic on most platforms, while creating thread-safe reference-counted ("smart") pointers is quite hard and introduces significant synchronization overhead. As a result, smart pointers are often said "not to scale well" on multi-core architectures.
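A rough sketch of where that overhead comes from, assuming a hand-rolled thread-safe reference count (real shared_ptr implementations do something broadly similar internally); the Widget and Counted names are made up for the example.

#include <atomic>

struct Widget { int value = 0; };

// Under a GC (or with a raw pointer), sharing is just a store,
// which is naturally atomic on most platforms:
Widget* g_raw = nullptr;

void publish_raw(Widget* w)
{
    g_raw = w;   // single store, no shared counter touched
}

// With thread-safe reference counting, every copy and every destruction
// of a handle must do an interlocked update on a shared counter, which
// is where the contention and synchronization overhead comes from:
class Counted {
    Widget* ptr_;
    std::atomic<long>* refs_;
public:
    explicit Counted(Widget* p) : ptr_(p), refs_(new std::atomic<long>(1)) {}

    Counted(const Counted& other) : ptr_(other.ptr_), refs_(other.refs_)
    {
        refs_->fetch_add(1, std::memory_order_relaxed);   // atomic increment per copy
    }

    Counted& operator=(const Counted&) = delete;   // assignment left out of the sketch

    ~Counted()
    {
        if (refs_->fetch_sub(1, std::memory_order_acq_rel) == 1) {
            delete ptr_;    // last handle frees the object
            delete refs_;
        }
    }

    Widget* get() const { return ptr_; }
};

int main()
{
    Counted a(new Widget);
    Counted b = a;    // copy: one atomic increment
}                     // destruction: two atomic decrements, one delete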
Garbage collection is really the basis for automatic resource management. And having GC changes the way you tackle problems in a way that is hard to quantify. For example when you are doing manual resource management you need to:
Consider when an item can be freed (are all modules/classes finished with it?)
Consider whose responsibility it is to free a resource when it is ready to be freed (which class/module should free this item?)
In the trivial case there is no complexity. E.g. you open a file at the start of a method and close it at the end. Or the caller must free this returned block of memory.
Things start to get complicated quickly when you have multiple modules that interact with a resource and it is not as clear who needs to clean up. The end result is that the whole approach to tackling a problem includes certain programming and design patterns which are a compromise.
In languages that have garbage collection you can use a disposable pattern where you can free resources you know you've finished with but if you fail to free them the GC is there to save the day.
Smart pointers are actually a perfect example of the compromises I mentioned. Smart pointers can't save you from leaking cyclic data structures unless you have a backup mechanism. To avoid this problem, you often compromise and avoid using a cyclic structure even though it may otherwise be the best fit.
I, too, have doubts that the C++ committee is adding full-fledged garbage collection to the standard.
But I would say that the main reason for adding/having garbage collection in a modern language is that there are too few good reasons against garbage collection. Since the eighties there have been several huge advances in the field of memory management and garbage collection, and I believe there are even garbage collection strategies that could give you soft-real-time-like guarantees (like, "GC won't take more than .... in the worst case").
using RAII with smart pointers eliminates the need for it, right?
Smart pointers can be used to implement reference counting in C++ which is a form of garbage collection (automatic memory management) but production GCs no longer use reference counting because it has some important deficiencies:
Reference counting leaks cycles. Consider A↔B: both objects A and B refer to each other, so they both have a reference count of 1 and neither is collected, even though both should be reclaimed. Advanced algorithms like trial deletion solve this problem but add a lot of complexity. Using weak_ptr as a workaround is falling back to manual memory management.
Naive reference counting is slow for several reasons. Firstly, it requires out-of-cache reference counts to be bumped often (see Boost's shared_ptr being up to 10× slower than OCaml's garbage collection). Secondly, destructors injected at the end of scope can incur unnecessary and expensive virtual function calls and inhibit optimizations such as tail call elimination.
Scope-based reference counting keeps floating garbage around, since objects are not recycled until the end of scope, whereas tracing GCs can reclaim them as soon as they become unreachable; for example, can a local allocated before a loop be reclaimed during the loop?
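A small illustration of that last point, using shared_ptr as the scope-based scheme; the explicit reset() is the manual hint a tracing GC would not need.

#include <memory>
#include <vector>

int sum(const std::vector<int>& v)
{
    int total = 0;
    for (int x : v) total += x;
    return total;
}

int work()
{
    auto big = std::make_shared<std::vector<int>>(1000000, 1);
    int total = sum(*big);   // last real use of *big

    big.reset();             // without this explicit reset, the buffer stays
                             // alive until the end of the function...
    for (int i = 0; i < 100; ++i) {
        total += i;          // ...even though this loop never touches it.
    }                        // a tracing GC could reclaim it as soon as it
                             // became unreachable, with no hint from the programmer
    return total;
}

int main() { work(); }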
What advantages could garbage collection offer an experienced C++ developer?
Productivity and reliability are the main benefits. For many applications, manual memory management requires significant programmer effort. By simulating an infinite-memory machine, garbage collection liberates the programmer from this burden, which allows them to focus on problem solving and avoids some important classes of bugs (dangling pointers, missing free, double free). Furthermore, garbage collection facilitates other forms of programming, e.g. by solving the upwards funarg problem (1970).
In a framework that supports GC, a reference to an immutable object such as a string may be passed around in the same way as a primitive. Consider the class (C# or Java):
public class MaximumItemFinder
{
    String maxItemName = "";
    int maxItemValue = -2147483647 - 1;

    public void AddAnother(int itemValue, String itemName)
    {
        if (itemValue >= maxItemValue)
        {
            maxItemValue = itemValue;
            maxItemName = itemName;
        }
    }

    public String getMaxItemName() { return maxItemName; }
    public int getMaxItemValue() { return maxItemValue; }
}
Note that this code never has to do anything with the contents of any of the strings, and can simply treat them as primitives. A statement like maxItemName = itemName; will likely generate two instructions: a register load followed by a register store. The MaximumItemFinder will have no way of knowing whether callers of AddAnother are going to retain any reference to the passed-in strings, and callers will have no way of knowing how long MaximumItemFinder will retain references to them. Callers of getMaxItemName will have no way of knowing if and when MaximumItemFinder and the original supplier of the returned string have abandoned all references to it. Because code can simply pass string references around like primitive values, however, none of those things matter.
Note also that while the class above would not be thread-safe in the presence of simultaneous calls to AddAnother, any call to GetMaxItemName would be guaranteed to return a valid reference to either an empty string or one of the strings that had been passed to AddAnother. Thread synchronization would be required if one wanted to ensure any relationship between the maximum-item name and its value, but memory safety is assured even in its absence.
I don't think there's any way to write a method like the above in C++ which would uphold memory safety in the presence of arbitrary multi-threaded usage without either using thread synchronization or else requiring that every string variable have its own copy of its contents, held in its own storage space, which may not be released or relocated during the lifetime of the variable in question. It would certainly not be possible to define a string-reference type which could be defined, assigned, and passed around as cheaply as an int.
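For comparison, here is a rough sketch of a straightforward C++ approximation (the names mirror the example above and are purely illustrative); the mutex and the reference-count traffic on every copy are exactly the costs this paragraph is describing, and a plain shared_ptr member read and written without the lock would not be memory-safe under concurrent calls.

#include <climits>
#include <memory>
#include <mutex>
#include <string>

class MaximumItemFinder {
    std::shared_ptr<const std::string> maxItemName_ =
        std::make_shared<const std::string>("");
    int maxItemValue_ = INT_MIN;
    mutable std::mutex m_;   // needed even just to read the name safely
public:
    void AddAnother(int itemValue, std::shared_ptr<const std::string> itemName)
    {
        std::lock_guard<std::mutex> lock(m_);
        if (itemValue >= maxItemValue_) {
            maxItemValue_ = itemValue;
            maxItemName_  = std::move(itemName);   // transfers/bumps a reference count
        }
    }

    std::shared_ptr<const std::string> getMaxItemName() const
    {
        std::lock_guard<std::mutex> lock(m_);
        return maxItemName_;   // copy: atomic reference-count increment
    }

    int getMaxItemValue() const
    {
        std::lock_guard<std::mutex> lock(m_);
        return maxItemValue_;
    }
};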
Garbage Collection Can Make Leaks Your Worst Nightmare
Full-fledged GC that handles things like cyclic references would be somewhat of an upgrade over a ref-counted shared_ptr. I would somewhat welcome it in C++, but not at the language level.
One of the beauties about C++ is that it doesn't force garbage collection on you.
I want to correct a common misconception: the myth that garbage collection somehow eliminates leaks. In my experience, the worst nightmares of debugging code written by others, and of trying to spot the most expensive logical leaks, involved garbage collection, with languages like embedded Python running inside a resource-intensive host application.
When talking about subjects like GC, there's theory and then there's practice. In theory it's wonderful and prevents leaks. Yet at that theoretical level every language is wonderful and leak-free, since in theory everyone would write perfectly correct code and test every single case where a piece of code could go wrong.
Garbage collection combined with less-than-ideal team collaboration caused the worst, hardest-to-debug leaks in our case.
The problem still has to do with ownership of resources. You have to make clear design decisions here when persistent objects are involved, and garbage collection makes it all too easy to think that you don't.
Given some resource, R, in a team environment where the developers aren't constantly communicating and reviewing each other's code carefully at all times (something a little too common in my experience), it becomes quite easy for developer A to store a handle to that resource. Developer B does as well, perhaps in an obscure way that indirectly adds R to some data structure. So does C. In a garbage-collected system, this has created three owners of R.
Because developer A was the one who created the resource originally and thinks of himself as its owner, he remembers to release his reference to R when the user indicates that it is no longer wanted. After all, if he failed to do so, it would be obvious from testing that the user-facing removal logic did nothing. So he remembers to release it, as any reasonably competent developer would. This triggers an event which B handles, and B also remembers to release his reference to R.
However, C forgets. He's not one of the stronger developers on the team: a somewhat fresh recruit who has only worked in the system for a year. Or maybe he's not even on the team, just a popular third-party developer writing plugins for our product that many users add to the software. With garbage collection, this is when we get those silent logical resource leaks. They're the worst kind: they do not necessarily manifest in the user-visible side of the software as an obvious bug, beyond the fact that the longer the program runs, the higher its memory usage climbs for some mysterious reason. Trying to narrow down these issues with a debugger can be about as fun as debugging a time-sensitive race condition.
Without garbage collection, developer C would have created a dangling pointer. He may try to access it at some point and cause the software to crash. Now that's a testing/user-visible bug. C gets embarrassed a bit and corrects his bug. In the GC scenario, just trying to figure out where the system is leaking may be so difficult that some of the leaks are never corrected. These are not valgrind-type physical leaks that can be detected easily and pinpointed to a specific line of code.
With garbage collection, developer C has created a very mysterious leak. His code may continue to access R which is now just some invisible entity in the software, irrelevant to the user at this point, but still in a valid state. And as C's code creates more leaks, he's creating more hidden processing on irrelevant resources, and the software is not only leaking memory but also getting slower and slower each time.
So garbage collection does not necessarily mitigate logical resource leaks. It can, in less-than-ideal scenarios, make it far easier for leaks to go unnoticed and remain in the software. The developers might get so frustrated trying to trace down their GC logical leaks that they simply tell their users to restart the software periodically as a workaround. It does eliminate dangling pointers, and in safety-obsessed software where crashing is completely unacceptable under any scenario, I would prefer GC. But I'm often working on less safety-critical but resource-intensive, performance-critical products, where a crash that can be fixed promptly is preferable to a really obscure and mysterious silent bug, and resource leaks are not trivial bugs there.
In both of these cases, we're talking about persistent objects not residing on the stack, like a scene graph in a 3D software or the video clips available in a compositor or the enemies in a game world. When resources tie their lifetimes to the stack, both C++ and any other GC language tend to make it trivial to manage resources properly. The real difficulty lies in persistent resources referencing other resources.
In C or C++, you can have dangling pointers and crashes resulting from segfaults if you fail to clearly designate who owns a resource and when handles to it should be released (e.g. set to null in response to an event). Yet with GC, that loud and obnoxious but often easy-to-spot crash is exchanged for a silent resource leak that may never be detected.