Dynamic allocation and large data structures in C++ - c++

Lately, I learned there is a consensus among C++ programmers that the new, delete and delete[] operators should be avoided whenever possible, as already discussed here, here or here. While searching, I even stumbled upon an April Fools' joke stating that these operators would become deprecated in C++20.
I happen to write and maintain a C/C++ program, written in these languages in order to reuse useful libraries and classes written by other programmers. As it must run in quite limited environments (i.e., old Linux distributions with the bare minimum in terms of programs), I can't rely on features brought by C++11 and later versions (such as smart pointers), and I have stuck so far to a mix of C and Java programming habits while expanding my program. Among others, I used dynamic allocation with new and delete quite often - which sounds, of course, like a problem.
To ease the maintenance of my code by future programmer(s), I would like to minimize dynamic allocation with said keywords in my code. The problem is that my program has to manage some quite large data structures used for (almost) the entire execution. As a consequence, I struggle to figure out why I should avoid dynamic allocation in these situations.
To simplify, consider I have a data structure (modeled as an object) of about 10 megabytes which is used for the entire execution of the program and whose size in memory can increase over time. My questions are the following:
Is dynamic allocation of the object with new still a bad practice in this particular context? What are the better alternatives?
Suppose now I instantiate somewhere in my program a new object without using new, do some operations on it which could slightly change its size, then use a method to insert it in my data structure. How does it work, memory-wise, if automatic allocation (as mentioned here) is used ?
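To make question 2 concrete, here is a minimal sketch of what I mean (the Record type and its contents are purely illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical "large" record; names are illustrative only.
struct Record {
    std::string payload;
};

std::vector<Record> table;  // the long-lived data structure

void add_record() {
    Record r;                 // automatic storage: lives on the stack
    r.payload = "some data";  // may grow: the string's buffer is on the heap
    table.push_back(r);       // copied (or moved) into heap storage the vector owns
}                             // r is destroyed here; the copy in `table` survives
```

Memory-wise, r itself lives on the stack, any heap buffers it owns (the string's storage) live on the heap, and push_back copies the object into heap storage owned by the vector.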
Many thanks in advance.

You could try using boost smart pointers.
https://www.boost.org/doc/libs/1_71_0/libs/smart_ptr/doc/html/smart_ptr.html
They are very similar to the C++ 11 smart pointers, but available as a library which should work in pre C++ 11 environments. If you decide to go this way, you may also want to look at this
How to include only BOOST smart pointer codes into a project?
Now, I'm wondering if you are confusing the new/delete operators with dynamic allocation in general. Smart pointers are still dynamic allocation; however, they are able to clean themselves up so that you don't have to remember to. That's why they are preferred over using new/delete (malloc/free, etc.): they are much less likely to lead to memory leaks.
Automatic allocation is fine when the object lifespan allows it, but if you need the object to persist outside of the function it was declared in, you'll need dynamically allocated memory.
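As a sketch of that distinction (assuming nothing beyond the standard library): returning by value lets an automatically allocated object outlive the function that created it, without any explicit dynamic allocation on the caller's part:

```cpp
#include <cassert>
#include <vector>

// Automatic storage is fine when the object's lifetime fits the scope;
// returning by value hands the result to the caller without any `new`.
std::vector<int> make_squares(int n) {
    std::vector<int> v;           // automatic object; its elements live on
    for (int i = 0; i < n; ++i)   // the heap, but the vector frees them itself
        v.push_back(i * i);
    return v;                     // copied (or moved) out; no delete needed
}
```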

Is dynamic allocation of the object with new still a bad practice in this particular context?
Yes, it's bad to have a new in your code mainly because of memory leaks if delete or delete[] isn't called, especially in the face of exceptions.
What are the better alternatives?
Write smart pointers and have them either call new themselves or be called with a new expression:
myUniquePtr<myObj> obj_ptr(new myObj);
and then use RAII to have their destructors call delete or delete[].
Make your smart pointer behave as a naked pointer in every other way (overload operator-> etc.). Then it'll work in your code with the same syntax as your raw pointers did, without having to worry about whether a heap object is deleted at the end of its life-cycle, no matter what paths the code can take.
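A minimal sketch of such a smart pointer, written in C++98 style since the question rules out C++11 (ScopedPtr and MyObj are illustrative names, not a real library):

```cpp
#include <cassert>

// Minimal C++98-style scoped pointer sketch.
// RAII: the destructor calls delete, so every code path releases the object.
template <typename T>
class ScopedPtr {
public:
    explicit ScopedPtr(T* p) : p_(p) {}
    ~ScopedPtr() { delete p_; }
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
    T* get() const { return p_; }
private:
    ScopedPtr(const ScopedPtr&);             // non-copyable (C++98 idiom:
    ScopedPtr& operator=(const ScopedPtr&);  // declared private, never defined)
    T* p_;
};

struct MyObj { int value; MyObj() : value(42) {} };

int use() {
    ScopedPtr<MyObj> obj_ptr(new MyObj);  // heap allocation, owned by RAII
    return obj_ptr->value;                // same syntax as a raw pointer
}                                         // delete runs here, even on exceptions
```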

Related

What does it take to write memory safe C++ applications?

Is it possible to either create a coding standard or use of a library that can be proved to eliminate any memory management errors in C++?
I'm thinking of something like Java, where it is impossible to, for example, have dangling pointers in applications.
Is it possible to either create a coding standard or use of a library that can be proved to eliminate any memory management errors in C++?
Yes and no.
Even if you use a very strict standard, doing so will limit you to a very narrow subset of the C++ language. For example, the Power of Ten (Rules for Developing Safety-Critical Code) says that you should disable heap usage entirely. However that alone doesn't stop you from creating memory corruption.
I think if there were an exact answer to this question, the industry would've solved this decades ago, but here we are...
I don't believe that there is a definite way to make sure your code is totally safe, but there are best practices which will help you make sure there are as few problems as possible.
Here are some suggestions:
As mentioned earlier, disallowing heap usage entirely might help you get rid of all the memory management problems, but it doesn't solve the problem completely because it doesn't save you from, e.g., stray pointer writes.
I recommend you read about the Rule of Three, Five and Zero, which explains some of the stuff you need to take care of.
Instead of managing memory on your own, use smart pointers like shared_ptr, unique_ptr, etc. But of course, you could still abuse these if you wanted to. (For example, shared_ptr will not help you if you have circular references...)
Use memory checker tools like valgrind, which can help you discover problems such as leaks and invalid memory accesses.
Even if you keep to any coding standard or best practice, errors can and will happen. Nobody guarantees that you will be safe. However, by keeping to these suggestions you can minimize the chance and impact of errors.
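As an illustration of the circular-reference caveat above, here is a small sketch (Node is illustrative) showing how a weak_ptr breaks an ownership cycle that two shared_ptrs would otherwise form:

```cpp
#include <cassert>
#include <memory>

// Two nodes that point at each other through shared_ptr keep each other
// alive forever. Making one side a weak_ptr breaks the ownership cycle.
struct Node {
    std::shared_ptr<Node> next;  // strong: owns the pointee
    std::weak_ptr<Node> prev;    // weak: observes without owning
};

bool demo() {
    std::shared_ptr<Node> a = std::make_shared<Node>();
    std::shared_ptr<Node> b = std::make_shared<Node>();
    a->next = b;  // a owns b
    b->prev = a;  // b only observes a; no cycle of ownership
    // a is owned only by the local variable; b by the local variable and a->next.
    return a.use_count() == 1 && b.use_count() == 2;
}
```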
Is it possible to either create a coding standard or use of a library that can be proved to eliminate any memory management errors in C++?
No.
But the same is true for Java. While Java does not technically allow memory leaks, it does have them (and other resource leaks) in practice if you do not pay attention.
A classical example, known particularly well in the Android world, is when a collection of listener instances keeps growing the longer a program runs, because listeners forget to unregister themselves. This can quickly cause hundreds of MBs leaking in a GUI application when the listener is an instance of some window or view class that itself keeps references to big graphics. Since all of that memory is still reachable, garbage collection cannot clean it up.
The fact that you have not technically lost the pointer (it's still in the collection) does not help you at all. Quite the contrary; it's the cause of the leak because it prevents garbage collection.
In the same vein as above, while Java does not technically allow dangling pointers, similar bugs can cause you to access a pointer to some window or view object which is still in a valid memory area for your program but which should no longer exist and is no longer visible. While the pointer access itself does not cause any crash or problem, other kinds of errors or crashes (like a NullPointerException) usually follow soon enough because the program logic is messed up.
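The same "reachable leak" pattern is easy to reproduce in C++ with reference counting. In this sketch (all names are illustrative), listeners pushed into a registry and never removed stay alive for the registry's whole lifetime:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Stands in for an object holding large resources (e.g. graphics).
struct Listener {
    std::vector<char> big_graphics;
    Listener() : big_graphics(1024) {}
};

std::vector<std::shared_ptr<Listener> > registry;

void open_window() {
    std::shared_ptr<Listener> l(new Listener);
    registry.push_back(l);  // registered...
}                           // ...but never unregistered: still reachable,
                            // so the memory is never reclaimed
```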
So much for the bad news.
The good news is that both languages allow you to reduce memory-management problems if you follow simple guidelines. As far as C++ is concerned, this means:
Use standard collections (like std::vector or std::set) whenever you can.
Make dynamic allocation your second choice. The first choice should always be to create a local object.
If you must use dynamic allocation, use std::unique_ptr.
If all else fails, consider std::shared_ptr.
Use a bare new only if you implement some low-level container class because the existing standard container classes (like std::vector or std::set) do not work for your use case. This should be an extremely rare case, though.
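The preference order above can be sketched as follows (a toy example, not a recommendation of these particular containers):

```cpp
#include <cassert>
#include <memory>
#include <set>
#include <vector>

int demo() {
    // 1st choice: a local object / standard container.
    std::vector<int> local;
    local.push_back(7);

    // 2nd choice, if dynamic allocation is unavoidable: unique_ptr.
    std::unique_ptr<std::set<int> > owned(new std::set<int>);
    owned->insert(7);

    // Last resort, for genuinely shared ownership: shared_ptr.
    std::shared_ptr<int> shared = std::make_shared<int>(7);

    return local[0] + *owned->begin() + *shared;
}
```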
There is also the Boehm garbage collector for C++, but I've never used it personally.

What are the advantages of Cocos2d-x custom memory model?

As I recently began developing in Cocos2d, one of the first features that I found very peculiar was the Objective-C style autorelease pool memory model. In all my experience with c++, I have avoided using any form of dynamic memory allocation unless ABSOLUTELY necessary (which is actually very rare).
At first, I was puzzled as to why Cocos2D did not take advantage of safer alternatives for creating pointer objects (e.g. smart pointers), but then I came across this thread, which discussed the disadvantages of shared_ptr<class T> (the most significant of which was speed) over the current memory paradigm with respect to manual retain / release methods.
Then I thought, "why not simply allocate an object regularly and pass and store its reference when necessary?" I understand that it would be extremely time-consuming to port Cocos2d-x's entire memory system to a new paradigm, but in the long run isn't it worth using more stable idiomatic c++ code?
What it all comes down to is what are the advantages of the current memory model as opposed to regular object allocation?
Memory management in game engines is a very specific topic, especially if you want to keep your engine simple to use. If you take a look at Unreal Engine 4, they've gone much further in their memory management approach by generating reflection code. Generally speaking, it is possible to create a cocos2d-x game without ever explicitly calling retain or release. Those methods are primarily used when you need to manually extend the lifetime of an object to avoid deleting it and creating it again (caching).
Shared pointers will make the syntax much more complex and will cause extra difficulties with dynamic casts and binding pointers as arguments. And what is more important, you will have to use a weak_ptr along with shared_ptr to avoid cross references, which also takes extra effort.
Basically shared_ptr is a reference counting technique, like intrusive_ptr, which can be more naturally integrated into cocos2d-x. And functions like addChild/removeChild in cocos increment and decrement the reference counter of the child object, so these paradigms are not as different as they may seem at first glance.
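A minimal sketch of that intrusive retain/release scheme (loosely modeled on Cocos2d-x's Ref; the classes here are illustrative, not the real API):

```cpp
#include <cassert>

// Intrusive reference counting: the count lives inside the object itself.
class Ref {
public:
    Ref() : count_(1) {}
    virtual ~Ref() {}
    void retain() { ++count_; }
    void release() { if (--count_ == 0) delete this; }
    int referenceCount() const { return count_; }
private:
    int count_;
};

// addChild-style ownership: the container retains the child and
// releases it when it is destroyed.
class Container : public Ref {
public:
    Container() : child_(0) {}
    ~Container() { if (child_) child_->release(); }
    void addChild(Ref* c) { c->retain(); child_ = c; }
private:
    Ref* child_;
};

int demo() {
    Ref* child = new Ref();              // count = 1
    Container* parent = new Container();
    parent->addChild(child);             // count = 2
    int during = child->referenceCount();
    parent->release();                   // parent destroyed; child count = 1
    int after = child->referenceCount();
    child->release();                    // child destroyed
    return during * 10 + after;
}
```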
I do not believe that Cocos2d-x manages Ref derived objects using an Objective-C paradigm for any technical reason. c++ smart pointers (or "regular object allocation") would probably work just as well if not better.
However, historically, Cocos2d-x is a c++ port of the Cocos2d project - which was native Objective-C (which was, iirc, a port of an earlier Python based game engine). So the use of an AutoReleasePool, and 'retain' and 'release' methods is historical.
Changing it now would break a lot of code and annoy a lot of developers and for what end? A slightly more idiomatic use of c++ stl?
The important thing is that the management of Ref objects is easy, arguably a lot easier than trying to teach developers to wrap things in the correct *_ptr<>, and allows the Cocos2d-x project - by concealing most of the memory management - to maintain feature parity with Cocos2d-js. (I think there is also an attempt to keep api compatibility with Cocos2d).

How should C++ libraries allow custom allocators?

In C, it's simple for a library to allow the user to customize memory allocation by using global function pointers to a function that should behave similarly to malloc() and to a function that should behave similarly to free(). SQLite, for example, uses this approach.
C++ complicates things a bit because allocation and initialization are usually fused. Essentially we want to get the behavior of having overridden operator new and operator delete for only a library but there's no way to actually do that (I'm fairly certain but not quite 100%).
How should this be done in C++?
Here's a first stab at something that replicates some of the semantics of new expressions with a function Lib::make<T>.
I don't know if this is so useful, but just for fun, here's a more complicated version that also tries to replicate the semantics of new[] expressions.
This is a goal oriented question so I'm not necessarily looking for code review. If there's some better way to do this just say so and ignore the links.
(By "allocator" I only mean something that allocates memory. I'm not referring to the STL allocator concept or even necessarily allocating memory for containers.)
Why this might be desirable:
Here's a blog post from a Mozilla dev arguing that libraries should do this. He gives a few examples of C libraries that allow the library user to customize allocation for the library. I checked out the source code for one of the examples, SQLite, and see that this feature is also used internally for testing via fault injection. I'm not writing anything that needs to be as bulletproof as SQLite but it still seems like a sensible idea. If nothing else, it allows client code to figure out, "Which library is hogging my memory and when?".
Simple answer: don't use C++. Sorry, joke.
But if you want to take this kind of absolute control over memory management in C++, across libraries/module boundaries, and in a completely generalized way, you can be in for some terrible grief. I'd suggest to most to look for reasons not to do it more than ways to do it.
I've gone through many iterations of this same basic idea over the years (actually decades), from trying to naively overload operator new/new[]/delete/delete[] at a global level to linker-based solutions to platform-specific solutions, and I'm actually at the desired point you are at now: I have a system that allows me to see the amount of memory allocated per plugin. But I didn't reach this point through the kind of generalized way that you desire (and me as well, originally).
C++ complicates things a bit because allocation and initialization are usually fused.
I would offer a slight twist to this statement: C++ complicates things because initialization and allocation are usually fused. All I did was swap the order here, but the most complicating part is not that allocation wants to initialize, but because initialization often wants to allocate.
Take this basic example:
struct Foo
{
std::vector<Bar> stuff;
};
In this case, we can easily allocate Foo through a custom memory allocator:
void* mem = custom_malloc(sizeof(Foo));
Foo* foo = new(mem) Foo;
...
foo->~Foo();
custom_free(mem);
... and of course we can wrap this all we like to conform to RAII, achieve exception-safety, etc.
Except now the problem cascades. That stuff member using std::vector will want to use std::allocator, and now we have a second problem to solve. We could use a template instantiation of std::vector using our own allocator, and if you need runtime information passed to the allocator, you can override Foo's constructors to pass that information along with the allocator to the vector constructor.
But what about Bar? Its constructor may also want to allocate memory for a variety of disparate objects, and so the problem cascades and cascades and cascades.
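One hedged sketch of pushing a custom allocation function down into a container, using a minimal C++11 allocator (custom_malloc/custom_free and the byte bookkeeping are stand-ins for a library's hooks, not a real API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// Illustrative bookkeeping: tracks how many bytes went through our hook.
static std::size_t g_bytes = 0;

void* custom_malloc(std::size_t n) { g_bytes += n; return std::malloc(n); }
void  custom_free(void* p)         { std::free(p); }

// Minimal C++11 allocator: value_type + allocate/deallocate is enough,
// std::allocator_traits fills in the rest.
template <typename T>
struct CustomAlloc {
    typedef T value_type;
    CustomAlloc() {}
    template <typename U> CustomAlloc(const CustomAlloc<U>&) {}
    T* allocate(std::size_t n) {
        if (void* p = custom_malloc(n * sizeof(T))) return static_cast<T*>(p);
        throw std::bad_alloc();
    }
    void deallocate(T* p, std::size_t) { custom_free(p); }
};
template <typename T, typename U>
bool operator==(const CustomAlloc<T>&, const CustomAlloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const CustomAlloc<T>&, const CustomAlloc<U>&) { return false; }
```

With this, std::vector<int, CustomAlloc<int>> routes its element storage through custom_malloc, which is exactly the point where the cascade described above has to be handled member by member.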
Given the difficulty of this problem, and the alternative, generalized solutions I've tried and the grief associated when porting, I've settled on a completely de-generalized, somewhat pragmatic approach.
The solution I settled on is to effectively reinvent the entire C and C++ standard library. Disgusting, I know, but I had a bit more of an excuse to do it in my case. The product I'm working on is effectively an engine and software development kit, designed to allow people to write plugins for it using any compiler, C runtime, C++ standard library implementation, and build settings they desire. To allow things like vectors or sets or maps to be passed through these central APIs in an ABI-compatible way required rolling our own standard-compliant containers in addition to a lot of C standard functions.
The entire implementation of this devkit then revolves around these allocation functions:
EP_API void* ep_malloc(int lib_id, int size);
EP_API void ep_free(int lib_id, void* mem);
... and the entirety of the SDK revolves around these two, including memory pools and "sub-allocators".
For third party libraries outside of our control, we're just SOL. Some of those libraries have equally ambitious things they want to do with their memory management, and to try to override that would just lead to all kinds of clashes and open up all kinds of cans of worms. There are also very low-level drivers when using things like OGL that want to allocate a lot of system memory, and we can't do anything about it.
Yet I've found this solution to work well enough to answer the basic question: "who/what is hogging up all this memory?" very quickly: a question which is often much more difficult to answer than a similar one related to clock cycles (for which we can just fire up any profiler). It only applies for code under our control, using this SDK, but we can get a very thorough memory breakdown using this system on a per-module basis. We can also set superficial caps on memory use to make sure that out of memory errors are actually being handled correctly without actually trying to exhaust all contiguous pages available in the system.
So in my case this problem was solved via policy: by building a uniform coding standard and a central library conforming to it that's used throughout the codebase (and by third parties writing plugins for our system). It's probably not the answer you are looking for, but this ended up being the most practical solution we've found yet.

Why can't C++ have an optional transparent garbage collector

There's a related question but this one is slightly different and I'm not happy with any of the answers to the related question :)
I'm going to ask this question in the negative by asserting it is not possible to have an optional transparent garbage collector for C++ and hoping someone will prove me wrong. Yes, Stroustrup tried this on and has repeatedly failed not because of technical issues but because of conformance issues. Performance is not an issue here.
The reason C++ will never have such a collector is that being optional a program which runs without the collector must implement all the required memory management manually. Adding a collector may then provide some performance benefits, but it isn't clear they're worthwhile (yes, a collector can be faster).
What you cannot obtain is automatic memory management, which is the principal reason for desiring a collector. You would get this with mandatory collection (without necessarily sacrificing RAII or other things if you choose to do correct manual management). A mandatory collector with optional manual memory management is tenable.
Unfortunately, the only way to get a mandatory collector creates an incompatibility with earlier versions of C++ not using a collector: in other words we have to define a new language if we want automatic transparent memory management.
So my contention is: C++ will never have garbage collection because it is locked into a historical development which requires upward compatibility: mandatory collection with optional manual memory management is viable but transparent optional garbage collection is not.
Prove me wrong by exhibiting a tenable optional transparent garbage collection model!
EDIT:
Oooo .. I think I have the answer. Can someone quote the Standard where it requires programs to delete heap allocated objects?
Because: that clause, if it exists, is the only thing stopping optional transparent garbage collection. There may even be enough time to get that clause removed from C++1x.
Without such a clause, a program can leak memory without the behaviour being undefined: the behaviour when out of memory is just the same as it usually is. And so tacking on a garbage collector will do nothing to the specified semantics: they're well defined or not, independently of whether the collector is used or not.
Prove me wrong by exhibiting a tenable optional transparent garbage collection model!
See: C++/CLI.
The difficulty with putting a garbage collector with existing C++ code is that C++ often relies on deterministic object destruction in order to make things happen; as is done in RAII. Sure, the garbage collector would be able to make most kinds of memory RAII transparent, but plenty of RAII related concepts don't have anything to do with memory. For example, sockets, streams, and locks all are amenable to some form of RAII management, and none of these would work well if deterministic destruction was not preserved.
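A small sketch of why deterministic destruction matters for non-memory resources: with RAII, the lock below is released at the closing brace, not whenever a collector eventually runs:

```cpp
#include <cassert>
#include <mutex>

int counter = 0;
std::mutex m;

void increment() {
    std::lock_guard<std::mutex> guard(m);  // lock acquired here
    ++counter;
}                                          // lock released here, deterministically
```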
Therefore, it probably won't be "transparent" -- it'd have to be something like C++/CLI where you have to say "I want this to be garbage collected" -- but it's by all means reasonable and possible.
This may be better suited as a comment rather than an answer, and may draw many downvotes. So be it.
Whenever someone asks a question like "Why can't C++ have GC?" I think to myself "because I don't want your damned garbage collection. I want to control when objects live. I want to control when objects die. I want destruction and deallocation to be deterministic, not based on some hokus pokus black magic. I don't need GC to write better programs. Therefore, GC will do nothing for me but get in my way."
But even beyond that, consider this. C# and the other .NET languages have GC built in. The compilers and the CLR for these languages were written primarily in C++. This includes the memory management facilities, except for a few performance-critical pieces written in assembler.
So you might say that anything that C# can do, C++ can do, since C++ begat C#.
Go ahead, downvote away...
"Why doesn't my Lamborghini have a snow plow blade mount?" Because it's not designed for snow removal ;)
C++ wasn't designed like C# and has different uses. Use the right tool for the right job and life is much easier.
Can I prove you wrong by giving examples of optional transparent garbage collectors for C++?
boehmgc
libgc
There's also a good discussion on the Boehm site that should be required reading for questions like this.
I think that you're taking the wrong approach. There's no reason that a GC should be transparent- why not have a std::gc_pointer<T>?
You need to consider the genuine purpose of a GC. This isn't to solve memory management, because the existing smart pointers (in C++0x) solve this just fine - it's to offer a different performance characteristic to manual memory management, that's very suitable for temporary allocations. Why not just have a std::gc_new? We already have a std::make_shared.
And, in C++0x, it is already implementation defined whether or not undeleted objects are deleted automatically.
The real problem is not so much ensuring object destruction happens deterministically (that could probably be done without too much trouble: when an object goes out of scope (or delete is called in the case of heap-allocated objects), its destructor can be called, while the actual reclaiming of memory can be left until a later garbage collection) -- but rather how to identify what to collect.
To do that, the GC needs to be able to traverse the object graph.
In "properly" GC'ed languages, that's simple enough, as every object is tagged with a type pointer of some kind, allowing the GC to know the structure of the object it is visiting.
In C++, there is usually no such thing. There is no way for the GC to know whether the word it is looking at is a pointer or not, and equally important, whether or not the next word is part of the same structure/array, or if it is unallocated.
Of course, the standard doesn't prohibit an implementation from adding such type information, but that would carry a cost in runtime performance and memory usage, which is incompatible with C++'s "you only pay for what you use" philosophy.
An alternative option, taken by the GCs that exist, is to implement a conservative GC, which might not reclaim all memory because it has to guess whether a word is a pointer or not, and when in doubt, it has to be pessimistic.

Why doesn't C++ have a garbage collector?

I'm not asking this question because of the merits of garbage collection first of all. My main reason for asking this is that I do know that Bjarne Stroustrup has said that C++ will have a garbage collector at some point in time.
With that said, why hasn't it been added? There are already some garbage collectors for C++. Is this just one of those "easier said than done" type things? Or are there other reasons it hasn't been added (and won't be added in C++11)?
Cross links:
Garbage collectors for C++
Just to clarify, I understand the reasons why C++ didn't have a garbage collector when it was first created. I'm wondering why the collector can't be added in.
Implicit garbage collection could have been added in, but it just didn't make the cut. Probably due to not just implementation complications, but also due to people not being able to come to a general consensus fast enough.
A quote from Bjarne Stroustrup himself:
I had hoped that a garbage collector which could be optionally enabled would be part of C++0x, but there were enough technical problems that I have to make do with just a detailed specification of how such a collector integrates with the rest of the language, if provided. As is the case with essentially all C++0x features, an experimental implementation exists.
There is a good discussion of the topic here.
General overview:
C++ is very powerful and allows you to do almost anything. For this reason it doesn't automatically push many things onto you that might impact performance. Garbage collection can be easily implemented with smart pointers (objects that wrap pointers with a reference count, which auto delete themselves when the reference count reaches 0).
C++ was built with competitors in mind that did not have garbage collection. Efficiency was the main concern that C++ had to fend off criticism from in comparison to C and others.
There are 2 types of garbage collection...
Explicit garbage collection:
C++0x has garbage collection via pointers created with shared_ptr
If you want it you can use it, if you don't want it you aren't forced into using it.
For versions before C++0x, boost::shared_ptr exists and serves the same purpose.
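A short sketch of that "explicit garbage collection" in action: the object is destroyed exactly when the last shared_ptr owning it goes away (Tracked is an illustrative type):

```cpp
#include <cassert>
#include <memory>

// Counts live instances so the automatic cleanup is observable.
struct Tracked {
    static int alive;
    Tracked()  { ++alive; }
    ~Tracked() { --alive; }
};
int Tracked::alive = 0;

void demo() {
    std::shared_ptr<Tracked> a(new Tracked);  // count = 1
    {
        std::shared_ptr<Tracked> b = a;       // count = 2
        assert(a.use_count() == 2);
    }                                         // b gone: count back to 1
    assert(Tracked::alive == 1);
}                                             // a gone: Tracked deleted
```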
Implicit garbage collection:
It does not have transparent garbage collection, though. That will be a focus point for future C++ specs.
Why Tr1 doesn't have implicit garbage collection?
There are a lot of things that TR1 of C++0x should have had; Bjarne Stroustrup stated in previous interviews that TR1 didn't have as much as he would have liked.
To add to the debate here.
There are known issues with garbage collection, and understanding them helps understanding why there is none in C++.
1. Performance ?
The first complaint is often about performance, but most people don't really realize what they are talking about. As illustrated by Martin Beckett the problem may not be performance per se, but the predictability of performance.
There are currently 2 families of GC that are widely deployed:
Mark-And-Sweep kind
Reference-Counting kind
The Mark And Sweep is faster (less impact on overall performance) but it suffers from a "freeze the world" syndrome: i.e. when the GC kicks in, everything else is stopped until the GC has made its cleanup. If you wish to build a server that answers in a few milliseconds... some transactions will not live up to your expectations :)
The problem of Reference Counting is different: reference counting adds overhead, especially in multi-threading environments, because you need an atomic count. Furthermore, there is the problem of reference cycles, so you need a clever algorithm to detect and eliminate them (generally implemented by a "freeze the world" too, though less frequent). In general, as of today, this kind (even though normally more responsive, or rather, freezing less often) is slower than Mark And Sweep.
I have seen a paper by Eiffel implementers who were trying to implement a Reference Counting garbage collector that would have similar overall performance to Mark And Sweep without the "Freeze The World" aspect. It required a separate thread for the GC (typical). The algorithm was a bit frightening (at the end), but the paper did a good job of introducing the concepts one at a time and showing the evolution of the algorithm from the "simple" version to the full-fledged one. Recommended reading, if only I could put my hands back on the PDF file...
2. Resource Acquisition Is Initialization (RAII)
It's a common idiom in C++: you wrap the ownership of resources within an object to ensure that they are properly released. It's mostly used for memory, since we don't have garbage collection, but it's useful in many other situations as well:
locks (multi-thread, file handle, ...)
connections (to a database, another server, ...)
The idea is to properly control the lifetime of the object:
it should be alive as long as you need it
it should be killed when you're done with it
The problem of GC is that while it helps with the former and ultimately guarantees the latter... this "ultimately" may not be sufficient. If you release a lock, you'd really like it to be released now, so that it does not block any further calls!
Languages with GC have two workarounds:
don't use GC when stack allocation is sufficient: it's normally for performance issues, but in our case it really helps since the scope defines the lifetime
the using construct... but it's explicit (weak) RAII, while in C++ RAII is implicit, so that the user CANNOT unwittingly make the error (by omitting the using keyword)
3. Smart Pointers
Smart pointers often appear as a silver bullet to handle memory in C++. Often times I have heard: we don't need GC after all, since we have smart pointers.
One could not be more wrong.
Smart pointers do help: auto_ptr and unique_ptr use RAII concepts, extremely useful indeed. They are so simple that you can write them by yourself quite easily.
When one needs to share ownership, however, it gets more difficult: you might share among multiple threads and there are a few subtle issues with the handling of the count. Therefore, one naturally goes toward shared_ptr.
It's great, that's what Boost is for after all, but it's not a silver bullet. In fact, the main issue with shared_ptr is that it emulates a GC implemented by reference counting, but you need to implement the cycle detection all by yourself... Urg
Of course there is this weak_ptr thingy, but I have unfortunately already seen memory leaks despite the use of shared_ptr because of those cycles... and when you are in a Multi Threaded environment, it's extremely difficult to detect!
4. What's the solution ?
There is no silver bullet, but as always, it's definitely feasible. In the absence of GC, one needs to be clear on ownership:
prefer having a single owner at one given time, if possible
if not, make sure that your class diagram does not have any cycle pertaining to ownership and break them with subtle application of weak_ptr
So indeed, it would be great to have a GC... however it's no trivial issue. And in the mean time, we just need to roll up our sleeves.
What type? Should it be optimised for embedded washing machine controllers, cell phones, workstations or supercomputers?
Should it prioritise GUI responsiveness or server loading?
Should it use lots of memory or lots of CPU?
C/C++ is used in just too many different circumstances.
I suspect something like boost smart pointers will be enough for most users
Edit - Automatic garbage collectors aren't so much a problem of performance (you can always buy more servers), it's a question of predictable performance.
Not knowing when the GC is going to kick in is like employing a narcoleptic airline pilot: most of the time they are great, but just when you really need responsiveness, they may nod off!
One of the biggest reasons that C++ doesn't have built-in garbage collection is that getting garbage collection to play nice with destructors is really, really hard. As far as I know, nobody really knows how to solve it completely yet. There are a lot of issues to deal with:
deterministic lifetimes of objects (reference counting gives you this, but GC doesn't, although it may not be that big of a deal).
what happens if a destructor throws when the object is being garbage collected? Most languages ignore this exception, since there's really no catch block to transport it to, but this is probably not an acceptable solution for C++.
How do you enable/disable it? Naturally it'd probably be a compile-time decision, but code that is written for GC vs. code that is written without GC is going to be very different and probably incompatible. How do you reconcile this?
These are just a few of the problems faced.
Though this is an old question, there's still one problem that I don't see anybody having addressed at all: garbage collection is almost impossible to specify.
In particular, the C++ standard is quite careful to specify the language in terms of externally observable behavior, rather than how the implementation achieves that behavior. In the case of garbage collection, however, there is virtually no externally observable behavior.
The general idea of garbage collection is that it should make a reasonable attempt at assuring that a memory allocation will succeed. Unfortunately, it's essentially impossible to guarantee that any memory allocation will succeed, even if you do have a garbage collector in operation. This is true to some extent in any case, but particularly so in the case of C++, because it's (probably) not possible to use a copying collector (or anything similar) that moves objects in memory during a collection cycle.
If you can't move objects, you can't create a single, contiguous memory space from which to do your allocations -- and that means your heap (or free store, or whatever you prefer to call it) can, and probably will, become fragmented over time. This, in turn, can prevent an allocation from succeeding, even when there's more memory free than the amount being requested.
It might be possible to come up with some guarantee that says (in essence) that if you repeat exactly the same pattern of allocation repeatedly, and it succeeded the first time, it will continue to succeed on subsequent iterations, provided the allocated memory became inaccessible between iterations. But that's such a weak guarantee it's essentially useless, and I can't see any reasonable hope of strengthening it.
Even so, it's stronger than what has been proposed for C++. The previous proposal [warning: PDF] (that got dropped) didn't guarantee anything at all. In 28 pages of proposal, what you got in the way of externally observable behavior was a single (non-normative) note saying:
[ Note: For garbage collected programs, a high quality hosted implementation should attempt to maximize the amount of unreachable memory it reclaims. —end note ]
At least for me, this raises a serious question about return on investment. We're going to break existing code (nobody's sure exactly how much, but definitely quite a bit), place new requirements on implementations and new restrictions on code, and what we get in return is quite possibly nothing at all?
Even at best, what we get are programs that, based on testing with Java, will probably require around six times as much memory to run at the same speed they do now. Worse, garbage collection was part of Java from the beginning - C++ places enough additional restrictions on the garbage collector that it would almost certainly have an even worse cost/benefit ratio (even if we go beyond what the proposal guaranteed and assume there would be some benefit).
I'd summarize the situation mathematically: this is a complex situation. As any mathematician knows, a complex number has two parts: real and imaginary. It appears to me that what we have here are costs that are real, but benefits that are (at least mostly) imaginary.
If you want automatic garbage collection, there are good commercial
and public-domain garbage collectors for C++. For applications where
garbage collection is suitable, C++ is an excellent garbage collected
language with a performance that compares favorably with other garbage
collected languages. See The C++ Programming Language (4th
Edition) for a discussion of automatic garbage collection in C++.
See also, Hans-J. Boehm's site for C and C++ garbage collection (archive).
Also, C++ supports programming techniques that allow memory
management to be safe and implicit without a garbage collector. I consider garbage collection a last choice and an imperfect way of handling resource management. That does not mean it is never useful, just that there are better approaches in many situations.
Source: http://www.stroustrup.com/bs_faq.html#garbage-collection
As for why it doesn't have it built in: if I remember correctly, the language was invented before GC was a mainstream thing, and I don't believe it could have had GC for several reasons (e.g., backwards compatibility with C).
Hope this helps.
tl;dr: Because modern C++ doesn't need garbage collection.
Bjarne Stroustrup's FAQ answer on this matter says:
I don't like garbage. I don't like littering. My ideal is to eliminate the need for a garbage collector by not producing any garbage. That is now possible.
The situation, for code written these days (C++17 and following the official Core Guidelines) is as follows:
Most memory ownership-related code is in libraries (especially those providing containers).
Most use of code involving memory ownership follows the CADRe or RAII pattern, so allocation is made on construction and deallocation on destruction, which happens when exiting the scope in which something was allocated.
You do not explicitly allocate or deallocate memory directly.
Raw pointers do not own memory (if you've followed the guidelines), so you can't leak by passing them around.
If you're wondering how to pass the starting addresses of sequences of values in memory - you can and should prefer spans, obviating the need for raw pointers. You can still use such pointers; they'll just be non-owning.
If you really need an owning "pointer", you use C++'s standard-library smart pointers - they can't leak, and are decently efficient (although the ABI can get in the way of that). Alternatively, you can pass ownership across scope boundaries with "owner pointers". These are uncommon and must be used explicitly; but when adopted, they allow for nice static checking against leaks.
"Oh yeah? But what about...
... if I just write code the way we used to write C++ in the old days?"
Indeed, you could just disregard all of the guidelines and write leaky application code - and it will compile and run (and leak), same as always.
But it's not a "just don't do that" situation, where the developer is expected to be virtuous and exercise a lot of self control; it's just not simpler to write non-conforming code, nor is it faster to write, nor is it better-performing. Gradually it will also become more difficult to write, as you would face an increasing "impedance mismatch" with what conforming code provides and expects.
... if I reinterpret_cast? Or do complex pointer arithmetic? Or other such hacks?"
Indeed, if you put your mind to it, you can write code that messes things up despite playing nice with the guidelines. But:
You would do this rarely (in terms of places in the code, not necessarily in terms of fraction of execution time)
You would only do this intentionally, not accidentally.
Doing so will stand out in a codebase conforming to the guidelines.
It's the kind of code in which you would bypass the GC in another language anyway.
... library development?"
If you're a C++ library developer then you do write unsafe code involving raw pointers, and you are required to code carefully and responsibly - but these are self-contained pieces of code written by experts (and more importantly, reviewed by experts).
So, it's just like Bjarne said: There's really no motivation to collect garbage generally, as you all but make sure not to produce garbage. GC is becoming a non-problem with C++.
That is not to say GC isn't an interesting problem for certain specific applications, when you want to employ custom allocation and de-allocation strategies. For those, you would want custom allocation and de-allocation, not a language-level GC.
Stroustrup made some good comments on this at the 2013 Going Native conference.
Just skip to about 25m50s in this video. (I'd recommend watching the whole video actually, but this skips to the stuff about garbage collection.)
When you have a really great language that makes it easy (and safe, and predictable, and easy-to-read, and easy-to-teach) to deal with objects and values in a direct way, avoiding (explicit) use of the heap, then you don't even want garbage collection.
With modern C++, and the stuff we have in C++11, garbage collection is no longer desirable except in limited circumstances. In fact, even if a good garbage collector is built into one of the major C++ compilers, I think that it won't be used very often. It will be easier, not harder, to avoid the GC.
He shows this example:
void f(int n, int x) {
Gadget *p = new Gadget{n};
if(x<100) throw SomeException{};
if(x<200) return;
delete p;
}
This is unsafe in C++. But it's also unsafe in Java! In C++, if the function throws or returns early, the delete will never be called. But if you had full garbage collection, such as in Java, you merely get a suggestion that the object will be destructed "at some point in the future" (Update: it's even worse than this. Java does not promise to ever call the finalizer - it may never be called). This isn't good enough if Gadget holds an open file handle, or a connection to a database, or data which you have buffered for a later write to a database. We want the Gadget to be destroyed as soon as it's finished with, in order to free these resources as soon as possible. You don't want your database server struggling with thousands of database connections that are no longer needed - it doesn't know that your program is finished working.
So what's the solution? There are a few approaches. The obvious approach, which you'll use for the vast majority of your objects is:
void f(int n, int x) {
Gadget p = {n}; // Just leave it on the stack (where it belongs!)
if(x<100) throw SomeException{};
if(x<200) return;
}
This takes fewer characters to type. It doesn't have new getting in the way. It doesn't require you to type Gadget twice. The object is destroyed at the end of the function. If this is what you want, this is very intuitive. Gadgets behave the same as int or double. Predictable, easy-to-read, easy-to-teach. Everything is a 'value'. Sometimes a big value, but values are easier to teach because you don't have this 'action at a distance' thing that you get with pointers (or references).
Most of the objects you make are for use only in the function that created them, and perhaps passed as inputs to child functions. The programmer shouldn't have to think about 'memory management' when returning objects, or otherwise sharing objects across widely separated parts of the software.
Scope and lifetime are important. Most of the time, it's easier if the lifetime is the same as the scope. It's easier to understand and easier to teach. When you want a different lifetime, it should be obvious from reading the code that you're doing this, by use of shared_ptr for example. (Or by returning (large) objects by value, leveraging move semantics or unique_ptr.)
This might seem like an efficiency problem. What if I want to return a Gadget from foo()? C++11's move semantics make it easier to return big objects. Just write Gadget foo() { ... } and it will just work, and work quickly. You don't need to mess with && yourself, just return things by value and the language will often be able to do the necessary optimizations. (Even before C++03, compilers did a remarkably good job at avoiding unnecessary copying.)
As Stroustrup said elsewhere in the video (paraphrasing): "Only a computer scientist would insist on copying an object, and then destroying the original. (audience laughs). Why not just move the object directly to the new location? This is what humans (not computer scientists) expect."
When you can guarantee only one copy of an object is needed, it's much easier to understand the lifetime of the object. You can pick what lifetime policy you want, and garbage collection is there if you want. But when you understand the benefits of the other approaches, you'll find that garbage collection is at the bottom of your list of preferences.
If that doesn't work for you, you can use unique_ptr, or failing that, shared_ptr. Well written C++11 is shorter, easier-to-read, and easier-to-teach than many other languages when it comes to memory management.
The idea behind C++ was that you would not pay any performance impact for features that you don't use. So adding garbage collection would have meant having some programs run straight on the hardware the way C does and some within some sort of runtime virtual machine.
Nothing prevents you from using some form of smart pointers that are bound to some third-party garbage collection mechanism. I seem to recall Microsoft doing something like that with COM, and it didn't go too well.
To answer most "why" questions about C++, read The Design and Evolution of C++.
One of the fundamental principles behind the original C language is that memory is composed of a sequence of bytes, and code need only care about what those bytes mean at the exact moment that they are being used. Modern C allows compilers to impose additional restrictions, but C includes--and C++ retains--the ability to decompose a pointer into a sequence of bytes, assemble any sequence of bytes containing the same values into a pointer, and then use that pointer to access the earlier object.
While that ability can be useful--or even indispensable--in some kinds of applications, a language that includes that ability will be very limited in its ability to support any kind of useful and reliable garbage collection. If a compiler doesn't know everything that has been done with the bits that made up a pointer, it will have no way of knowing whether information sufficient to reconstruct the pointer might exist somewhere in the universe. Since it would be possible for that information to be stored in ways that the computer wouldn't be able to access even if it knew about them (e.g. the bytes making up the pointer might have been shown on the screen long enough for someone to write them down on a piece of paper), it may be literally impossible for a computer to know whether a pointer could possibly be used in the future.
An interesting quirk of many garbage-collected frameworks is that an object reference not defined by the bit patterns contained therein, but by the relationship between the bits held in the object reference and other information held elsewhere. In C and C++, if the bit pattern stored in a pointer identifies an object, that bit pattern will identify that object until the object is explicitly destroyed. In a typical GC system, an object may be represented by a bit pattern 0x1234ABCD at one moment in time, but the next GC cycle might replace all references to 0x1234ABCD with references to 0x4321BABE, whereupon the object would be represented by the latter pattern. Even if one were to display the bit pattern associated with an object reference and then later read it back from the keyboard, there would be no expectation that the same bit pattern would be usable to identify the same object (or any object).
SHORT ANSWER:
We don't know how to do garbage collection efficiently (with minor time and space overhead) and correctly all the time (in all possible cases).
LONG ANSWER:
Just like C, C++ is a systems language; this means it is used when you are writing system code, e.g., an operating system. In other words, C++ is designed, just like C, with the best possible performance as the main target. The language's standard will not add any feature that might hinder the performance objective.
This poses the question: why does garbage collection hinder performance? The main reason is that, when it comes to implementation, we [computer scientists] do not know how to do garbage collection with minimal overhead in all cases. Hence it's impossible for the C++ compiler and runtime system to perform garbage collection efficiently all the time. On the other hand, a C++ programmer should know his design/implementation, and he's the best person to decide how best to do the garbage collection.
Last, if control (hardware, details, etc.) and performance (time, space, power, etc.) are not the main constraints, then C++ is not the right tool. Other languages might serve better and offer more [hidden] runtime management, with the necessary overhead.
All the technical talking is overcomplicating the concept.
If you put GC into C++ for all the memory automatically, then consider something like a web browser. The web browser must load a full web document AND run web scripts, and you can store web-script variables in the document tree. In a BIG document in a browser with lots of tabs open, it means that every time the GC must do a full collection it must also scan all the document elements.
On most computers this means that PAGE FAULTS will occur. So the main reason, to answer the question, is that PAGE FAULTS will occur. You will know this when your PC starts making lots of disk accesses. This is because the GC must touch lots of memory in order to prove that pointers are invalid. When you have a bona fide application using lots of memory, having to scan all objects on every collection is havoc because of the PAGE FAULTS. A page fault is when virtual memory needs to be read back into RAM from disk.
So the correct solution is to divide an application into the parts that need GC and the parts that do not. In the case of the web browser example above, if the document tree was allocated with malloc but the JavaScript ran with GC, then every time the GC kicks in it only scans a small portion of memory, and the PAGED OUT parts of the document tree's memory do not need to be paged back in.
To further understand this problem, look up virtual memory and how it is implemented in computers. It is all about the fact that 2 GB is available to the program when there is not really that much RAM. On modern computers with 2 GB of RAM on a 32-bit system, it is not such a problem provided only one program is running.
As an additional example, consider a full collection that must trace all objects. First, you must scan all objects reachable via the roots. Second, scan all the objects visible from step 1. Then scan waiting destructors. Then go over all the pages again and reclaim all invisible objects. This means that many pages might get swapped out and back in multiple times.
So my answer, to bring it short, is that the number of PAGE FAULTS which occur as a result of touching all the memory makes full GC for all objects in a program unfeasible, and so the programmer must view GC as an aid for things like scripts and database work, but do normal things with manual memory management.
And the other very important reason, of course, is global variables. In order for the collector to know that a global variable points into the GC heap, it would require specific keywords, and thus existing C++ code would not work.
When we compare C++ with Java, we see that C++ was not designed with implicit Garbage Collection in mind, while Java was.
Having things like arbitrary C-style pointers is not only bad for GC implementations; supporting GC would also destroy backward compatibility for a large amount of legacy C++ code.
In addition to that, C++ is a language that is intended to produce standalone executables instead of requiring a complex run-time environment.
All in all:
Yes, it might be possible to add garbage collection to C++, but for the sake of continuity it is better not to do so.
Mainly for two reasons:
Because it doesn't need one (IMHO)
Because it's pretty much incompatible with RAII, which is the cornerstone of C++
C++ already offers manual memory management, stack allocation, RAII, containers, automatic pointers, smart pointers... That should be enough. Garbage collectors are for lazy programmers who don't want to spend 5 minutes thinking about who should own which objects or when resources should be freed. That's not how we do things in C++.
Imposing garbage collection is really a shift from a low-level to a high-level paradigm.
If you look at the way strings are handled in a language with garbage collection, you will find they ONLY allow high-level string-manipulation functions and do not allow binary access to the strings. Simply put, all string functions first check the pointers to see where the string is, even if you are only pulling out a single byte. So if you write a loop that processes each byte of a string in a language with garbage collection, it must compute the base location plus offset on each iteration, because it cannot know when the string has moved. Then you have to think about heaps, stacks, threads, etc.