I know this is a really dumb question, but I feel I have been going about learning C++ wrong by ignoring memory too much. I always hear about memory management in C++ and C, but what is its importance to something like a video game, or an office program?
C and C++ are languages that most people consider low level (or as having low-level parts), which allows you to write hardware-specific code. And since hardware usually has a lot of preconditions about its input, you'll have to manage memory manually, in terms of memory layout, allocation, padding, and the like.
This is to be expected when interacting with hardware, and is actually required in order to deal with hardware. However, when implementing non-hardware-specific code, the same utilities and language features still apply. That is, if you want a piece of dynamic memory, you'll have to explicitly request it, and explicitly release it (the hard part). The way around this in C++ is to use classes, which help you handle memory management, either by abstracting the memory management away altogether or by providing garbage collection (usually via reference counting).
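For example, here is a minimal sketch of the reference-counting flavor using the standard std::shared_ptr (the Texture type is just a stand-in):

#include <iostream>
#include <memory>

struct Texture {
    ~Texture() { std::cout << "Texture released\n"; }
};

int main() {
    // shared_ptr reference-counts its pointee: the Texture is destroyed
    // automatically when the last owner goes away.
    std::shared_ptr<Texture> a = std::make_shared<Texture>();
    {
        std::shared_ptr<Texture> b = a;        // reference count is now 2
        std::cout << a.use_count() << "\n";    // prints 2
    }                                          // b destroyed, count back to 1
    std::cout << a.use_count() << "\n";        // prints 1
}                                              // a destroyed, "Texture released"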
The consequence of not cleaning up your garbage, i.e. returning resources to the system, also known as leaking, is that the system will EVENTUALLY run out of resources (as resources are generally limited, although sometimes immense). If your program is small and has a limited execution time, this may not be an issue, but you should handle your resources nevertheless, as the hosting environment is actually NOT required to do it for you after program termination (although most systems will, at least for memory).
Also, please notice that you should focus on managing resources rather than just memory. There are a lot of resources, all of which are limited and hence all need to be managed. Other resources include files, IP sockets, handles, hardware devices, ...
For games specifically, you'll have to expect high resource usage in terms of memory and file access, and your game is likely to run for quite a while (assuming it's good), so resource management becomes critical!
My best piece of advice is to keep away from raw pointers and manual memory management (new/delete), and instead use the standard containers (std::vector and the like), value semantics (i.e. pass arguments around by value instead of by pointer), reference semantics, and, if you really have to use pointers, std::unique_ptr and std::shared_ptr. (This is assuming you're writing non-hardware code, like a game or a text processor.)
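A small sketch of that style (Enemy and spawnWave are hypothetical names); note that there is no new or delete anywhere:

#include <memory>
#include <string>
#include <vector>

struct Enemy { std::string name; int hp; };

// Value semantics: the vector owns its elements and releases them itself.
std::vector<Enemy> spawnWave(int count) {
    std::vector<Enemy> wave(count, Enemy{"grunt", 100});
    return wave;  // cheaply moved out; freed when the caller's copy dies
}

int main() {
    std::vector<Enemy> wave = spawnWave(32);

    // When a pointer really is needed, unique_ptr deletes the object
    // automatically when it goes out of scope.
    std::unique_ptr<Enemy> boss = std::make_unique<Enemy>(Enemy{"boss", 5000});
}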
Sean Parent did a talk about avoiding pointers at the GoingNative conference in 2013, and it's really worth watching. I can't remember the name of the talk, but it's available on the Channel 9 webpage for free. The other GoingNative talks are also worth watching!
You probably already know the answer, but considering that your computer has only a finite amount of memory, it's only natural that applications must manage this memory conservatively.
If, for example, a game manages memory poorly and uses up a large chunk of your 8 GB of installed RAM, other applications that demand a modest amount of memory will essentially start fighting over what's left. This will usually cause your OS to start swapping memory to other storage media and ultimately degrade the performance of your computer until more memory is available.
In some video games, I find that every time a new character is created, a factory method is used to create it, like this:
class CharacterEngine
{
public:
    // Returns an owning raw pointer; the caller must eventually delete it.
    static Character* CreateCharacter(const std::string& Name, Weapons InitialWeapons)
    {
        return new Character(Name, InitialWeapons);
    }
};
//...
Now, if I have 100,000,000 characters (very many, e.g. like simulated particles), heap allocation like this may fail on computers with little RAM. What is your solution to this problem?
Edit
What other methods or designs do you know that could change or replace the factory method/class?
Do you actually have 100K characters? And are you actually in an environment in which you are memory-constrained and allocation fails? Even if Character is a whopping 1 KB in size, you'd be looking at 100 MB consumed, which isn't that much, even for feature phones.
But perhaps you're worried that you might actually have memory to spare, yet fragmentation is so high you can't use it. That's a fairer concern, and one usually relevant to games. Perhaps take a look at the object pool pattern (there's a sketch below). Also, given the large number of characters you're speaking of, the flyweight pattern might also help!
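To illustrate, a minimal object-pool sketch (a fixed-capacity pool with an intrusive free list; the design details here are just one possible way to do it):

#include <cstddef>
#include <new>
#include <utility>
#include <vector>

// One up-front allocation, O(1) acquire/release, and no per-object heap
// traffic afterwards, which avoids fragmentation from allocation churn.
template <typename T, std::size_t N>
class Pool {
    union Slot { T obj; Slot* next; Slot() {} ~Slot() {} };
    std::vector<Slot> slots_;
    Slot* free_ = nullptr;
public:
    Pool() : slots_(N) {
        for (std::size_t i = 0; i < N; ++i) {  // thread all slots onto a free list
            slots_[i].next = free_;
            free_ = &slots_[i];
        }
    }
    template <typename... Args>
    T* acquire(Args&&... args) {
        if (!free_) return nullptr;            // pool exhausted
        Slot* s = free_;
        free_ = s->next;
        return new (&s->obj) T(std::forward<Args>(args)...);  // construct in place
    }
    void release(T* p) {
        p->~T();                               // destroy in place
        Slot* s = reinterpret_cast<Slot*>(p);  // obj sits at offset 0 of the union
        s->next = free_;
        free_ = s;
    }
    // Note: objects still live when the pool dies are not destroyed in this sketch.
};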
Finally, running out of memory isn't like other program errors, such as losing a TCP connection or facing a disk error. If you need to allocate the 100,001st character and there's no more memory for it, you can't just skip allocating it, show an error to the user, or try again later; you can't go on without it, as it were. So don't: just bail out of the program, and perhaps do whatever cleanup is required to not lose too much game state, etc. Have a read of "malloc never fails" as well.
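A sketch of that "just bail" policy using the standard new-handler mechanism (the emergency-save hook is hypothetical):

#include <cstdlib>
#include <iostream>
#include <new>

// Called by operator new when an allocation fails.
void onOutOfMemory() {
    std::cerr << "Out of memory; saving what we can and exiting.\n";
    // saveEmergencySnapshot();  // hypothetical hook to preserve game state
    std::abort();
}

int main() {
    std::set_new_handler(onOutOfMemory);
    // ... run the game; any failing new now funnels into onOutOfMemory ...
}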
The heap is obviously limited, but in practice the limit is not that small (at least gigabytes on current PCs).
And memory consumption is not the biggest problem in a game. If you have many characters, you might need to deal with interactions between them, and that could be more difficult (e.g. determining the set of characters close to a given one could be more challenging).
You should read more about memory management, virtual address space, smart pointers, reference counting, RAII, circular references, weak references, hash consing.
Notice that the heap is global to your program & process (it is not the property of some particular class or code chunk, but of your entire program).
The heap allocation routines (behind new and delete) are generally implemented on top of operating-system primitives (often system calls) that grow the virtual address space. On Linux, see mmap(2). The operating system may provide some means to query your virtual address space (on Linux, see proc(5), and for a process of pid 1234, the /proc/1234/maps pseudo-file).
I recommend reading a good book on garbage collection, such as the GC Handbook. It teaches concepts and terminology relevant to C++ programming (notably in games). In some sense, you may want to implement your own GC for your game.
C++ also has an allocator concept, and the standard containers are parameterized by it.
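A minimal sketch of a standard-conforming allocator (this one just forwards to malloc, but the same two hooks could route into an arena or pool):

#include <cstdlib>
#include <new>
#include <vector>

// Minimal C++11-style allocator: a container routes every allocation
// through allocate()/deallocate().
template <typename T>
struct MyAlloc {
    using value_type = T;
    MyAlloc() = default;
    template <typename U> MyAlloc(const MyAlloc<U>&) {}

    T* allocate(std::size_t n) {
        void* p = std::malloc(n * sizeof(T));
        if (!p) throw std::bad_alloc{};
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};
template <typename T, typename U>
bool operator==(const MyAlloc<T>&, const MyAlloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MyAlloc<T>&, const MyAlloc<U>&) { return false; }

int main() {
    std::vector<int, MyAlloc<int>> v;  // all of v's storage comes from MyAlloc
    v.push_back(42);
}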
Read also some Introduction to Algorithms.
heap allocation like this may fail on computers with little RAM.
Then either improve your program to use less memory, or get a bigger computer. Perhaps consider some distributed computing approach (e.g. cloud computing), as in MMORPGs.
What other methods or designs do you know that could change or replace the factory method/class?
They won't change the consumed memory much, because in your design every character is represented by its own C++ object. So that doesn't matter much.
Assuming you have enough local disk space to store the information for all characters, you can mmap one or more files that hold all the character data, and create a character object from the data in the file(s) only when needed.
If you have neither enough memory nor enough disk space to store the data of all characters locally, then it becomes a much harder problem -- you might need to assign every character a URI and load it over the network...
EDIT: Of course, after updating a character's data, you'd need to write it back to the corresponding file. And for performance's sake, you might want to implement some caching mechanism so frequently used characters don't need to be read and written back every time they are used.
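A portable sketch of that load-on-demand / write-back idea using a file of fixed-size records (an mmap-based version would avoid the explicit reads and writes; all names here are hypothetical):

#include <cstdint>
#include <fstream>

// Fixed-size record, so character i lives at offset i * sizeof(CharacterRecord).
struct CharacterRecord {
    char    name[32];
    int32_t hp;
    int32_t weaponId;
};

class CharacterStore {
    std::fstream file_;
public:
    // The file must already exist (e.g. created offline) in this sketch.
    explicit CharacterStore(const char* path)
        : file_(path, std::ios::in | std::ios::out | std::ios::binary) {}

    CharacterRecord load(std::uint64_t i) {        // read one record on demand
        CharacterRecord r{};
        file_.seekg(i * sizeof(CharacterRecord));
        file_.read(reinterpret_cast<char*>(&r), sizeof r);
        return r;
    }
    void store(std::uint64_t i, const CharacterRecord& r) {  // write it back
        file_.seekp(i * sizeof(CharacterRecord));
        file_.write(reinterpret_cast<const char*>(&r), sizeof r);
    }
};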
I want to create an application that stores data in memory, but I don't want the data to be lost even if my app crashes.
What concept should I use?
Should I use shared memory, or is there some other concept that suits my requirement better?
You are asking for persistence (or even orthogonal persistence) and/or for application checkpointing.
This is not possible (at least through portable C++ code) in the general case for arbitrary existing C++ code, e.g. because of ASLR, because of pointers on (or to) the local call stack, because of multi-threading, because of external resources (sockets, opened files, ...), and because the current continuation cannot be accessed, restored, and handled in standard C++.
However, you might design your application with persistence in mind. This is a strong architectural requirement. You could, for instance, have every class contain some dumping method and a matching loading factory function. Beware of shared pointers, and take into account that you could have cyclic references. Study garbage collection algorithms (e.g. in the GC Handbook), which are similar to those needed for persistence (a copying GC is quite similar to a checkpointing algorithm).
Look also into serialization libraries (like libs11n). You might also consider persisting to a textual format (e.g. JSON), perhaps inside some SQLite database (or a real database like PostgreSQL or MongoDB...). I am doing this (in C) in my monimelt software.
You might also consider checkpointing libraries like BLCR.
The important thing is to think about persistence & checkpointing very early at design time. Thinking of your application as some specialized bytecode interpreter or VM might help (notably if you want to persist continuations, or some form of "call stack").
You could fork your process (assuming you are on Linux or POSIX) and have the child do the persisting while the parent continues; hence the time taken to persist does not matter that much (e.g. if you persist every hour or every ten minutes).
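A POSIX-only sketch of that fork trick: the child gets a copy-on-write snapshot of the heap and persists it while the parent keeps running (the persist function is hypothetical):

#include <cstdlib>
#include <unistd.h>

// Hypothetical: serializes the application state to disk.
void persistStateToDisk() { /* ... write the state out ... */ }

void checkpoint() {
    pid_t pid = fork();        // child sees a frozen snapshot of our memory
    if (pid == 0) {
        persistStateToDisk();  // runs concurrently with the parent
        _exit(EXIT_SUCCESS);   // skip atexit handlers in the child
    }
    // Parent continues immediately; reap the child later with waitpid().
}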
Some language implementations are able to persist their entire state (notably their heap), e.g. SBCL (a good Common Lisp implementation) with its save-lisp-and-die, or Poly/ML -an ML dialect- with its SaveState, or Squeak (a Smalltalk implementation).
See also this answer & that one. J. Pitrat's blog has a related entry: CAIA as a sleeping beauty.
Persistence of data together with code (e.g. vtables of objects, function pointers) might be technically difficult. dladdr(3) -with dlsym- might help (and, if you are able to code machine-specific things, consider the old getcontext(3), but I don't recommend that). Avoid name mangling (for dlsym) by declaring all code related to persistence extern "C". If you want to persist some data and be able to restart from it with a slightly modified program (e.g. a small bugfix), things are much more complex.
More pragmatically, you could have a class representing your entire persistable state and implement methods to persist it (and reload it). You would then persist only at certain steps of your algorithm (e.g. if you have a main loop or an event loop, at the start of that loop). You probably don't want to persist too often (e.g. because of the time and disk space required), so perhaps every ten minutes. You might consider some transaction log if it fits the overall picture of your application.
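A sketch of that shape, with a hypothetical AppState saved at a safe point of the main loop:

#include <chrono>
#include <fstream>
#include <string>

// One top-level object that knows how to persist everything it owns.
struct AppState {
    std::string document;  // ... all persistable data lives in here ...

    void save(const char* path) const {
        std::ofstream out(path, std::ios::binary | std::ios::trunc);
        out << document;   // a real version would serialize every field
    }
};

int main() {
    AppState state;
    auto lastSave = std::chrono::steady_clock::now();

    for (int frame = 0; frame < 1000000; ++frame) {   // the main loop
        // ... handle events, mutate state ...
        auto now = std::chrono::steady_clock::now();
        if (now - lastSave > std::chrono::minutes(10)) {
            state.save("checkpoint.bin");  // safe point: nothing is in flight
            lastSave = now;
        }
    }
}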
Use memory-mapped files (mmap, https://en.wikipedia.org/wiki/Mmap) and allocate all your structures inside the mapped memory region. The system will properly save the mapped file to disk when your app crashes.
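A Linux/POSIX sketch of this: with a file-backed MAP_SHARED mapping, dirty pages belong to the file, so the kernel eventually writes them back even if the process dies. Store only plain data (no pointers) in the region, since addresses change between runs:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct SavedState {
    long frameCount;
    // ... plain-old-data fields only ...
};

SavedState* mapState(const char* path) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(SavedState)) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, sizeof(SavedState),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                 // the mapping keeps the file referenced
    return p == MAP_FAILED ? nullptr : static_cast<SavedState*>(p);
}
// Writes through the returned pointer land in the page cache; call
// msync(p, sizeof(SavedState), MS_SYNC) to force them to disk immediately.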
Background: I'm developing a multiplatform framework of sorts that will be used as a base for both game and util/tool creation. The basic idea is to have a pool of workers, each executing in its own thread. (Furthermore, workers will also be able to spawn at runtime.) Each thread will have its own memory manager.
I have long thought about creating my own memory management system, and I think this project will be the perfect opportunity to finally give it a try. I find such a system fitting because the typical uses of this framework will often require real-time memory allocation (games and texture-editing tools).
Problems:
No generally applicable solution(?) - The framework will be used both for games/visualization (not AAA, but indie/play) and for tool/application creation. My understanding is that in game development it is usual (at least for console games) to allocate one big chunk of memory once at initialization, and then have the memory manager hand that memory out internally. But is this technique applicable in a more general application?
In a game you could theoretically know how much memory your scenes and resources will need, but a photo-editing application, for example, will load resources of all different sizes... So in the latter case a more dynamic memory "chunk size" would be needed? Which leads me to the next problem:
Moving already allocated data and keeping valid pointers - Normally, when allocating on the heap, you acquire a simple pointer to the memory chunk. In a custom memory manager, as far as I understand it, the similar approach is to return a pointer to somewhere free inside the pre-allocated chunk. But what happens if the pre-allocated chunk is too small and needs to be resized or even defragmented? The data would need to be moved around in memory, and the old pointers would become invalid. Is there a way to transparently wrap these pointers somehow, but still use them outside the memory manager as if they were ordinary C++ pointers?
Third-party libraries - If there is no way to route all memory allocation in the application transparently through a custom memory management system, every third-party library I link with will still use the "old" OS memory allocations internally. I have learned that it is common for libraries to expose functions for setting custom allocation functions that the library will use, but it is not guaranteed that every library I use will have this ability.
Questions: Is it possible and feasible to implement a memory manager that can use a dynamically sized pool of memory chunks? If so, how would defragmentation and resizing work without breaking in-use pointers? And finally, how is such a system best implemented to work with third-party libraries?
I'm also thankful for any related reading material, papers, articles and whatnot! :-)
As someone who has written many memory managers and heap implementations for AAA games over the last few generations of consoles, let me tell you it's simply not worth it anymore.
Your information is old. Back in the GameCube era (circa 2003) we used to do what you said: allocate a large chunk and carve out that chunk manually using custom algorithms tweaked for each game.
Once virtual memory came along (the Xbox era), games got more complicated (and so made more allocations and became multithreaded), and address fragmentation made this untenable. So we switched to custom allocators that handle only certain types of requests: for instance, physical memory, or lock-free small-block low-fragmentation heaps, or thread-local caches of recently used blocks.
As built-in memory managers get better, it gets harder to beat them: certainly in the general case, and it's a close thing even for specific use cases. The Doug Lea allocator (or whatever the mainstream C++ Linux compilers come with now) and the latest Windows low-fragmentation heaps are really very good, and you'd do far better investing your time elsewhere.
I've got spreadsheets at work measuring all kinds of metrics for a whole load of allocators: all the big-name ones and a fair few I've collected over the years. And basically, while the specialist allocators can win on a few metrics (lowest per-allocation overhead, spatial proximity, lowest fragmentation, etc.), on overall metrics the mainstream ones are simply the best.
As a user of your library, my personal preferred option is that you just allocate memory when you need it. Use operator new / the new operator, and I can use the standard C++ mechanisms to replace those and route them to my custom heap (if I indeed have one); alternatively, I can use platform-specific ways of replacing your allocations (e.g. XMemAlloc on Xbox). I don't need tagging (capturing call stacks is far superior, and I can do that if I want). Lower down the list comes you giving me an interface that you'll call when you need to allocate memory: this is just a pain for you to implement, and I'll probably just pass it on to operator new anyway. The worst thing you can do is "know best" and create your own custom heaps. If memory allocation performance is a problem, I'd much rather you share the solution the whole game uses than roll your own.
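For reference, the standard replacement mechanism looks roughly like this (a sketch that forwards to malloc; a real one would forward to the custom heap):

#include <cstdlib>
#include <new>

// Replacing the global allocation functions: every plain new/delete in the
// program, including inside statically linked libraries, now lands here.
void* operator new(std::size_t size) {
    void* p = std::malloc(size);   // forward to the custom heap of your choice
    if (!p) throw std::bad_alloc{};
    return p;
}

void operator delete(void* p) noexcept {
    std::free(p);
}

// A complete replacement also provides the array ([]), nothrow, and
// (since C++14) sized-delete variants.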
If you're looking to write your own malloc()/free(), etc., you should probably start by checking out the source code of existing systems such as dlmalloc. For what it's worth, though, this is a hard problem. Writing your own malloc library is Hard. Beating existing general-purpose malloc libraries will be Even Harder.
And now, here is the correct answer: DON'T IMPLEMENT YET ANOTHER MEMORY MANAGER.
It is incredibly hard to implement a memory manager that does not fail under various usage patterns and events. You may be able to build a specific manager that works well under YOUR usage patterns, but writing one that works well for MANY users is a full-time job that almost no one has done really well. Worse, it is fantastically easy to implement a memory manager that works great 99% of the time and then, 1% of the time, crashes or suddenly consumes most or all available memory on your system due to unexpected heap fragmentation.
I say this as someone who has written multiple memory managers, watched multiple people write their own memory managers, and watched even more people attempt to write memory managers and fail. This problem is deceptively difficult, not because it's hard to write templated allocators and generic types with inheritance and such, but because the other solutions given in this thread tend to fail under corner cases of load behavior. Once you start supporting byte alignments (as all real-world allocators must), heap fragmentation rears its ugly head. Cute heuristics that work great for small test programs fail miserably when subjected to large, real-world programs.
And once you get it working, someone else will need: cookies to verify against memory stomps; heap usage reporting; memory pools; pools of pools; memory leak tracking and reporting; heap auditing; chunk splitting and coalescing; thread-local storage; lookasides; CPU and process-level page faulting and protection; setting and checking and clearing "free-memory" patterns aka 0xdeadbeef; and whatever else I can't think of off the top of my head.
Writing yet another memory manager falls squarely under the heading of Premature Optimization. Since there are multiple free, good memory managers with thousands of hours of development and testing behind them, you have to justify the cost of your own time by showing that the result would provide some measurable improvement over what other people have already done, and that you can use for free.
If you are SURE you want to implement your own memory manager (and hopefully you are NOT sure after reading this message), read through the dlmalloc sources in detail, then read through the tcmalloc sources in detail as well, THEN make sure you understand the performance trade-offs in implementing a thread-safe versus a thread-unsafe memory manager, and why the naive implementations tend to give poor performance results.
Prepare more than one solution and let the user of the framework adopt whichever one fits. Policy classes plugged into the generic allocator you develop would do this nicely (see the sketch below).
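A small sketch of what such policy classes might look like (all names made up; the counting policy assumes C++17 for the inline variable):

#include <cstdlib>

// Interchangeable allocation policies the framework user picks from.
struct MallocPolicy {
    static void* allocate(std::size_t n)      { return std::malloc(n); }
    static void  deallocate(void* p) noexcept { std::free(p); }
};

struct CountingPolicy {                         // e.g. for allocation statistics
    static inline std::size_t totalAllocated = 0;  // bytes requested so far
    static void* allocate(std::size_t n)      { totalAllocated += n; return std::malloc(n); }
    static void  deallocate(void* p) noexcept { std::free(p); }
};

// The framework's allocator is generic over the chosen policy.
template <typename Policy = MallocPolicy>
struct FrameworkAllocator {
    static void* allocate(std::size_t n)      { return Policy::allocate(n); }
    static void  deallocate(void* p) noexcept { Policy::deallocate(p); }
};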
A nice way to get around this is to wrap pointers in a class with an overloaded * operator. Make the internal data of that class just an index into the memory pool. Now you can change the index quickly after a background thread copies the data over.
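A sketch of such a handle (hypothetical names): it stores an index rather than an address, and dereferencing looks the address up at that moment, so the pool is free to move its storage:

#include <cstddef>
#include <vector>

template <typename T>
class HandlePool {
    std::vector<T> objects_;   // may reallocate or compact; handles survive it
public:
    class Handle {
        HandlePool* pool_;
        std::size_t index_;
    public:
        Handle(HandlePool* pool, std::size_t index) : pool_(pool), index_(index) {}
        T& operator*()  const { return pool_->objects_[index_]; }  // late lookup
        T* operator->() const { return &pool_->objects_[index_]; }
    };

    Handle create(const T& value) {
        objects_.push_back(value);  // may move every object to new storage...
        return Handle(this, objects_.size() - 1);  // ...handles stay valid
    }
};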
Most good C++ libraries support allocators, and you should implement one. You can also overload global new so your version gets used. And keep in mind that you generally won't need to think about a library allocating or deallocating a large amount of data; that is generally the responsibility of client code.
Currently I'm working on a solution for the memory limits per process, and I've come to shared memory. I'm using Windows 7 with Visual Studio as the developer platform; the software will run on a modern Windows server system with multiple CPUs and a huge amount of memory.
Well, I informed myself about memory limits per process, and I need to access much more memory. So my idea was to create multiple processes and use shared memory.
But is it really good to create a lot of shared memory? And what about performance?
Well, I informed myself about memory limits per process, and I need to access much more memory. So my idea was to create multiple processes and use shared memory.
The limits on memory per process are limits on virtual memory. This basically means that your address space has a maximum size (e.g. 4 gigabytes on a system with 32-bit pointers). Since shared memory is a mapping of memory into your address space, there's no way it would get you out of the problem you have.
Keep in mind that if you distribute the memory blocks across multiple processes, you'll eventually reach the limits of physical memory, and then system performance will slow to a crawl.
If you really need more memory than your system can grant you, you need to start to persist your data to disk. Memory mapped files can allow you to quickly swap memory blocks in and out of your address space.
@Aurus,
It sounds as though what you need to hit your targets is a custom-engineered solution tailored to specific (albeit under-described) requirements. While Stack Overflow is extremely useful for developers and software engineers seeking professional clarity and programmatic examples, whatever applicable high-level engineering advice is available here may not be easy to locate, and it will likely not provide the specific answers you seek. One could make too many assumptions from your post.
Whatever benefit(s) you may (or may not) attain from staggering quantities of RAM and/or multiple threads on multiple processors would best be judged by those with hard experience building such systems. I have years in the field myself and can confidently say that I lack that specific experience. Honestly, I hope to avoid that eventuality, because high-dollar hardware commonly attends high-pressure schedules, and those can lead to other issues as well. I'll speculate a tiny bit though -- if only because it costs you nothing...
If your intent is firmly fixed upon utilizing Windows platforms, my first-order guess is:
a clustered server environment (many multi-core processors for crunching high numbers of threads, backed by a massive quantity of available RAM)
cutting-edge drive hardware -- if you're seeking to minimize the impact of frequent virtual memory access, you'll likely need to target specific cutting-edge hardware options that enable you to literally replace spindle drives with more elegant DRAM-based storage, that is to say, solid-state drives -- not the trivial type you commonly find in modern iPods and mobile PDAs... I refer to the real deal -- classic solid-state drives [one fine example is here -- look under hardware]. Such products are two to three orders of magnitude faster than spindle drives, and far faster than consumer solid state as well (albeit not cheap).
Your goals appear to indicate that cost isn't a great concern, but that's about as good as I can offer while lacking more specific information.
One final bit of advice: when seeking help from engineers, it's best to tell them exactly what you seek to accomplish (the goals). Let them provide the options and match the limitations of reality and modern technology to your dilemma, as well as your financial targets. More often than not, even with esoteric and eccentric requirements, the best solution is actually a custom "outside-the-box" engineering solution that also ends up being far cheaper to build and implement than a brute-force approach. To put it another way, help the engineers help you, while noting that the GIGO principle applies as well.
I sincerely hope something I provided is useful. Good luck.
What is the design factor in managing memory in C++?
For example: why is there a memory leak when a program does not release a memory object before it exits? Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation? I know I am being a bit naive, but what is the design philosophy of memory management in C++ with respect to classes, structs, methods, interfaces, and abstract classes?
Certainly one cannot humanly remember every spec of C++. What is the core driving design of memory management?
What is the core driving design of memory management?
In almost all cases, you should use automatic resource management. Basically:
Wherever it is practical to do so, prefer creating objects with automatic storage duration (that is, on the stack, or function-local)
Whenever you must use dynamic allocation, use Scope-Bound Resource Management (SBRM; more commonly called Resource Acquisition is Initialization or RAII).
Rarely do you have to write your own RAII container: the C++ standard library provides a whole set of containers (e.g., vector and map), and smart pointers like shared_ptr (from C++ TR1, C++0x, and Boost) work very well for most common situations.
Basically, in really good C++ code, you should never call delete yourself[1] to clean up memory that you've allocated: memory management and resource cleanup should always be encapsulated in a container of some kind.
[1] Obviously, the exception here is when you implement an RAII container yourself, since that container must be responsible for cleaning up whatever it owns.
It's not entirely clear whether you're asking about the philosophy of what's built into C++, or how to use it in a way that prevents memory leaks.
The primary way to prevent memory leaks (and other resource leaks) is known as either RAII (Resource Acquisition Is Initialization) or SBRM (Scope-Bound Resource Management). Either way, the basic idea is pretty simple: since objects with automatic storage duration are automatically destroyed on exit from their scope, you allocate memory in the ctor of such an object and free the memory in its dtor.
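A minimal illustration of the idea (a sketch only; in practice std::vector or std::unique_ptr already does this for you):

#include <cstddef>

// Memory is acquired in the constructor and released in the destructor, so
// every exit path from the scope (return, exception) frees it exactly once.
class Buffer {
    unsigned char* data_;
public:
    explicit Buffer(std::size_t n) : data_(new unsigned char[n]) {}
    ~Buffer() { delete[] data_; }

    Buffer(const Buffer&) = delete;             // prevent accidental double-free
    Buffer& operator=(const Buffer&) = delete;

    unsigned char* data() const { return data_; }
};

void useBuffer() {
    Buffer buf(4096);        // allocated here
    // ... use buf.data() ...
}                            // freed here automatically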
As far as C++ itself goes, it doesn't really have a philosophy; it provides mechanisms but leaves it up to the programmer to decide which mechanism is appropriate for the situation at hand. That's often RAII. Sometimes it might be a garbage collector. Other times it might be various sorts of custom memory managers. Of course, sometimes it's a combination of two or all three of those, or something else entirely.
Edit: As to why C++ does things this way, it's fairly simple: almost any other choice would render the language unsuited to at least some kinds of problems, including a number for which C++ was quite clearly intended to be suitable. One of the most obvious of these was the ability to run on a "bare" machine with a minimum of support structure (e.g., no OS).
C and C++ take the position that you, the programmer, know when you are done with memory you have allocated. This avoids the need for the language runtime to know much of anything about what has been allocated, and the associated tasks (reference counting, garbage collection, etc.) needed to "clean up" when necessary.
At the crux is the idea: if you allocate it, you must free it (malloc/free, new/delete).
There are several methods of helping to manage this so that you don't have to remember explicitly. RAII and the smart pointer implementations that provide containers for it are extremely useful and powerful for managing memory based on object creation and destruction. They will save you hours of time.
why is there a memory leak when a program does not release a memory object before it exits?
Well, the OS typically cleans up your mess for you. However, what happens when your program runs for an arbitrary amount of time and has leaked so much memory that you can't allocate any more? You crash, and that's not good.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
No. Some programming languages have automated memory management; some do not. There are benefits and drawbacks to both models. Languages with manual memory management let you say when and where resources are allocated and released, i.e., it is very deterministic. A relative beginner will, however, inevitably write code that leaks while getting used to dealing with memory management.
Automated schemes are great for the programmer, but you don't get the same level of determinism. If I am writing a hardware driver, this may not be a good model for me. If I were writing a simple GUI, then I probably don't care about some objects persisting a bit longer than they need to, so I will take an automated management scheme every time. That's not to say that GC'd languages are only for "simple" tasks; some tasks just require tighter control over your resources. (Not all platforms have 4 GB+ of memory for you to play around in.)
There are patterns you can use to help with memory management. The canonical example would be RAII (Resource Acquisition Is Initialization).
Philosophy-wise, I think there are two things that lead to C++ not having a garbage collector (which seems to be what you're getting at):
Compatibility with C. C++ tries to be very compatible with C, for better or worse. C didn't have garbage collection, so C++ doesn't, at least not by default. I guess you could sum this up as "historical reasons".
The "you only pay for what you use" philosophy. C++ tries to avoid imposing any overhead above C unless you explicitly ask for it. So you only pay the price of exceptions if you actually throw one, etc. There's an argument that garbage collection would impose a cost whenever an object is allocated on the heap so it couldn't be the default behavior in C++.
Note that there is actually quite a bit of debate about whether garbage collection is actually more or less efficient than manual memory management. The better garbage collectors generally want to be able to move stuff around though, and C++ has pointer arithmetic (again, inherited from C) that makes it very hard to make such a collector work with C++.
Here's Stroustrup's (not really direct) answer to "Why doesn't C++ have garbage collection?":
If you want automatic garbage collection, there are good commercial and public-domain garbage collectors for C++. For applications where garbage collection is suitable, C++ is an excellent garbage collected language with a performance that compares favorably with other garbage collected languages. See The C++ Programming Language (3rd Edition) for a discussion of automatic garbage collection in C++. See also, Hans-J. Boehm's site for C and C++ garbage collection.
Also, C++ supports programming techniques that allows memory management to be safe and implicit without a garbage collector.
C++0x offers a GC ABI.
Isn't a good programming language design supposed to maintain a "foo-table" that takes care of this situation?
Is it? Why? A good programming language is one that lets you solve problems, no more, no less.
A garbage collector certainly lowers the barrier to entry, but it also takes control away from the programmer, which might be a problem in some cases. True, on a modern 2.5 GHz quad-core computer, and with today's advanced and efficient garbage collectors, we can live with that. But C++ had to work with much more limited hardware, ranging from desktop computers with a whopping 16 MB of RAM down to embedded platforms with 16 KB, and everything in between. It has to be usable in real-time code, where you cannot just pause the program for 0.5 seconds to run a garbage collection.
C++ isn't just designed to be the language used on desktop computers. It's meant to be usable everywhere: on memory-limited systems, in hard real-time scenarios, on large supercomputers, and everywhere else.
C++'s guiding principle is "you don't pay for what you don't use". If you don't want a garbage collector, you shouldn't have to pay the (steep) price of one.
There are very powerful techniques to manage memory in C++ and avoid memory leaks, even without a garbage collector. If a garbage collector was the only way to avoid memory leaks, then there'd be a strong argument in favor of adding one to the language. But it isn't. You just have to learn how to properly manage memory yourself in C++.
What is the core driving design of memory management?
The driving design (no pun intended) is a bit like that of stick-shift transmission cars, as opposed to automatic transmission cars. Like a stick-shift car, C++ gives you freedom and control over the machine, but it's not as easy to use as automatic ones that take care of many things for you.
The following could easily have been written about C++ versus Java:
People who drive stick shift cars know the difference and the advantages of
having total control of your car engine; people who drive cars with automatic
transmissions do not. (...) Race cars, for example, do not use automatic
transmissions. (...) People who are used to shifting gears will focus more
on their driving making it more efficient and safe.
http://www.eslbee.com/contrast_stick_shift_or_automatic.htm
I should add, though, that C++ does have some mechanisms that handle memory for you, as others have mentioned, e.g. RAII, smart pointers, etc.
C++ has no design philosophy with respect to memory. All it has are two ways of allocating memory (new and malloc) and two ways of freeing it (delete and free), plus a few related functions. Even those can be replaced by the programmer.
This is because C++ aims to run on generic computers. A generic computer is a CPU, memory (RAM, varieties of ROM), and a bus (or busses) for peripherals. There is no built-in memory management on generic computers.
Now, most computers come with memory (typically a ROM variant) which contains a BIOS/monitor. There you might see some rudimentary form of memory management, but probably not.
Some computers come with OSes that have memory management, but even there it is often primitive, and it is easy for me to claim that most computers running a C++ program have no OS at all.
If you expect C++ to run on any computer, it cannot have a memory management philosophy.