So I recently got into learning C++ and I was really intrigued by memory allocation, garbage collection, heap, stack, etc...
I wrote this program, and I was wondering what it would do to my computer? Would it crash because my RAM would get filled up? Would it run slower? Would it stop before it fills up too much ram?
I am too afraid of running it on my machine.
#include <iostream>
using namespace std;

int main()
{
    while (true) {
        int* x = new int;
    }
}
EDIT:
So I ran the program on my PC and it really is just a RAM-eating program. I kept it going until Windows Explorer failed and my desktop went black. Then I rebooted the PC and it was fine.
It actually depends on the host system on which the program is run, and the compiler.
Some compilers (particularly with optimisation settings enabled) are smart enough to recognize that the memory allocated in the loop is never used in a way that affects program output. The compiler may then omit the allocation altogether and, since an infinite loop with no side effects yields undefined behaviour in C++, may then do anything. Possible outcomes include the program looping forever (consuming CPU cycles but not allocating memory), or the compiler not generating the loop at all (so the program immediately terminates).
If compiler optimisation is not in play (e.g. a non-optimised build) the program may loop and repeatedly allocate memory. What happens then depends on the operating system.
Some operating systems - such as Unix variants - can be configured (or are configured by default) to do lazy allocation or overcommitting, in the sense that they won't actually allocate virtual or physical memory until the program tries to non-trivially use that memory (e.g. stores data in it). Since your loop never uses the memory it allocates, on a system doing lazy allocation the loop may run indefinitely - with no observable effect on the host system other than lost CPU cycles.
If the OS is not configured to do lazy allocation (i.e. it actually commits the memory immediately), the loop will continue until an allocation fails, at which point operator new throws a std::bad_alloc exception; if that exception is not caught, the program terminates.
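That failure path can be observed safely without exhausting real memory: request an allocation larger than the address space could ever satisfy, and operator new fails immediately. A minimal sketch (the exact size at which allocation fails is system-dependent, so this only probes the two extremes):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Returns true if operator new[] reported failure by throwing.
// std::bad_array_new_length (thrown on size overflow) derives from
// std::bad_alloc, so one catch clause covers both cases.
bool allocation_fails(std::size_t n_ints) {
    try {
        int* p = new int[n_ints];
        delete[] p;            // clean up if the allocation succeeded
        return false;
    } catch (const std::bad_alloc&) {
        return true;
    }
}
```

On a typical 64-bit system, allocation_fails(SIZE_MAX / sizeof(int)) returns true (the request can never be satisfied), while a small request like allocation_fails(16) returns false.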
What happens during that process depends, again, on the operating system.
If the operating system imposes a limited quota to the process that is significantly less than the total memory (physical or virtual) available to the operating system, then other processes (and the OS itself) may be unaffected by our program.
If the operating system allows the program to allocate all available memory, then the repeated allocation in this program may affect the behaviour of other programs, or the operating system itself. If the operating system is able to detect that, it may forcibly terminate the program - and, in that process, reclaim the allocated memory. If the operating system does not detect that behaviour and terminate the program, the program may cause a slow-down of the operating system.
If your operating system itself is hosted (e.g. in a hypervisor environment) then the operating system itself may be terminated, but other instances of operating systems running on the same physical machine will normally be unaffected.
It has been at least two decades since a novice programmer could unleash such a program on an unsuspecting machine, and be even a little confident that the machine will eventually slow down and possibly crash, or need a hard power cycle. Operating system design, and larger system designs (e.g. routine use of hypervisors) make such pranks less likely to have any enduring effect.
The first thing you will run into is a memory leak.
Your program will keep allocating heap memory that it never releases, but the OS will eventually stop it. There is a concept in operating systems (Linux, for example) called the OOM (out-of-memory) killer: if a program keeps allocating memory to the point where an important part of the system is endangered, the OS immediately sends it a SIGKILL signal. This kills your program, and the allocated memory is then freed.
We are all taught that you MUST free every pointer that is allocated. I'm a bit curious, though, about the real cost of not freeing memory. In some obvious cases, like when malloc() is called inside a loop or part of a thread execution, it's very important to free so there are no memory leaks. But consider the following two examples:
First, if I have code that's something like this:
int main()
{
char *a = malloc(1024);
/* Do some arbitrary stuff with 'a' (no alloc functions) */
return 0;
}
What's the real result here? My thinking is that the process dies and then the heap space is gone anyway so there's no harm in missing the call to free (however, I do recognize the importance of having it anyway for closure, maintainability, and good practice). Am I right in this thinking?
Second, let's say I have a program that acts a bit like a shell. Users can declare variables like aaa = 123 and those are stored in some dynamic data structure for later use. Clearly, it seems obvious that you'd use some solution that calls some *alloc function (a hashmap, linked list, something like that). For this kind of program, it doesn't make sense to ever free after calling malloc because these variables must be present at all times during the program's execution and there's no good way (that I can see) to implement this with statically allocated space. Is it bad design to have a bunch of memory that's allocated but only freed as part of the process ending? If so, what's the alternative?
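For concreteness, the kind of structure I mean is something like the following sketch (all names invented): a linked list of variables whose nodes are allocated on demand and, by design, only ever released by the OS at process exit.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// One shell variable; nodes live until the process exits.
struct Var {
    char* name;
    char* value;
    Var*  next;
};

static Var* g_vars = nullptr;   // head of the list, never freed

// Heap-duplicate a string (avoids relying on POSIX strdup).
static char* dup_str(const char* s) {
    char* p = static_cast<char*>(std::malloc(std::strlen(s) + 1));
    std::strcpy(p, s);
    return p;
}

// Create or overwrite a variable ("aaa = 123" style assignment).
void set_var(const char* name, const char* value) {
    for (Var* v = g_vars; v; v = v->next) {
        if (std::strcmp(v->name, name) == 0) {
            std::free(v->value);            // only the old value is replaced
            v->value = dup_str(value);
            return;
        }
    }
    Var* v = static_cast<Var*>(std::malloc(sizeof(Var)));
    v->name  = dup_str(name);
    v->value = dup_str(value);
    v->next  = g_vars;
    g_vars   = v;
}

// Look a variable up; returns nullptr if it was never set.
const char* get_var(const char* name) {
    for (Var* v = g_vars; v; v = v->next)
        if (std::strcmp(v->name, name) == 0)
            return v->value;
    return nullptr;
}
```

Nothing here is ever handed back with free() except a value being overwritten; the list itself persists until the process ends.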
Just about every modern operating system will recover all the allocated memory space after a program exits. The only exception I can think of might be something like Palm OS where the program's static storage and runtime memory are pretty much the same thing, so not freeing might cause the program to take up more storage. (I'm only speculating here.)
So generally, there's no harm in it, except the runtime cost of having more storage than you need. Certainly in the example you give, you want to keep the memory for a variable that might be used until it's cleared.
However, it's considered good style to free memory as soon as you don't need it any more, and to free anything you still have around on program exit. It's more of an exercise in knowing what memory you're using, and thinking about whether you still need it. If you don't keep track, you might have memory leaks.
On the other hand, the similar admonition to close your files on exit has a much more concrete result - if you don't, the data you wrote to them might not get flushed, or if they're temp files, they might not get deleted when you're done. Also, database handles should have their transactions committed and then be closed when you're done with them. Similarly, if you're using an object-oriented language like C++ or Objective-C, not freeing an object when you're done with it means the destructor will never get called, and any resources the class is responsible for might not get cleaned up.
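That destructor point is easy to demonstrate: a leaked C++ object's destructor simply never runs, so whatever cleanup it was responsible for is silently skipped. A small sketch with an invented flag-setting type standing in for a real resource:

```cpp
#include <cassert>

// A type whose destructor records that cleanup happened,
// standing in for releasing a lock, closing a file, etc.
struct Resource {
    bool* cleaned_up;
    explicit Resource(bool* flag) : cleaned_up(flag) {}
    ~Resource() { *cleaned_up = true; }
};

bool destructor_runs_for_scoped_object() {
    bool flag = false;
    { Resource r(&flag); }      // destructor runs at end of scope
    return flag;                // true: cleanup happened
}

bool destructor_runs_for_leaked_object() {
    static bool flag = false;
    new Resource(&flag);        // deliberately leaked: delete never called
    return flag;                // still false: cleanup never happened
}
```

The OS will reclaim the leaked object's bytes at exit, but it has no idea the destructor existed, let alone what it was supposed to do.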
Yes you are right, your example doesn't do any harm (at least not on most modern operating systems). All the memory allocated by your process will be recovered by the operating system once the process exits.
Source: Allocation and GC Myths (PostScript alert!)
Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.

In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
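The quote's claim that "malloc is much easier to implement if there is no free" can be made concrete with a bump (arena) allocator sketch: allocation is just a pointer increment with no per-block bookkeeping, and everything is reclaimed at once (here by an explicit reset; at process exit the OS does the equivalent). The pool size and names are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>

// A trivial "no free" allocator: hand out slices of one static pool.
static unsigned char g_pool[64 * 1024];
static std::size_t   g_used = 0;

// Allocation is a bump of an offset (with crude max-alignment);
// there is no per-block metadata because nothing is ever freed.
void* bump_alloc(std::size_t n) {
    std::size_t aligned = (g_used + alignof(std::max_align_t) - 1)
                          & ~(alignof(std::max_align_t) - 1);
    if (aligned + n > sizeof(g_pool)) return nullptr;  // pool exhausted
    g_used = aligned + n;
    return g_pool + aligned;
}

// "Deallocation" happens for everything at once, like process exit.
void bump_reset() { g_used = 0; }
```

Compare this with a general-purpose malloc/free, which needs free lists, block headers, and coalescing logic precisely because blocks can be returned in any order.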
That said, you should really try to avoid all memory leaks!
Second question: your design is OK. If you need to store something until your application exits, then it's fine to do so with dynamic memory allocation. If you don't know the required size upfront, you can't use statically allocated memory.
=== What about future proofing and code reuse? ===
If you don't write the code to free the objects, then you are limiting the code to only being safe to use when you can depend on the memory being freed by the process being closed, i.e. small one-time-use or "throw-away"[1] projects, where you know when the process will end.
If you do write the code that free()s all your dynamically allocated memory, then you are future proofing the code and letting others use it in a larger project.
[1] Regarding "throw-away" projects: code used in throw-away projects has a way of not being thrown away. The next thing you know, ten years have passed and your "throw-away" code is still being used.
I heard a story about some guy who wrote some code just for fun to make his hardware work better. He said "just a hobby, won't be big and professional". Years later lots of people are using his "hobby" code.
You are correct, no harm is done and it's faster to just exit
There are various reasons for this:
All desktop and server environments simply release the entire memory space on exit(). They are unaware of program-internal data structures such as heaps.
Almost all free() implementations do not ever return memory to the operating system anyway.
More importantly, it's a waste of time when done right before exit(). At exit, memory pages and swap space are simply released. By contrast, a series of free() calls will burn CPU time and can result in disk paging operations, cache misses, and cache evictions.
Regarding the possibility of future code reuse justifying the certainty of pointless ops: that's a consideration, but it's arguably not the Agile way. YAGNI!
I completely disagree with everyone who says OP is correct or there is no harm.
Everyone is talking about a modern and/or legacy OS's.
But what if I'm in an environment where I simply have no OS?
Where there isn't anything?
Imagine now that you are allocating memory inside thread-style interrupt handlers.
The C standard, ISO/IEC 9899, states the lifetime of allocated memory as follows:

7.20.3 Memory management functions

1 The order and contiguity of storage allocated by successive calls to the calloc, malloc, and realloc functions is unspecified. The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated). The lifetime of an allocated object extends from the allocation until the deallocation. [...]
So it is not a given that the environment does the freeing job for you. Otherwise, the last sentence would read: "or until the program terminates."
In other words: not freeing memory is not just bad practice. It produces non-portable code that does not conform to C. At best, such code can be seen as "correct, if the following [...] is supported by the environment".
But in cases where you have no OS at all, no one does the job for you.
(I know you generally don't allocate and reallocate memory on embedded systems, but there are cases where you may want to.)
So, speaking of plain C in general (which is how the OP is tagged), this simply produces erroneous and non-portable code.
I typically free every allocated block once I'm sure that I'm done with it. Today, my program's entry point might be main(int argc, char *argv[]), but tomorrow it might be foo_entry_point(char **args, struct foo *f) and typed as a function pointer.
So, if that happens, I now have a leak.
Regarding your second question, if my program took input like a=5, I would allocate space for a, or re-allocate the same space on a subsequent a="foo". This would remain allocated until:
The user typed 'unset a'
My cleanup function was entered, either servicing a signal or the user typed 'quit'
I cannot think of any modern OS that does not reclaim memory after a process exits. Then again, free() is cheap, so why not clean up? As others have said, tools like valgrind are great for spotting leaks that you really do need to worry about. Even though the blocks in your example would be labeled as 'still reachable', they are just extra noise in the output when you're trying to ensure you have no leaks.
Another myth is "If it's in main(), I don't have to free it". This is incorrect. Consider the following:
char *t;
int i;

for (i = 0; i < 255; i++) {
    t = strdup(foo->name);
    let_strtok_eat_away_at(t);
}
If that came prior to forking / daemonizing (and in theory running forever), your program has just leaked the memory for t 255 times.
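A leak-free version of that loop frees each copy once it is no longer needed. Sketched here with a hypothetical consume() standing in for let_strtok_eat_away_at(), and a manual copy in place of strdup() for portability:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Stand-in for let_strtok_eat_away_at(): here it just counts characters.
static std::size_t consume(char* s) { return std::strlen(s); }

// Duplicate-and-process a name 255 times without leaking:
// each heap copy is freed at the end of its own iteration.
std::size_t process_name_repeatedly(const char* name) {
    std::size_t total = 0;
    for (int i = 0; i < 255; i++) {
        char* t = static_cast<char*>(std::malloc(std::strlen(name) + 1));
        std::strcpy(t, name);
        total += consume(t);
        std::free(t);            // the fix: release each copy in the loop
    }
    return total;
}
```

The point is that the free() belongs inside the loop body, paired with the allocation, not deferred to some cleanup at the end of main().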
A good, well written program should always clean up after itself. Free all memory, flush all files, close all descriptors, unlink all temporary files, etc. This cleanup function should be reached upon normal termination, or upon receiving various kinds of fatal signals, unless you want to leave some files laying around so you can detect a crash and resume.
Really, be kind to the poor soul who has to maintain your stuff when you move on to other things .. hand it to them 'valgrind clean' :)
It is completely fine to leave memory unfreed when you exit; malloc() allocates the memory from the memory area called "the heap", and the complete heap of a process is freed when the process exits.
That being said, one reason why people still insist that it is good to free everything before exiting is that memory debuggers (e.g. valgrind on Linux) detect the unfreed blocks as memory leaks, and if you have also "real" memory leaks, it becomes more difficult to spot them if you also get "fake" results at the end.
This code will usually work alright, but consider the problem of code reuse.
You may have written a code snippet that doesn't free allocated memory, and it is run in such a way that the memory is then automatically reclaimed. Seems all right.
Then someone else copies your snippet into his project in such a way that it is executed one thousand times per second. That person now has a huge memory leak in his program. Not very good in general, usually fatal for a server application.
Code reuse is typical in enterprises. Usually the company owns all the code its employees produce and every department may reuse whatever the company owns. So by writing such "innocently-looking" code you cause potential headache to other people. This may get you fired.
What's the real result here?
Your program leaked the memory. Depending on your OS, it may have been recovered.
Most modern desktop operating systems do recover leaked memory at process termination, making it sadly common to ignore the problem (as can be seen by many other answers here.)
But you are relying on a safety feature that is not part of the language, one you should not rely upon. Next time, your code might run on a system where this behaviour does result in a "hard" memory leak.
Your code might end up running in kernel mode, or on vintage / embedded operating systems which do not employ memory protection as a tradeoff. (MMUs take up die space, memory protection costs additional CPU cycles, and it is not too much to ask from a programmer to clean up after himself).
You can use and re-use memory (and other resources) any way you like, but make sure you deallocated all resources before exiting.
If you're using the memory you've allocated, then you're not doing anything wrong. It becomes a problem when you write functions (other than main) that allocate memory without freeing it, and without making it available to the rest of your program. Then your program continues running with that memory allocated to it, but no way of using it. Your program and other running programs are deprived of that memory.
Edit: It's not 100% accurate to say that other running programs are deprived of that memory. The operating system can always let them use it at the expense of swapping your program out to virtual memory (</handwaving>). The point is, though, that if your program frees memory that it isn't using then a virtual memory swap is less likely to be necessary.
There's actually a section in the OSTEP online textbook for an undergraduate course in operating systems which discusses exactly your question.
The relevant section is "Forgetting To Free Memory" in the Memory API chapter on page 6 which gives the following explanation:
In some cases, it may seem like not calling free() is reasonable. For example, your program is short-lived, and will soon exit; in this case, when the process dies, the OS will clean up all of its allocated pages and thus no memory leak will take place per se. While this certainly "works" (see the aside on page 7), it is probably a bad habit to develop, so be wary of choosing such a strategy.
This excerpt is in the context of introducing the concept of virtual memory. Basically at this point in the book, the authors explain that one of the goals of an operating system is to "virtualize memory," that is, to let every program believe that it has access to a very large memory address space.
Behind the scenes, the operating system will translate "virtual addresses" the user sees to actual addresses pointing to physical memory.
However, sharing resources such as physical memory requires the operating system to keep track of what processes are using it. So if a process terminates, then it is within the capabilities and the design goals of the operating system to reclaim the process's memory so that it can redistribute and share the memory with other processes.
EDIT: The aside mentioned in the excerpt is copied below.
ASIDE: WHY NO MEMORY IS LEAKED ONCE YOUR PROCESS EXITS
When you write a short-lived program, you might allocate some space using malloc(). The program runs and is about to complete: is there need to call free() a bunch of times just before exiting? While it seems wrong not to, no memory will be "lost" in any real sense. The reason is simple: there are really two levels of memory management in the system. The first level of memory management is performed by the OS, which hands out memory to processes when they run, and takes it back when processes exit (or otherwise die). The second level of management is within each process, for example within the heap when you call malloc() and free(). Even if you fail to call free() (and thus leak memory in the heap), the operating system will reclaim all the memory of the process (including those pages for code, stack, and, as relevant here, heap) when the program is finished running. No matter what the state of your heap in your address space, the OS takes back all of those pages when the process dies, thus ensuring that no memory is lost despite the fact that you didn't free it.

Thus, for short-lived programs, leaking memory often does not cause any operational problems (though it may be considered poor form). When you write a long-running server (such as a web server or database management system, which never exit), leaked memory is a much bigger issue, and will eventually lead to a crash when the application runs out of memory. And of course, leaking memory is an even larger issue inside one particular program: the operating system itself. Showing us once again: those who write the kernel code have the toughest job of all...
from Page 7 of Memory API chapter of
Operating Systems: Three Easy Pieces
Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau
Arpaci-Dusseau Books
March, 2015 (Version 0.90)
There's no real danger in not freeing your variables, but if you assign a pointer to a block of memory to a different block of memory without freeing the first block, the first block is no longer accessible but still takes up space. This is what's called a memory leak, and if you do this with regularity then your process will start to consume more and more memory, taking away system resources from other processes.
If the process is short-lived you can often get away with doing this as all allocated memory is reclaimed by the operating system when the process completes, but I would advise getting in the habit of freeing all memory you have no further use for.
You are correct, memory is automatically freed when the process exits. Some people strive not to do extensive cleanup when the process is terminated, since it will all be relinquished to the operating system. However, while your program is running you should free unused memory. If you don't, you may eventually run out or cause excessive paging if your working set gets too big.
You are absolutely correct in that respect. In small trivial programs where a variable must exist until the death of the program, there is no real benefit to deallocating the memory.
In fact, I had once been involved in a project where each execution of the program was very complex but relatively short-lived, and the decision was to just keep memory allocated and not destabilize the project by making mistakes deallocating it.
That being said, in most programs this is not really an option, or it can lead you to run out of memory.
It depends on the scope of the project that you're working on. In the context of your question, and I mean just your question, then it doesn't matter.
For a further explanation (optional), some scenarios I have noticed from this whole discussion is as follow:
(1) - If you're working in an embedded environment where you cannot rely on the OS to reclaim the memory for you, then you should free it, since memory leaks can really crash the program if they go unnoticed.
(2) - If you're working on a personal project that you won't disclose to anyone else, then you can skip it (assuming you're running it on a mainstream OS) or include it for "best practices" sake.
(3) - If you're working on a project and plan to have it open source, then you need to do more research into your audience and figure out whether freeing the memory would be the better choice.
(4) - If you have a large library and your audience consists only of mainstream OSes, then you don't need to free it, as their OS will do so for them. In the meantime, by not freeing, your libraries/program may make overall performance snappier, since the program does not have to tear down every data structure, prolonging the shutdown time (imagine a very slow, excruciating wait to shut down your computer before leaving the house...)
I can go on and on specifying which course to take, but it ultimately depends on what you want to achieve with your program. Freeing memory is considered good practice in some cases and not so much in some so it ultimately depends on the specific situation you're in and asking the right questions at the right time. Good luck!
If you're developing an application from scratch, you can make some educated choices about when to call free. Your example program is fine: it allocates memory, maybe you have it work for a few seconds, and then closes, freeing all the resources it claimed.
If you're writing anything else, though -- a server/long-running application, or a library to be used by someone else, you should expect to call free on everything you malloc.
Ignoring the pragmatic side for a second, it's much safer to follow the stricter approach, and force yourself to free everything you malloc. If you're not in the habit of watching for memory leaks whenever you code, you could easily spring a few leaks. So in other words, yes -- you can get away without it; please be careful, though.
If a program forgets to free a few megabytes before it exits, the operating system will free them. But if your program runs for weeks at a time and a loop inside it forgets to free a few bytes in each iteration, you will have a mighty memory leak that will eat up all the available memory in your computer unless you reboot on a regular basis. So even small memory leaks can be bad if the program is used for a seriously big task, even if it wasn't originally designed for one.
It depends on the OS environment the program is running in, as others have already noted, and for long running processes, freeing memory and avoiding even very slow leaks is important always. But if the operating system deals with stuff, as Unix has done for example since probably forever, then you don't need to free memory, nor close files (the kernel closes all open file descriptors when a process exits.)
If your program allocates a lot of memory, it may even be beneficial to exit without "hesitation". I find that when I quit Firefox, it spends several (!) minutes paging in gigabytes of memory across many processes. I guess this is due to having to call destructors on C++ objects. This is actually terrible. Some might argue that this is necessary to save state consistently, but in my opinion, long-running interactive programs like browsers, editors, and design programs (just to mention a few) should ensure that any state information - preferences, open windows/pages, documents, etc. - is frequently written to permanent storage, to avoid loss of work in case of a crash. Then this state-saving can be performed again quickly when the user elects to quit, and when it completes, the processes should just exit immediately.
All memory allocated to this process will be marked unused by the OS and then reused, because the memory allocation was done by user-space functions.

Imagine the OS is a god, and memory is the material for creating a world for each process: the god uses some of the material to create a world (that is, the OS reserves some memory and creates a process in it). No matter what the creatures in that world do, material that does not belong to their world is not affected. After the world expires, the OS - the god - can recycle the material that was allocated to it.

Modern OSes may differ in the details of how they release user-space memory, but doing so is a basic duty of any OS.
I think that your two examples are actually only one: the free() would occur only at the end of the process, which, as you point out, is useless since the process is terminating.

In your second example, though, the only difference is that you allow an undefined number of malloc() calls, which could lead to running out of memory. The only way to handle that situation is to check the return code of malloc() and act accordingly.
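Checking the return code means treating allocation failure as a normal, recoverable outcome rather than crashing on a null pointer. A minimal sketch (function name and interface invented for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Try to store a copy of the user's input, reporting failure
// instead of dereferencing a null pointer from malloc.
bool store_value(const char* input, char** out) {
    char* copy = static_cast<char*>(std::malloc(std::strlen(input) + 1));
    if (copy == nullptr) {       // allocation failed: act accordingly
        *out = nullptr;
        return false;            // caller can report "out of memory"
    }
    std::strcpy(copy, input);
    *out = copy;
    return true;
}
```

In a shell-like program, the caller would print an error and refuse the assignment when store_value returns false, instead of dying mid-session.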
Suppose I had a program like this:
int main(void)
{
    int x = 100;
    int* arr = new int[x];
    // processing; neglect to call delete[]
    return 0;
}
In a trivial example such as this, I assume there is little actual harm in neglecting to free the memory allocated for arr, since it should be released by the OS when the program is finished running. For any non-trivial program, however, this is considered to be bad practice and will lead to memory leaks.
My question is, what are the consequences of memory leaks in a non-trivial program? I realize that memory leaks are bad practice, but I do not understand why they are bad and what trouble they cause.
A memory leak can diminish the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down unacceptably due to thrashing.
Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious.
Much more serious leaks include those:
where the program runs for an extended time and consumes additional memory over time, such as background tasks on servers, but especially in embedded devices which may be left running for many years
where new memory is allocated frequently for one-time tasks, such as when rendering the frames of a computer game or animated video
where the program can request memory — such as shared memory — that is not released, even when the program terminates
where memory is very limited, such as in an embedded system or portable device
where the leak occurs within the operating system or memory manager
when a system device driver causes the leak
running on an operating system that does not automatically release memory on program termination. Often on such machines if memory is lost, it can only be reclaimed by a reboot, an example of such a system being AmigaOS.
There is an underlying assumption to your question:
The role of delete and delete[] is solely to release memory.
... and it is erroneous.
For better or worse, delete and delete[] have a dual role:
Run destructors
Free memory (by calling the right overload of operator delete)
With the corrected assumption, we can now ask the corrected question:
What is the risk in not calling delete/delete[] to end the lifetime of dynamically allocated variables?
As mentioned, an obvious risk is leaking memory (and ultimately crashing). However this is the least of your worries. The much bigger risk is undefined behavior, which means that:
the compiler may not produce executable code that behaves as expected: garbage in, garbage out.
in pragmatic terms, the most likely outcome is simply that the destructors are not run...
The latter is extremely worrisome:
Mutexes: Forget to release a lock and you get a deadlock...
File Descriptors: Some platforms (such as FreeBSD I believe) have a notoriously low default limit on the number of file descriptors a process may open; fail to close your file descriptors and you will not be able to open any new file or socket!
Sockets: on top of being a file descriptor, there is a limited range of ports associated to an IP (which with the latest version of Linux is no longer global, yeah!). The absolute maximum is 65,536 (u16...) but the ephemeral port range is usually much smaller (half of it). If you forget to release connections in a timely fashion you can easily end up in a situation where even though you have plenty of bandwidth available, your server stops accepting new connections because there is no ephemeral port available.
...
The problem with the attitude of well, I got enough memory anyway is that memory is probably the least of your worries simply because memory is probably the least scarce resource you manipulate.
Of course you could say: okay, I'll concentrate on other resource leaks - but tools nowadays report them as memory leaks (and that's sufficient), so isolating that one leak among hundreds or thousands is like seeking a needle in a haystack...
Note: did I mention that you can still run out of memory? Whether on lower-end machines/systems or in restricted processes/virtual machines, memory can be quite tight for the task at hand.
Note: if you find yourself calling delete, you are doing it wrong. Learn to use the Standard Library std::unique_ptr and its containers std::vector. In C++, automatic memory management is easy, the real challenge is to avoid dangling pointers...
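In modern C++ that advice looks like the following sketch: ownership is tied to scope, so the destructors (and whatever cleanup they perform) run without a single explicit delete.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Sum a dynamically sized buffer without any new/delete:
// std::vector owns the array and releases it at end of scope.
int sum_buffer(std::size_t n) {
    std::vector<int> arr(n, 1);          // replaces new int[n]
    int total = 0;
    for (int v : arr) total += v;
    return total;
}                                        // memory freed here, automatically

// A single dynamically allocated object, owned by unique_ptr:
// delete is called exactly once, even on early return or exception.
int read_through_unique_ptr(int value) {
    auto p = std::make_unique<int>(value);   // replaces new int(value)
    return *p;
}                                            // deleted here
```

Neither function can leak or double-free, and neither leaves a dangling pointer behind, which is exactly the "real challenge" the note mentions.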
Let's say we have this program running:
while(true)
{
int* arr = new int;
}
The short term problem is that your computer will eventually run out of memory and the program will crash.
Instead, we could have this program that would run forever because there is no memory leak:
while(true)
{
int* arr = new int;
delete arr;
}
When a simple program like this crashes, there are no long-term consequences, because the operating system frees the memory after the crash.
But you can imagine more critical systems where a system crash will have catastrophic consequences such as:
while(true)
{
int* arr = new int;
generateOxygenForAstronauts();
}
Think about the astronauts and free your memory!
A tool that runs for a short period of time and then exits can often get away with having memory leaks, as your example indicates. But a program that is expected to run without failure for long periods of time must be completely free of memory leaks. As others have mentioned, the whole system will bog down first. Additionally, code that leaks memory often is very bad at handling allocation failures - the result of a failed allocation is usually a crash and loss of data. From the user's perspective, this crash usually happens at exactly the worst possible moment (e.g. during file save, when file buffers get allocated).
Well, it is a strange question, since the immediate answer is straightforward: as you lose memory to memory leaks, you can and eventually will run out of memory. How big a problem that represents for a specific program depends on how big each leak is, how often the leaks occur, and for how long the program runs. That's all there is to it.
A program that allocates relatively low amount of memory and/or is not run continuously might not suffer any problems from memory leaks at all. But a program that is run continuously will eventually run out of memory, even if it leaks it very slowly.
Now, if one decided to look at it closer, every block of memory has two sides to it: it occupies a region of addresses in the address space of the process and it occupies a portion of the actual physical storage.
On a platform without virtual memory, both sides work against you. Once the memory block is leaked, you lose the address space and you lose the storage.
On a platform with virtual memory the actual storage is a practically unlimited resource. You can leak as much memory as you want, you will never run out of the actual storage (within practically reasonable limits, of course). A leaked memory block will eventually be pushed out to external storage and forgotten for good, so it will not directly affect the program in any negative way. However, it will still hold its region of address space. And the address space still remains a limited resource, which you can run out of.
One can say, that if we take an imaginary virtual memory platform with address space that is overwhelmingly larger than anything ever consumable by our process (say, 2048-bit platform and a typical text editor), then memory leaks will have no consequence for our program. But in real life memory leaks typically constitute a serious problem.
Nowadays compilers perform optimisations on your code before generating the binary, so a single new whose result is never used may be optimised away and do no real harm.
But in general, for every new you should delete the portion of memory you have reserved in your program.
Also be aware that simply deleting is not enough to guarantee that you won't run out of memory; both the operating system and the compiler/runtime have a say in how allocations actually behave.
My question has two parts:
Is it possible that, if a segfault occurs after allocating memory but before freeing it, this leaks memory (that is, the memory is never freed resulting in a memory leak)?
If so, is there any way to ensure allocated memory is cleaned up in the event of a segfault?
I've been reading about memory management in C++ but was unable to find anything about my specific question.
In the event of a seg fault, the OS is responsible for cleaning up all the resources held by your program.
Edit:
Modern operating systems will clean up any leaked memory regardless of how your program terminates. The memory is only leaked for the life of your program. Most OS's will also clean up many other types of resources such as open files and socket connections.
Is it possible that, if a segfault occurs after allocating memory but before freeing it, this leaks memory (that is, the memory is never freed resulting in a memory leak)?
Yes and no: the process that crashes will be tidied up completely by the OS. However, consider other processes spawned by your process: they might not get terminated completely. Usually these shouldn't hold too many resources, but that may vary depending on your program. See http://en.wikipedia.org/wiki/Zombie_process
If so, is there any way to ensure allocated memory is cleaned up in the event of a segfault?
If the program is non-critical (meaning there are no lives at stake if it crashes), I suggest simply fixing the segmentation fault. If you really need to be able to handle segmentation faults, see the answers to this topic: How to catch segmentation fault in Linux?
UPDATE: Please note that although it is possible to handle SIGSEGV signals (and continue in the program flow), this is not a safe thing to rely on, since - as pointed out in the comments below - it is undefined behaviour, meaning different platforms/compilers/... may react differently.
So, by any means possible, fixing segmentation faults (as well as access violations on Windows) should have first priority. If you still use the suggested solution to handle signals, it must be thoroughly tested, and if it is put into production code you must be aware of the consequences - which may vary and depend on your requirements, so I will not name any.
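For completeness, here is roughly what the linked approach looks like on Linux. This is a sketch only: jumping out of a SIGSEGV handler is undefined behaviour as far as the C++ standard is concerned, and the demo raises the signal artificially rather than dereferencing a bad pointer:

```cpp
#include <csignal>
#include <setjmp.h>   // sigsetjmp/siglongjmp are POSIX, not standard C++

static sigjmp_buf recovery_point;

extern "C" void on_segv(int) {
    siglongjmp(recovery_point, 1);   // jump back out of the handler
}

// Returns true if we "survived" the signal.
bool survive_segv() {
    std::signal(SIGSEGV, on_segv);
    if (sigsetjmp(recovery_point, 1) == 0) {
        std::raise(SIGSEGV);   // stand-in for a real fault
        return false;          // never reached
    }
    return true;               // we landed here via siglongjmp
}
```

With a real hardware fault, the faulting instruction would simply restart (and fault again) if the handler returned normally, which is why the longjmp is needed - and part of why this technique is so fragile.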
The C++ standard does not concern itself with seg-faults (that's a platform-specific thing).
In practice, it really depends on what you do, and what your definition of "memory leak" is. In theory, you can register a handler for the seg-fault signal, in which you can do all necessary cleanup. However, any modern OS will automatically clear up a terminating process anyway.
One, there are resources the system is responsible for cleaning up. One of them is memory. You do not have to worry about permanently leaked RAM on a segfault.
Two, there are resources that the system is not responsible for cleaning up. You could write a program that inserts its pid into a database and removes it on close. That won't be removed on a segfault. You can either 1) add a handler to clean up that sort of non-system resources, or 2) fix the bugs in your program to begin with.
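A hedged sketch of option 1 (the file name and handler names are hypothetical, and calling stdio from a signal handler is not strictly async-signal-safe - a production version would restrict itself to async-signal-safe calls):

```cpp
#include <csignal>
#include <cstdio>

static const char* kPidFile = "/tmp/myapp.pid";   // hypothetical pid file

void remove_pid_file() {
    std::remove(kPidFile);               // best effort; ignore errors
}

extern "C" void on_fatal_signal(int sig) {
    remove_pid_file();                   // the OS frees memory, but not this file
    std::signal(sig, SIG_DFL);           // restore the default action...
    std::raise(sig);                     // ...and re-raise so the process still dies
}

void install_cleanup_handlers() {
    std::signal(SIGSEGV, on_fatal_signal);
    std::signal(SIGTERM, on_fatal_signal);
}
```

Re-raising the signal after cleanup preserves the normal crash behaviour (exit status, core dump) instead of silently swallowing the fault.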
Modern operating systems separate applications' memory as to be able to clean up after them. The problem with segmentation faults is that they normally only occur when something goes wrong. At this point, the default behavior is to shut down the application because it's no longer functioning as expected.
Likewise, unless under some bizarre circumstance, your application has likely done something you cannot account for if it has hit a segmentation fault. So, "cleaning up" is likely practically impossible. It's not out of the realm of possibility to use memory carefully similar to transactional databases in order to guarantee an acceptable roll-back state, however doing so (and depending on how fine-grained of a level) may be beyond tedious.
A more practical version may be to provide your own sort of sandboxing between application components and reboot a component if it dies, restoring it to an acceptable previously saved state wholesale. That way you can flush all of its allocated memory and have it start from scratch. You still lose whatever data it hadn't saved as of the last checkpoint, however.
I know some Java and am right now trying C++ instead, and apparently in C++ you can do things like declare an int array of size 6, then change the 10th element of that array, which I understand to be simply the 4th int-sized slot after the end of the section of memory that was allocated for the 6-integer array.
So my question is, if I'm careless is it possible to accidentally alter memory in my C++ program that is being used by other programs on my system? Is there an actual risk of seriously messing something up this way? I mean I know you can just restart your computer and clear the memory if you have to, but if I don't do that, there could be some lasting damage.
It depends on your system. Formally, an out-of-bounds access is undefined behavior. On a modern general-purpose system, each user process has its own address space, and one process can't modify, or even read, that of another process (barring shared memory), so unless you're writing kernel code, you shouldn't be able to break anything outside of your own process (and non-kernel code can't normally do physical IO, so I don't see how anything in the hardware could break).
If you're writing kernel code, however, or working on an embedded processor with no memory mapping or protection, you can literally destroy hardware with an out-of-bounds write; if the program is controlling something like a nuclear power plant, you can even destroy a lot more than just the machine your code is running on.
Each process is given its own virtual address space, so naturally processes don't see each other's memory. Don't forget that even a buffer overrun that is local to your program can have dire consequences - the overrun may cause the program to misbehave and do something with a lasting effect (like deleting all your files, for example).
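One way to turn such an overrun from silent corruption into a visible error (a sketch; the helper name is made up) is bounds-checked access via std::vector::at:

```cpp
#include <vector>
#include <stdexcept>
#include <cstddef>

// at() checks the index and throws std::out_of_range instead of
// quietly writing past the end of the buffer like operator[] would.
bool write_checked(std::vector<int>& v, std::size_t i, int value) {
    try {
        v.at(i) = value;
        return true;
    } catch (const std::out_of_range&) {
        return false;
    }
}
```

With a vector of 6 ints, writing to index 9 (the "10th element" from the question) is rejected cleanly rather than scribbling past the end of the allocation.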
This depends on what operating system and environment you are in:
Normal OS (Windows, Linux etc.) userspace program: You can only mess up your own process's memory. However, with really bad luck even this can be enough. Imagine, for example, that you make a call to some function that deletes files. If your memory is corrupted at the time of the call, the parameters to the function might be messed up to mean deletion of something other than you intended. As long as you refrain from calling file-deletion routines etc. in the programs where you test memory handling, this risk is non-existent.
Normal OS, kernel space device driver: You can access system memory and the memory of the currently running process, possibly destroying everything.
Simple embedded OS without memory protection: You can access everything and destroy anything.
Legacy OS without memory protection (Win 3.x, MS-DOS): You can access everything and destroy anything.
Every program runs in its own virtual address space, and one program cannot access (read/modify) any other program's address space (this isolation is provided by virtual memory and hardware memory protection, typically implemented via paging).
If you try to access an address in memory that your program cannot read it will cause a segmentation or page fault and your program will crash.
In answer to your question, no permanent damage will be caused.
I'm not sure a modern OS (especially Windows 7) even lets you get away with that. The OS will terminate the process for the buffer-overrun action you've described rather than let it touch anything else.
Back in the day (DOS times) some viruses would try to damage hardware by directly programming the video or hard-drive controller, but even then it was not easy or a sure thing. Modern hardware and OSs make it practically impossible for user-level applications to damage hardware. So program away :) you won't break anything.
There is a different possibility, though. Having a buffer overrun might allow ill-intentioned people to exploit that bug and execute arbitrary code on your clients' computers. I guess that's bad enough. And the most dangerous part about an overrun is that you may not find it even after seriously extensive testing.