Memory overwrites in C++ code showing up in consistent locations

I have a very un-scientific observation about memory overwrites and was curious if anyone else has noticed something similar, knows why, and/or can tell me why I wasn't really seeing what I thought I was seeing.
What I noticed was that for some C++ programs, when I have a memory overwrite bug in that program, it would usually (if not always) show up in a specific section of code which was often unrelated to the section of code with the bug. This is not a blanket observation. Not all C++ programs behave this way. But when I have one, it is pretty consistent. (No comment on why my code has enough memory overwrites that I have the opportunity to notice consistent-anything :) )
I'm not asking why a memory overwrite in function1 can show up in function2; that is understood. My observation is that over the life of a given program, we have discovered memory overwrites in function1, function2, function3, function4, and function5. But in each case, we discovered the problem because the code would crash in function6. Always in function6 and only in function6. None of those functions are related to one another, and none of them touch anything that function6 uses.
Over my lifetime, I've encountered two C programs and one C++ program that behaved this way. These were years apart in unrelated systems and hardware. I just found it weird and wondered if anyone else has seen this. Plus, I suspect that I may be seeing the same pattern in a C++/JNI/Java program that I'm working on now, but it is young enough that I've not had enough hits to be sure of a pattern.

Perhaps the real question here is about what escalates "silent" memory corruption into an actual/formal crash (as opposed to "just" more subtle problems such as unexpected data values, that the user might or might not notice or recognize). I don't think that question can be answered generally, as it depends a lot on the specifics of the compiler, the code, the memory layout of the in-memory data structures, etc.
It can be said that in most modern (non-embedded) systems there is an MMU that handles translating virtual (i.e. per-process) memory addresses into physical memory addresses, and that many user-space crashes are the result of the MMU generating an unrecoverable page fault when the user program tries to dereference a virtual address that has no defined physical equivalent. So perhaps in this case function6() was trying to dereference a pointer whose value had been corrupted in such a way that the MMU couldn't translate it to a physical address. Note that the compiler often places its own pointers on the stack (to remember where the program's control flow should return to when a function returns), so a bad pointer dereference can happen even in code that doesn't explicitly dereference any pointers.
Another common cause for a crash would be a deliberately induced crash invoked by code that notices that the data it is working with is "in a state that should never happen" and calls abort() or similar. This can happen in user code that has assert() calls in it, or in system-provided code such as the code that manages the process's heap. So it could be that function6() tried to allocate or free heap memory, and in so doing gave the heap manager the chance to detect an "impossible state" in one of its data structures and crash out. Keep in mind that the process's heap is really just one big data structure that is shared by all parts of the program that use the heap, so it's not terribly surprising that heap corruption caused by one part of the program might result in a crash later on by another (mostly unrelated) part of the program that also uses the heap.
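As a hedged illustration of that last point (exact behavior depends on the malloc implementation; glibc on Linux is assumed here), a heap overrun in one function can stay silent until a completely unrelated function next exercises the heap:

#include <stdlib.h>
#include <string.h>

/* The function with the actual bug: it overruns its heap block and
   tramples the allocator's bookkeeping data that follows it. */
static void function1(void)
{
    char *buf = malloc(16);
    memset(buf, 'A', 64);   /* BUG: writes far past the 16 bytes owned */
    /* No crash here - the damage is still silent. */
}

/* Perfectly correct code, yet this is where the crash tends to surface:
   the allocator only notices its corrupted structures the next time it
   walks them, and exactly when that happens is implementation-specific. */
static void function6(void)
{
    char *other = malloc(32);
    free(other);
}

int main(void)
{
    function1();
    function6();   /* glibc typically aborts somewhere around here */
    return 0;
}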

Related

Why is a segmentation fault not recoverable?

Following a previous question of mine, most comments say "just don't, you are in a limbo state, you have to kill everything and start over". There is also a "safeish" workaround.
What I fail to understand is why a segmentation fault is inherently nonrecoverable.
The moment at which the write to protected memory occurs is evidently caught - otherwise, the SIGSEGV would not be sent.
If the moment of writing to protected memory can be caught, I don't see why - in theory - it can't be reverted, at some low level, and have the SIGSEGV converted to a standard software exception.
Please explain why after a segmentation fault the program is in an undetermined state, as very obviously, the fault is thrown before memory was actually changed (I am probably wrong and don't see why). Had it been thrown after, one could create a program that changes protected memory, one byte at a time, getting segmentation faults, and eventually reprogramming the kernel - a security risk that is not present, as we can see the world still stands.
When exactly does a segmentation fault happen (= when is SIGSEGV sent)?
Why is the process in an undefined behavior state after that point?
Why is it not recoverable?
Why does this solution avoid that unrecoverable state? Does it even?
When exactly does a segmentation fault happen (= when is SIGSEGV sent)?
When you attempt to access memory you don’t have access to, such as accessing an array out of bounds or dereferencing an invalid pointer. The signal SIGSEGV is standardized, but different OSes might implement it differently. "Segmentation fault" is mainly a term used on *nix systems; Windows calls it "access violation".
Why is the process in an undefined behavior state after that point?
Because one or several of the variables in the program didn’t behave as expected. Let’s say you have some array that is supposed to store a number of values, but you didn’t allocate enough room for all of them. Only the values you allocated room for get written correctly; the rest are written out of bounds of the array and may end up anywhere. How exactly is the OS to know how critical those out-of-bounds values are for your application to function? It knows nothing of their purpose.
Furthermore, writing outside allowed memory can often corrupt other unrelated variables, which is obviously dangerous and can cause any random behavior. Such bugs are often hard to track down. Stack overflows for example are such segmentation faults prone to overwrite adjacent variables, unless the error was caught by protection mechanisms.
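A minimal sketch of that adjacent-variable corruption (this is undefined behaviour, and the layout shown is merely one plausible outcome, not something the language guarantees):

#include <stdio.h>
#include <string.h>

int main(void)
{
    struct {
        char name[8];
        int  is_admin;              /* happens to sit right after name */
    } user = { "", 0 };

    /* BUG: 11 characters plus the terminator need 12 bytes, but name
       holds only 8 - the overflow silently lands in is_admin. */
    strcpy(user.name, "elevenchars");

    printf("is_admin = %d\n", user.is_admin);   /* likely nonzero now */
    return 0;
}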
If we look at the behavior of "bare metal" microcontroller systems without any OS and no virtual memory features, just raw physical memory - they will just silently do exactly as told, for example overwrite unrelated variables and keep on going. Which in turn could cause disastrous behavior if the application is mission-critical.
Why is it not recoverable?
Because the OS doesn’t know what your program is supposed to be doing.
Though in the "bare metal" scenario above, the system might be smart enough to place itself in a safe mode and keep going. Critical applications such as automotive and med-tech aren’t allowed to just stop or reset, as that in itself might be dangerous. They will rather try to "limp home" with limited functionality.
Why does this solution avoid that unrecoverable state? Does it even?
That solution just ignores the error and keeps on going. It doesn’t fix the problem that caused it. It’s a very dirty patch, and setjmp/longjmp in general are very dangerous functions that should be avoided for any purpose.
We have to realize that a segmentation fault is a symptom of a bug, not the cause.
Please explain why after a segmentation fault the program is in an undetermined state
I think this is your fundamental misunderstanding -- the SEGV does not cause the undetermined state, it is a symptom of it. So the problem is (generally) that the program is in an illegal, unrecoverable state WELL BEFORE the SIGSEGV occurs, and recovering from the SIGSEGV won't change that.
When exactly does a segmentation fault happen (= when is SIGSEGV sent)?
The only standard way in which a SIGSEGV occurs is with the call raise(SIGSEGV);. If this is the source of a SIGSEGV, then it is obviously recoverable by using longjump. But this is a trivial case that never happens in reality. There are platform-specific ways of doing things that might result in well-defined SEGVs (eg, using mprotect on a POSIX system), and these SEGVs might be recoverable (but will likely require platform specific recovery). However, the danger of undefined-behavior related SEGV generally means that the signal handler will very carefully check the (platform dependent) information that comes along with the signal to make sure it is something that is expected.
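For that trivial raise() case, here is a minimal POSIX sketch of the longjmp-style recovery (safe here only because no actual memory access went wrong):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void handler(int sig)
{
    (void)sig;
    siglongjmp(env, 1);   /* jump back past the "faulting" point */
}

int main(void)
{
    signal(SIGSEGV, handler);

    if (sigsetjmp(env, 1) == 0) {
        raise(SIGSEGV);   /* synthetic SIGSEGV: no memory was corrupted */
        puts("never reached");
    } else {
        puts("recovered from the raised SIGSEGV");
    }
    return 0;
}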
Why is the process in an undefined behavior state after that point?
It was (generally) in undefined behavior state before that point; it just wasn't noticed. That's the big problem with Undefined Behavior in both C and C++ -- there's no specific behavior associated with it, so it might not be noticed right away.
Why does this solution avoid that unrecoverable state? Does it even?
It does not; it just goes back to some earlier point, but doesn't do anything to undo or even identify the undefined behavior that caused the problem.
A segfault happens when your program tries to dereference a bad pointer. (See below for a more technical version of that, and other things that can segfault.) At that point, your program has already tripped over a bug that led to the pointer being bad; the attempt to deref it is often not the actual bug.
Unless you intentionally do some things that can segfault, and intend to catch and handle those cases (see section below), you won't know what got messed up by a bug in your program (or a cosmic ray flipping a bit) before a bad access actually faulted. (And this generally requires writing in asm, or running code you JITed yourself, not C or C++.)
C and C++ don't define the behaviour of programs that cause segmentation faults, so compilers don't generate machine code that anticipates attempted recovery. Even in a hand-written asm program, it wouldn't make sense to try unless you expected certain kinds of segfaults: there's no sane way to truly recover, and at most you should print an error message before exiting.
If you mmap some new memory at whatever address the access was trying to reach, or mprotect it from read-only to read+write (in a SIGSEGV handler), that can let the faulting instruction execute, but that's very unlikely to let execution resume usefully. Most read-only memory is read-only for a reason, and letting something write to it won't be helpful. And an attempt to read something through a pointer probably needed to get some specific data that's actually somewhere else (or to not be reading at all, because there's nothing to read). So mapping a new page of zeros at that address will let execution continue, but not useful correct execution. The same goes for modifying the main thread's instruction pointer in a SIGSEGV handler so that it resumes after the faulting instruction: then the load or store simply won't have happened, leaving whatever garbage was previously in the register (for a load), with similar results for a CISC add reg, [mem] or the like.
(The example you linked of catching SIGSEGV depends on the compiler generating machine code in the obvious way, and the setjmp/longjmp depends on knowing which code is going to segfault, and that it happened without first overwriting some valid memory, e.g. the stdout data structures that printf depends on, before getting to an unmapped page, as could happen with a loop or memcpy.)
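By contrast, a deliberately induced, well-defined SEGV of the mprotect kind mentioned above might look like the following sketch on Linux (assuming 4 KiB pages; note that calling mprotect from a signal handler is not formally async-signal-safe, which is part of why real implementations check everything so carefully):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE 4096                 /* assumed page size */

static char *page;                /* the one region we EXPECT to fault */

static void handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)info->si_addr;
    if (addr < page || addr >= page + PAGE)
        _exit(1);                 /* unexpected fault: a real bug, bail out */
    /* Make the page writable so the faulting instruction succeeds on retry. */
    mprotect(page, PAGE, PROT_READ | PROT_WRITE);
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    page = mmap(NULL, PAGE, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    *(volatile int *)page = 42;   /* faults once, handler unprotects, retried */
    printf("recovered: %d\n", *(volatile int *)page);
    return 0;
}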
Expected SIGSEGVs, for example a JIT sandbox
A JIT for a language like Java or Javascript (which don't have undefined behaviour) needs to handle null-pointer dereferences in a well-defined way, by (Java) throwing a NullPointerException in the guest machine.
Machine code implementing the logic of a Java program (created by a JIT compiler as part of a JVM) would need to check every reference at least once before using it, in any case where it couldn't prove at JIT-compile time that the reference was non-null, if it wanted to avoid ever having the JITed code fault.
But that's expensive, so a JIT may eliminate some null-pointer checks by allowing faults to happen in the guest asm it generates, even though such a fault will first trap to the OS, and only then to the JVM's SIGSEGV handler.
If the JVM is careful in how it lays out the asm instructions it's generating, so that any possible null-pointer deref will happen at the right time wrt. side-effects on other data, and only on paths of execution where it should happen (see supercat's answer for an example), then this is valid. The JVM will have to catch SIGSEGV and longjmp or whatever out of the signal handler, to code that delivers a NullPointerException to the guest.
But the crucial part here is that the JVM is assuming its own code is bug-free, so the only state that's potentially "corrupt" is the guest's actual state, not the JVM's data about the guest. This means the JVM is able to process an exception happening in the guest without depending on data that's probably corrupt.
The guest itself probably can't do much, though, if it wasn't expecting a NullPointerException and thus doesn't specifically know how to repair the situation. It probably shouldn't do much more than print an error message and exit or restart itself. (Pretty much what a normal ahead-of-time-compiled C++ program is limited to.)
Of course the JVM needs to check the fault address of the SIGSEGV and find out exactly which guest code it was in, to know where to deliver the NullPointerException. (Which catch block, if any.) And if the fault address wasn't in JITed guest code at all, then the JVM is just like any other ahead-of-time-compiled C/C++ program that segfaulted, and shouldn't do much more than print an error message and exit. (Or raise(SIGABRT) to trigger a core dump.)
Being a JIT JVM doesn't make it any easier to recover from unexpected segfaults due to bugs in your own logic. The key thing is that there's a sandboxed guest which you're already making sure can't mess up the main program, and whose faults aren't unexpected for the host JVM. (You can't allow "managed" code in the guest to have fully wild pointers that could be pointing anywhere, e.g. at guest code, but that's normally fine. You can still have null pointers, using a representation that does in practice actually fault if hardware tries to deref it; that doesn't let the guest write or read the host's state.)
For more about this, see Why are segfaults called faults (and not aborts) if they are not recoverable? for an asm-level view of segfaults. And links to JIT techniques that let guest code page-fault instead of doing runtime checks:
Effective Null Pointer Check Elimination Utilizing Hardware Trap, a research paper on this for Java from three IBM scientists.
SableVM: 6.2.4 Hardware Support on Various Architectures about NULL pointer checks
A further trick is to put the end of an array at the end of a page (followed by a large-enough unmapped region), so bounds-checking on every access is done for free by the hardware. If you can statically prove the index is always positive, and that it can't be larger than 32 bit, you're all set.
Implicit Java Array Bounds Checking on 64-bit Architectures. They talk about what to do when array size isn't a multiple of the page size, and other caveats.
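As a rough illustration of that end-of-page trick, here is a hedged user-space sketch in plain C (assuming 4 KiB pages and POSIX mmap/mprotect; a real JIT would arrange this in the code it generates):

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
    /* Two adjacent pages; the second one is made inaccessible. */
    char *base = mmap(NULL, 2 * 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    mprotect(base + 4096, 4096, PROT_NONE);

    /* Place a 64-element int array flush against the guard page. */
    int *arr = (int *)(base + 4096 - 64 * sizeof(int));

    arr[0]  = 1;   /* fine */
    arr[63] = 2;   /* fine: last valid element */
    arr[64] = 3;   /* lands on the PROT_NONE page: the hardware faults,
                      so the bounds check costs nothing on valid accesses */
    return 0;
}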
Background: what are segfaults
The usual reason for the OS delivering SIGSEGV is that your process triggered a page fault the OS finds to be "invalid". (I.e. it's your fault, not the OS's problem, so it can't fix it by paging in data that was swapped out to disk (a hard page fault), or by copy-on-write or zeroing a new anonymous page on first access (a soft page fault), and then updating the hardware page tables for that virtual page to match what your process logically has mapped.)
The page-fault handler can't repair the situation because user-space hasn't asked the OS for any memory to be mapped at that virtual address. If the kernel did just resume user-space without doing anything to the page table, the same instruction would just fault again, so instead it delivers a SIGSEGV. The default action for that signal is to kill the process, but if user-space has installed a signal handler it can catch it.
Other reasons include (on Linux) trying to run a privileged instruction in user-space (e.g. an x86 #GP "General Protection Fault" hardware exception), or on x86 Linux a misaligned 16-byte SSE load or store (again a #GP exception). This can happen with manually-vectorized code using _mm_load_si128 instead of loadu, or even as a result of auto-vectorization in a program with undefined behaviour: Why does unaligned access to mmap'ed memory sometimes segfault on AMD64? (Some other OSes, e.g. MacOS / Darwin, deliver SIGBUS for misaligned SSE.)
Segfaults usually only happen after your program encountered a bug
So your program state is already messed up, that's why there was for example a NULL pointer where you expected one to be non-NULL, or otherwise invalid. (e.g. some forms of use-after free, or a pointer overwritten with some bits that don't represent a valid pointer.)
If you're lucky it will segfault and fail early and noisily, as close as possible to the actual bug; if you're unlucky (e.g. corrupting malloc bookkeeping info) you won't actually segfault until long after the buggy code executed.
The thing you have to understand about segmentation faults is that they are not a problem. They are an example of the Lord's near-infinite mercy (according to an old professor I had in college). A segmentation fault is a sign that something is very wrong, and your program thought it was a good idea to access memory where there was no memory to be had. That access is not in itself the problem; the problem came at some indeterminate time before, when something went wrong, that eventually caused your program to think that this access was a good idea. Accessing non-existent memory is just a symptom at this point, but (and this is where the Lord's mercy comes into it) it's an easily-detected symptom. It could be much worse; it could be accessing memory where there is memory to be had, just, the wrong memory. The OS can't save you from that.
The OS has no way to figure out what caused your program to believe something so absurd, and the only thing it can do is shut things down, before it does something else insane in a way the OS can't detect so easily. Usually, most OSes also provide a core dump (a saved copy of the program's memory), which could in theory be used to figure out what the program thought it was doing. This isn't really straightforward for any non-trivial program, but that's why the OS does it, just in case.
While your question asks specifically about segmentation faults, the real question is:
If a software or hardware component is commanded to do something nonsensical or even impossible, what should it do? Do nothing at all? Guess what actually needs to be done and do that? Or use some mechanism (such as "throwing an exception") to halt the higher-level computation which issued the nonsensical command?
The vast weight of experience gathered by many engineers, over many years, agrees that the best answer is halting the overall computation, and producing diagnostic information which may help someone figure out what is wrong.
Aside from illegal access to protected or nonexistent memory, other examples of 'nonsensical commands' include telling a CPU to divide an integer by zero or to execute junk bytes which do not decode to any valid instruction. If a programming language with run-time type checking is used, trying to invoke any operation which is not defined for the data types involved is another example.
But why is it better to force a program which tries to divide by zero to crash? Nobody wants their programs to crash. Couldn't we define division-by-zero to equal some number, such as zero, or 73? And couldn't we create CPUs which would skip over invalid instructions without faulting? Maybe our CPUs could also return some special value, like -1, for any read from a protected or unmapped memory address. And they could just ignore writes to protected addresses. No more segfaults! Whee!
Certainly, all those things could be done, but it wouldn't really gain anything. Here's the point: While nobody wants their programs to crash, not crashing does not mean success. People write and run computer programs to do something, not just to "not crash". If a program is buggy enough to read or write random memory addresses or attempt to divide by zero, the chances are very low that it will do what you actually want, even if it is allowed to continue running. On the other hand, if the program is not halted when it attempts crazy things, it may end up doing something that you do not want, such as corrupting or destroying your data.
Historically, some programming languages have been designed to always "just do something" in response to nonsensical commands, rather than raising a fatal error. This was done in a misguided attempt to be more friendly to novice programmers, but it always ended badly. The same would be true of your suggestion that operating systems should never crash programs due to segfaults.
At the machine-code level, many platforms would allow programs that are "expecting" segmentation faults in certain circumstances to adjust the memory configuration and resume execution. This may be useful for implementing things like stack monitoring. If one needs to determine the maximum amount of stack that was ever used by an application, one could set the stack segment to allow access only to a small amount of stack, and then respond to segmentation faults by adjusting the bounds of the stack segment and resuming code execution.
At the C language level, however, supporting such semantics would greatly impede optimization. If one were to write something like:
void test(float *p, int *q)
{
    float temp = *p;
    if (*q += 1)
        function2(temp);
}
a compiler might regard the read of *p and the read-modify-write sequence on *q as being unsequenced relative to each other, and generate code that only reads *p in cases where the initial value of *q wasn't -1. This wouldn't affect program behavior at all if p were valid, but if p were invalid this change could result in the segmentation fault from the access to *p occurring after *q was incremented, even though the access that triggered the fault was performed before the increment.
For a language to efficiently and meaningfully support recoverable segmentation faults, it would have to document the range of permissible and non-permissible optimizations in much more detail than the C Standard has ever done, and I see no reason to expect future versions of the C Standard to include such detail.
It is recoverable, but it is usually a bad idea.
For example, the Microsoft C++ compiler has an option to turn segfaults into exceptions.
You can see the Microsoft SEH documentation, but even they do not suggest using it.
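For reference, the mechanism looks roughly like this with Microsoft's non-standard __try/__except structured exception handling (a sketch; as noted, even Microsoft discourages relying on it):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    volatile int *p = NULL;
    __try {
        *p = 42;    /* access violation */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        puts("caught the access violation via SEH");
    }
    return 0;
}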
Honestly, even if I could tell the computer to ignore a segmentation fault, I would not take that option.
Usually the segmentation fault occurs because you are dereferencing either a null pointer or a deallocated pointer. When dereferencing null, the behavior is completely undefined. When dereferencing a deallocated pointer, the data you are pulling could be the old value, random junk, or in the worst case values from another program. In either case I want the program to segfault rather than continue and report junk calculations.
Segmentation faults were a constant thorn in my side for many years. I worked primarily on embedded platforms and since we were running on bare metal, there was no file system on which to record a core dump. The system just locked up and died, perhaps with a few parting characters out the serial port. One of the more enlightening moments from those years was when I realized that segmentation faults (and similar fatal errors) are a good thing. Experiencing one is not good, but having them in place as hard, unavoidable failure points is.
Faults like that aren't generated lightly. The hardware has already tried everything it can to recover, and the fault is the hardware's way of warning you that continuing is dangerous. So much so, in fact, that bringing the whole process/system crashing down is actually safer than continuing. Even in systems with protected/virtual memory, continuing execution after this sort of fault can destabilize the rest of the system.
If the moment of writing to protected memory can be caught
There are more ways to get into a segfault than just writing to protected memory. You can also get there by e.g., reading from a pointer with an invalid value. That's either caused by previous memory corruption (the damage has already been done, so it's too late to recover) or by a lack of error checking code (should have been caught by your static analyzer and/or tests).
Why is it not recoverable?
You don't necessarily know what caused the problem or what the extent of it is, so you can't know how to recover from it. If your memory has been corrupted, you can't trust anything. The cases where this would be recoverable are cases where you could have detected the problem ahead of time, so using an exception isn't the right way to solve the problem.
Note that some of these types of problems are recoverable in other languages like C#. Those languages typically have an extra runtime layer that's checking pointer addresses ahead of time and throwing exceptions before the hardware generates a fault. You don't have any of that with low-level languages like C, though.
Why does this solution avoid that unrecoverable state? Does it even?
That technique "works", but only in contrived, simplistic use cases. Continuing to execute is not the same as recovering. The system in question is still in the faulted state with unknown memory corruption, you're just choosing to continue blazing onward instead of heeding the hardware's advice to take the problem seriously. There's no telling what your program would do at that point. A program that continues to execute after potential memory corruption would be an early Christmas gift for an attacker.
Even if there wasn't any memory corruption, that solution breaks in many different common use cases. You can't enter a second protected block of code (such as inside a helper function) while already inside of one. Any segfault that happens outside a protected block of code will result in a jump to an unpredictable point in your code. That means every line of code needs to be in a protective block and your code will be obnoxious to follow. You can't call external library code, since that code doesn't use this technique and won't set the setjmp anchor. Your "handler" block can't call library functions or do anything involving pointers or you risk needing endlessly-nested blocks. Some things like automatic variables can be in an unpredictable state after a longjmp.
One thing missing here, about mission-critical systems (or any system): In large systems in production, one can't know where, or even if, the segfaults are, so the recommendation to fix the bug and not the symptom does not hold.
I don't agree with this thought. Most segmentation faults that I've seen are caused by dereferencing pointers (directly or indirectly) without validating them first. Checking pointers before you use them will tell you where the segfaults are. Split up complex statements like my_array[ptr1->offsets[ptr2->index]] into multiple statements so that you can check the intermediate pointers as well. Static analyzers like Coverity are good about finding code paths where pointers are used without being validated. That won't protect you against segfaults caused by outright memory corruption, but there's no way to recover from that situation in any case.
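For instance, a sketch of that splitting, with hypothetical struct definitions just to make it self-contained:

/* Hypothetical types, only to make the sketch compile. */
struct bar { int index; };
struct foo { int offsets[16]; };

/* Instead of the one-liner my_array[ptr1->offsets[ptr2->index]],
   validate each intermediate pointer and index on its own line. */
int lookup(const struct foo *ptr1, const struct bar *ptr2,
           const int *my_array, int *out)
{
    if (ptr1 == NULL || ptr2 == NULL || my_array == NULL || out == NULL)
        return -1;                        /* explicit error, no segfault */

    int index = ptr2->index;
    if (index < 0 || index >= 16)         /* bounds-check the index too */
        return -1;

    *out = my_array[ptr1->offsets[index]];   /* offset assumed in range */
    return 0;
}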
In short-term practice, I think my errors are only access to null and nothing more.
Good news! This whole discussion is moot. Pointers and array indices can (and should!) be validated before they are used, and checking ahead of time is far less code than waiting for a problem to happen and trying to recover.
This might not be a complete or fully accurate answer, but it doesn't fit into a comment.
So a SIGSEGV can occur when you try to access memory in a way that you should not (like writing to it when it is read-only or reading from an address range that is not mapped). Such an error alone might be recoverable if you know enough about the environment.
But how do you determine why that invalid access happened in the first place?
In one comment to another answer you say:
In short-term practice, I think my errors are only access to null and nothing more.
No application is error-free, so if a null pointer access can happen, why do you assume your application does not, e.g., also have a use-after-free or an out-of-bounds access to "valid" memory locations somewhere - one that doesn't immediately result in an error or a SIGSEGV?
A use-after-free or out-of-bounds access could also modify a pointer so that it points to an invalid location or becomes a nullptr, but it could also have changed other memory locations at the same time. If you now assume that the pointer was merely uninitialized, and your error handling only considers that case, you continue with an application whose state matches neither your expectations nor the ones the compiler had when generating the code.
In that case, the application will - in the best case - crash shortly after the "recovery"; in the worst case, some variables will hold faulty values and it will keep running with them. That oversight could be more harmful for a critical application than restarting it.
If, however, you know that a certain action might under certain circumstances result in a SIGSEGV - e.g. you know the memory address is valid, but the device the memory is mapped to is not fully reliable and might cause a SIGSEGV - then recovering from the SIGSEGV might be a valid approach.
Depends what you mean by recovery. The only sensible recovery in case the OS sends you the SEGV signal is to clean up your program and spin another one from the start, hopefully not hitting the same pitfall.
You have no way to know how much your memory got corrupted before the OS called an end to the chaos. Chances are if you try to continue from the next instruction or some arbitrary recovery point, your program will misbehave further.
The thing that it seems many of the upvoted responses are forgetting is that there are applications in which segfaults can happen in production without a programming error. And where high availability, decades of lifetime and zero maintenance are expected. In those environments, what's typically done is that the program is restarted if it crashes for any reason, segfault included. Additionally, a watchdog functionality is used to ensure that the program does not get stuck in an unplanned infinite loop.
Think of all the embedded devices you rely on that have no reset button. They rely on imperfect hardware, because no hardware is perfect. The software has to deal with hardware imperfections. In other words, the software must be robust against hardware misbehavior.
Embedded isn't the only area where this is crucial. Think of the amount of servers handling just StackOverflow. The chance of ionizing radiation causing a single event upset is tiny if you look at any one operation at ground level, but this probability becomes non-trivial if you look at a large number of computers running 24/7. ECC memory helps against this, but not everything can be protected.
Your program is in an undetermined state because C++ can't define that state. The bugs which cause these errors are undefined behavior. This is the nastiest class of bad behaviors.
The key issue with recovering from these things is that, being undefined behavior, the compiler is not obliged to support them in any way. In particular, it may have done optimizations which, if only defined behaviors occur, provably have the same effect. The compiler is completely within its rights to reorder lines, skip lines, and do all sorts of fancy tricks to make your code run faster. All it has to do is prove that the effect is the same according to the C++ virtual machine model.
When an undefined behavior occurs, all that goes out the window. You may get into difficult situations where the compiler has reordered operations and now can't get you to a state which you could arrive at by executing your program for a period of time. Remember that assignments erase the old value. If an assignment got moved up before the line that segfaulted, you can't recover the old value to "unwind" the optimization.
The behavior of this reordered code was indeed identical to the original, as long as no undefined behavior occurred. Once the undefined behavior occurred, it exposes the fact that the reorder occurred and could change results.
The tradeoff here is speed. Because the compiler isn't walking on eggshells, terrified of some unspecified OS behavior, it can do a better job of optimizing your code.
Now, because undefined behavior is always undefined behavior, no matter how much you wish it wasn't, there cannot be a standard C++ way to handle this case. The C++ language can never introduce a way to resolve this, at least short of making it defined behavior and paying the costs for that. On a given platform and compiler, you may be able to determine that this undefined behavior is actually defined by your compiler, typically in the form of extensions. Indeed, the answer I linked earlier shows a way to turn a signal into an exception, which does indeed work on at least one platform/compiler pair.
But it always has to be on the fringe like this. The C++ developers value the speed of optimized code over defining this undefined behavior.
Since you use the term SIGSEGV, I believe you are using a system with an operating system and that the problem occurs in a user-land application.
When the application gets the SIGSEGV, it is a symptom of something having gone wrong before the memory access. Sometimes it can be pinpointed to exactly where things went wrong; generally it cannot. So something went wrong, and a while later that wrong was the cause of a SIGSEGV. If the error happened "in the operating system", my reaction would be to shut down the system - with very specific exceptions, such as when the OS has a dedicated function to check for a memory card or IO card being installed (or perhaps removed).
In the user land I would probably divide my application into several processes. One or more processes would do the actual work. Another process would monitor the worker process(es) and could discover when one of them fails. A SIGSEGV in a worker process could then be discovered by the monitor process, which could restart the worker process or do a fail-over or whatever is deemed appropriate in the specific case. This would not recover the actual memory access, but might recover the application function.
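A minimal POSIX sketch of that monitor/worker arrangement (do_work is a hypothetical stand-in; a real system would add rate limiting, logging and a fail-over policy):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void do_work(void)
{
    /* ... the actual work; may contain the bug that segfaults ... */
    exit(0);
}

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return 1;               /* fork failed */
        if (pid == 0)
            do_work();              /* child: the worker process */

        int status;
        waitpid(pid, &status, 0);   /* parent: the monitor process */
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV) {
            fprintf(stderr, "worker died with SIGSEGV, restarting\n");
            continue;               /* or fail over, notify, etc. */
        }
        break;                      /* worker finished normally */
    }
    return 0;
}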
You might look into the Erlang philosophy of "fail early" and the OTP library for further inspiration about this way of doing things. It does not handle SIGSEGV though, but several other types of problems.
Your program cannot recover from a segmentation fault because it has no idea what state anything is in.
Consider this analogy.
You have a nice house in Maine with a pretty front garden and a stepping stone path running across it. For whatever reason, you've chosen to connect each stone to the next with a ribbon (a.k.a. you've made them into a singly-linked list).
One morning, coming out of the house, you step onto the first stone, then follow the ribbon to the second, then again to the third but, when you step onto the fourth stone, you suddenly find yourself in Albuquerque.
Now tell us - how do you recover from that?
Your program has the same quandary.
Something went spectacularly wrong but your program has no idea what it was, or what caused it or how to do anything useful about it.
Hence: it crashes and burns.
It is absolutely possible, but this would duplicate existing functionality in a less stable way.
The kernel will already receive a page fault exception when a program accesses an address that is not yet backed by physical memory, and will then assign and potentially initialize a page according to the existing mappings, and then retry the offending instruction.
A hypothetical SEGV handler would do the exact same thing: decide what should be mapped at this address, create the mapping, and retry the instruction -- but with the difference that if the handler itself incurred another SEGV, we could go into an endless loop, and detection would be difficult since that decision would need to look into the code -- so we'd be creating a halting problem here.
The kernel already allocates memory pages lazily, allows file contents to be mapped and supports shared mappings with copy-on-write semantics, so there isn't much to gain from this mechanism.
So far, answers and comments have responded through the lens of a higher-level programming model, which fundamentally limits the creativity and potential of the programmer for their convenience. Said models define their own semantics and do not handle segmentation faults for their own reasons, whether simplicity, efficiency or anything else. From that perspective, a segfault is an unusual case that is indicative of programmer error, whether the userspace programmer or the programmer of the language's implementation. The question, however, is not about whether or not it's a good idea, nor is it asking for any of your thoughts on the matter.
In reality, what you say is correct: segmentation faults are recoverable. You can, as any regular signal, attach a handler for it with sigaction. And, yes, your program can most certainly be made in such a way that handling segmentation faults is a normal feature.
One obstacle is that a segmentation fault is a fault, not an exception, which is different in regards to where control flow returns to after the fault has been handled. Specifically, a fault handler returns to the same faulting instruction, which will continue to fault indefinitely. This isn't a real problem, though, as it can be skipped manually, you may return to a specified location, you may attempt to patch the faulting instruction into becoming correct or you may map said memory into existence if you trust the faulting code. With proper knowledge of the machine, nothing is stopping you, not even those spec-wielding knights.

What happens when the program returns without freeing the allocated memory? [duplicate]

We are all taught that you MUST free every pointer that is allocated. I'm a bit curious, though, about the real cost of not freeing memory. In some obvious cases, like when malloc() is called inside a loop or part of a thread execution, it's very important to free so there are no memory leaks. But consider the following two examples:
First, if I have code that's something like this:
#include <stdlib.h>

int main()
{
    char *a = malloc(1024);
    /* Do some arbitrary stuff with 'a' (no alloc functions) */
    return 0;
}
What's the real result here? My thinking is that the process dies and then the heap space is gone anyway so there's no harm in missing the call to free (however, I do recognize the importance of having it anyway for closure, maintainability, and good practice). Am I right in this thinking?
Second, let's say I have a program that acts a bit like a shell. Users can declare variables like aaa = 123 and those are stored in some dynamic data structure for later use. Clearly, it seems obvious that you'd use some solution that calls some *alloc function (hashmap, linked list, something like that). For this kind of program, it doesn't make sense to ever free after calling malloc because these variables must be present at all times during the program's execution and there's no good way (that I can see) to implement this with statically allocated space. Is it bad design to have a bunch of memory that's allocated but only freed as part of the process ending? If so, what's the alternative?
Just about every modern operating system will recover all the allocated memory space after a program exits. The only exception I can think of might be something like Palm OS where the program's static storage and runtime memory are pretty much the same thing, so not freeing might cause the program to take up more storage. (I'm only speculating here.)
So generally, there's no harm in it, except the runtime cost of having more storage than you need. Certainly in the example you give, you want to keep the memory for a variable that might be used until it's cleared.
However, it's considered good style to free memory as soon as you don't need it any more, and to free anything you still have around on program exit. It's more of an exercise in knowing what memory you're using, and thinking about whether you still need it. If you don't keep track, you might have memory leaks.
On the other hand, the similar admonition to close your files on exit has a much more concrete result - if you don't, the data you wrote to them might not get flushed, or if they're temp files, they might not get deleted when you're done. Also, database handles should have their transactions committed and then be closed when you're done with them. Similarly, if you're using an object-oriented language like C++ or Objective-C, not freeing an object when you're done with it will mean the destructor never gets called, and any resources the class is responsible for might not get cleaned up.
Yes you are right, your example doesn't do any harm (at least not on most modern operating systems). All the memory allocated by your process will be recovered by the operating system once the process exits.
Source: Allocation and GC Myths (PostScript alert!)
Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.

In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
That said, you should really try to avoid all memory leaks!
Second question: your design is OK. If you need to store something until your application exits, then it's fine to do this with dynamic memory allocation. If you don't know the required size upfront, you can't use statically allocated memory.
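A sketch of that design for the shell-like example (all names made up): each declared variable becomes a heap node that intentionally lives until the process exits, with no free() anywhere:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One node per declared variable; the list lives for the whole process. */
struct var {
    char *name;
    char *value;
    struct var *next;
};

static struct var *vars;

static void set_var(const char *name, const char *value)
{
    struct var *v = malloc(sizeof *v);
    v->name  = strdup(name);
    v->value = strdup(value);
    v->next  = vars;
    vars = v;           /* never freed: the OS reclaims everything at exit */
}

int main(void)
{
    set_var("aaa", "123");
    printf("%s = %s\n", vars->name, vars->value);
    return 0;           /* all nodes intentionally still allocated */
}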
=== What about future proofing and code reuse? ===
If you don't write the code to free the objects, then you are limiting the code to only being safe to use when you can depend on the memory being freed by the process being closed - i.e. small one-time-use or "throw-away"[1] projects where you know when the process will end.
If you do write the code that free()s all your dynamically allocated memory, then you are future proofing the code and letting others use it in a larger project.
[1] Regarding "throw-away" projects: code used in "throw-away" projects has a way of not being thrown away. Next thing you know, ten years have passed and your "throw-away" code is still being used.
I heard a story about some guy who wrote some code just for fun to make his hardware work better. He said "just a hobby, won't be big and professional". Years later lots of people are using his "hobby" code.
You are correct, no harm is done and it's faster to just exit
There are various reasons for this:
All desktop and server environments simply release the entire memory space on exit(). They are unaware of program-internal data structures such as heaps.
Almost all free() implementations do not ever return memory to the operating system anyway.
More importantly, it's a waste of time when done right before exit(). At exit, memory pages and swap space are simply released. By contrast, a series of free() calls will burn CPU time and can result in disk paging operations, cache misses, and cache evictions.
Regarding the possibility of future code reuse justifying the certainty of pointless ops: that's a consideration, but it's arguably not the Agile way. YAGNI!
I completely disagree with everyone who says OP is correct or there is no harm.
Everyone is talking about modern and/or legacy OSes.
But what if I'm in an environment where I simply have no OS?
Where there isn't anything?
Imagine now that you are using thread-style interrupts and allocating memory in them.
The C standard, ISO/IEC 9899, states the lifetime of allocated memory as follows:
7.20.3 Memory management functions

1 The order and contiguity of storage allocated by successive calls to the calloc, malloc, and realloc functions is unspecified. The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated). The lifetime of an allocated object extends from the allocation until the deallocation. [...]
So it is not guaranteed that the environment does the freeing job for you. Otherwise the last sentence would have to read: "or until the program terminates."
So in other words:
Not freeing memory is not just bad practice; it produces non-portable code that does not conform to the C standard.
Such code can at best be seen as "correct, if the following: [...], is supported by the environment".
But in cases where you have no OS at all, no one is doing the job for you. (I know you generally don't allocate and reallocate memory on embedded systems, but there are cases where you may want to.)
So, speaking of plain C in general (which is how the OP's question is tagged), not freeing simply produces erroneous and non-portable code.
I typically free every allocated block once I'm sure that I'm done with it. Today, my program's entry point might be main(int argc, char *argv[]), but tomorrow it might be foo_entry_point(char **args, struct foo *f) and typed as a function pointer.
So, if that happens, I now have a leak.
Regarding your second question, if my program took input like a=5, I would allocate space for a, or re-allocate the same space on a subsequent a="foo". This would remain allocated until:
The user typed 'unset a'
My cleanup function was entered, either servicing a signal or the user typed 'quit'
I cannot think of any modern OS that does not reclaim memory after a process exits. Then again, free() is cheap, so why not clean up? As others have said, tools like valgrind are great for spotting leaks that you really do need to worry about. Even though the blocks in your example would be labeled as "still reachable", they are just extra noise in the output when you're trying to ensure you have no leaks.
Another myth is "If it's in main(), I don't have to free it". This is incorrect. Consider the following:
char *t;
int i;

for (i = 0; i < 255; i++) {
    t = strdup(foo->name);         /* each copy replaces the last... */
    let_strtok_eat_away_at(t);     /* ...and the previous one is never freed */
}
If that came prior to forking / daemonizing (and in theory running forever), your program has just leaked the memory behind t 255 times.
A good, well written program should always clean up after itself. Free all memory, flush all files, close all descriptors, unlink all temporary files, etc. This cleanup function should be reached upon normal termination, or upon receiving various kinds of fatal signals, unless you want to leave some files laying around so you can detect a crash and resume.
Really, be kind to the poor soul who has to maintain your stuff when you move on to other things .. hand it to them 'valgrind clean' :)
It is completely fine to leave memory unfreed when you exit; malloc() allocates the memory from the memory area called "the heap", and the complete heap of a process is freed when the process exits.
That being said, one reason why people still insist that it is good to free everything before exiting is that memory debuggers (e.g. valgrind on Linux) detect the unfreed blocks as memory leaks, and if you have also "real" memory leaks, it becomes more difficult to spot them if you also get "fake" results at the end.
This code will usually work alright, but consider the problem of code reuse.
You may have written some code snippet which doesn't free allocated memory, and it is run in such a way that the memory is then automatically reclaimed. Seems all right.
Then someone else copies your snippet into his project in such a way that it is executed one thousand times per second. That person now has a huge memory leak in his program. Not very good in general, usually fatal for a server application.
Code reuse is typical in enterprises. Usually the company owns all the code its employees produce and every department may reuse whatever the company owns. So by writing such "innocently-looking" code you cause potential headache to other people. This may get you fired.
What's the real result here?
Your program leaked the memory. Depending on your OS, it may have been recovered.
Most modern desktop operating systems do recover leaked memory at process termination, making it sadly common to ignore the problem (as can be seen by many other answers here.)
But you are relying on a safety feature that is not part of the language, and one you should not rely upon. Next time, your code might run on a system where this behaviour does result in a "hard" memory leak.
Your code might end up running in kernel mode, or on vintage / embedded operating systems which do not employ memory protection as a tradeoff. (MMUs take up die space, memory protection costs additional CPU cycles, and it is not too much to ask from a programmer to clean up after himself).
You can use and re-use memory (and other resources) any way you like, but make sure you deallocated all resources before exiting.
If you're using the memory you've allocated, then you're not doing anything wrong. It becomes a problem when you write functions (other than main) that allocate memory without freeing it, and without making it available to the rest of your program. Then your program continues running with that memory allocated to it, but no way of using it. Your program and other running programs are deprived of that memory.
Edit: It's not 100% accurate to say that other running programs are deprived of that memory. The operating system can always let them use it at the expense of swapping your program out to virtual memory (some hand-waving here). The point is, though, that if your program frees memory that it isn't using, then a virtual memory swap is less likely to be necessary.
There's actually a section in the OSTEP online textbook for an undergraduate course in operating systems which discusses exactly your question.
The relevant section is "Forgetting To Free Memory" in the Memory API chapter on page 6 which gives the following explanation:
In some cases, it may seem like not calling free() is reasonable. For example, your program is short-lived, and will soon exit; in this case, when the process dies, the OS will clean up all of its allocated pages and thus no memory leak will take place per se. While this certainly "works" (see the aside on page 7), it is probably a bad habit to develop, so be wary of choosing such a strategy.
This excerpt is in the context of introducing the concept of virtual memory. Basically at this point in the book, the authors explain that one of the goals of an operating system is to "virtualize memory," that is, to let every program believe that it has access to a very large memory address space.
Behind the scenes, the operating system will translate "virtual addresses" the user sees to actual addresses pointing to physical memory.
However, sharing resources such as physical memory requires the operating system to keep track of what processes are using it. So if a process terminates, then it is within the capabilities and the design goals of the operating system to reclaim the process's memory so that it can redistribute and share the memory with other processes.
EDIT: The aside mentioned in the excerpt is copied below.
ASIDE: WHY NO MEMORY IS LEAKED ONCE YOUR PROCESS EXITS

When you write a short-lived program, you might allocate some space using malloc(). The program runs and is about to complete: is there need to call free() a bunch of times just before exiting? While it seems wrong not to, no memory will be "lost" in any real sense. The reason is simple: there are really two levels of memory management in the system. The first level of memory management is performed by the OS, which hands out memory to processes when they run, and takes it back when processes exit (or otherwise die). The second level of management is within each process, for example within the heap when you call malloc() and free(). Even if you fail to call free() (and thus leak memory in the heap), the operating system will reclaim all the memory of the process (including those pages for code, stack, and, as relevant here, heap) when the program is finished running. No matter what the state of your heap in your address space, the OS takes back all of those pages when the process dies, thus ensuring that no memory is lost despite the fact that you didn't free it.

Thus, for short-lived programs, leaking memory often does not cause any operational problems (though it may be considered poor form). When you write a long-running server (such as a web server or database management system, which never exit), leaked memory is a much bigger issue, and will eventually lead to a crash when the application runs out of memory. And of course, leaking memory is an even larger issue inside one particular program: the operating system itself. Showing us once again: those who write the kernel code have the toughest job of all...

from page 7 of the Memory API chapter of Operating Systems: Three Easy Pieces, by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, Arpaci-Dusseau Books, March 2015 (Version 0.90)
There's no real danger in not freeing your variables, but if you point a pointer at a new block of memory without freeing the block it previously pointed to, that first block becomes inaccessible yet still takes up space. This is what's called a memory leak, and if you do this with regularity then your process will start to consume more and more memory, taking away system resources from other processes.
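Concretely, a minimal sketch of that reassignment:

#include <stdlib.h>

int main(void)
{
    char *p = malloc(1024);   /* first block */
    p = malloc(2048);         /* the 1024-byte block is now unreachable: a leak */
    free(p);                  /* frees only the second block */
    return 0;
}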
If the process is short-lived you can often get away with doing this as all allocated memory is reclaimed by the operating system when the process completes, but I would advise getting in the habit of freeing all memory you have no further use for.
You are correct, memory is automatically freed when the process exits. Some people strive not to do extensive cleanup when the process is terminated, since it will all be relinquished to the operating system. However, while your program is running you should free unused memory. If you don't, you may eventually run out or cause excessive paging if your working set gets too big.
You are absolutely correct in that respect. In small trivial programs where a variable must exist until the death of the program, there is no real benefit to deallocating the memory.
In fact, I had once been involved in a project where each execution of the program was very complex but relatively short-lived, and the decision was to just keep memory allocated and not destabilize the project by making mistakes deallocating it.
That being said, in most programs this is not really an option, or it can lead you to run out of memory.
It depends on the scope of the project that you're working on. In the context of your question, and I mean just your question, it doesn't matter.
For a further explanation (optional), some scenarios I have noticed from this whole discussion are as follows:
(1) - If you're working in an embedded environment where you cannot rely on a mainstream OS to reclaim the memory for you, then you should free it, since unnoticed memory leaks can really crash the program.
(2) - If you're working on a personal project that you won't disclose to anyone else, then you can skip it (assuming it runs on a mainstream OS) or include it for "best practices" sake.
(3) - If you're working on a project and plan to have it open source, then you need to do more research into your audience and figure out whether freeing the memory would be the better choice.
(4) - If you have a large library and your audience consists only of mainstream OSes, then you don't need to free it, as their OS will do it for them. In the meantime, by not freeing, your library/program may make overall performance snappier, since the program does not have to tear down every data structure, prolonging the shutdown time (imagine a very slow, excruciating wait to shut down your computer before leaving the house...).
I can go on and on specifying which course to take, but it ultimately depends on what you want to achieve with your program. Freeing memory is considered good practice in some cases and not so much in some so it ultimately depends on the specific situation you're in and asking the right questions at the right time. Good luck!
If you're developing an application from scratch, you can make some educated choices about when to call free. Your example program is fine: it allocates memory, maybe you have it work for a few seconds, and then closes, freeing all the resources it claimed.
If you're writing anything else, though -- a server/long-running application, or a library to be used by someone else, you should expect to call free on everything you malloc.
Ignoring the pragmatic side for a second, it's much safer to follow the stricter approach, and force yourself to free everything you malloc. If you're not in the habit of watching for memory leaks whenever you code, you could easily spring a few leaks. So in other words, yes -- you can get away without it; please be careful, though.
If a program forgets to free a few megabytes before it exits, the operating system will free them. But if your program runs for weeks at a time and a loop inside it forgets to free a few bytes on each iteration, you will have a mighty memory leak that eats up all the available memory in your computer unless you reboot it on a regular basis. In other words: even small memory leaks can be bad if the program is used for a seriously big task, even if it originally wasn't designed for one.
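A hedged sketch of that failure mode (the work loop and the 16-byte "log entry" are hypothetical): harmless in a short run, fatal over weeks:

    #include <cstdlib>
    #include <cstring>

    int main() {
        for (;;) {                       // hypothetical long-running work loop
            char *entry = static_cast<char *>(std::malloc(16));
            if (entry == nullptr)
                return 1;                // the leak finally exhausted memory
            std::strcpy(entry, "log entry");
            // ... use entry ...
            // missing std::free(entry);  <- a few bytes leaked per iteration
        }
    }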
It depends on the OS environment the program is running in, as others have already noted, and for long-running processes, freeing memory and avoiding even very slow leaks is always important. But if the operating system reclaims everything when a process exits, as Unix has done probably forever, then you don't strictly need to free memory, nor close files (the kernel closes all open file descriptors when a process exits).
If your program allocates a lot of memory, it may even be beneficial to exit without 'hesitation'. I find that when I quit Firefox, it spends several minutes (!) paging in gigabytes of memory across many processes; I guess this is due to having to call destructors on C++ objects, and it is actually terrible. Some might argue that this is necessary to save state consistently, but in my opinion, long-running interactive programs like browsers, editors and design tools should ensure that any state information -- preferences, open windows/pages, documents, etc. -- is frequently written to permanent storage anyway, to avoid loss of work in case of a crash. Then this state-saving can be performed again quickly when the user elects to quit, and once it completes, the process should just exit immediately.
All memory allocated for the process will be marked unused by the OS and then reused, because memory allocation is done by user-space functions operating inside memory the OS reserved for that process.
Imagine the OS as a god, and memory as the material for creating a world for each process: the god takes some of the material and creates a world from it (that is, the OS reserves some memory and creates a process in it). No matter what the creatures in that world do, material that does not belong to their world is not affected. When the world expires, the OS, its god, can recycle the material that was allocated to it.
Modern OSes may differ in the details of how they release user-space memory, but reclaiming it is a basic duty of any OS.
I think that your two examples are actually only one: the free() would occur only at the end of the process, which, as you point out, is useless since the process is terminating.
In your second example, though, the only difference is that you allow an unbounded number of malloc() calls, which could lead to running out of memory. The only way to handle that situation is to check the return code of malloc() and act accordingly, as in the sketch below.
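A minimal sketch of checking malloc()'s return value and acting accordingly:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        void *block = std::malloc(1u << 20);  // request 1 MiB
        if (block == nullptr) {               // malloc signals failure with NULL
            std::fprintf(stderr, "out of memory\n");
            return EXIT_FAILURE;              // act accordingly: bail out cleanly
        }
        // ... use block ...
        std::free(block);
        return EXIT_SUCCESS;
    }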

What happens if you allocate memory in a process once it has crashed?

I have been working with Google Breakpad for crash reporting for some time. One of the things it emphasizes is that one should not allocate memory once a process has crashed -- it says doing so isn't 'safe'.
What exactly is the meaning of 'safe' here?
It's not safe because the standard library may be in a corrupt state, causing a second crash when you ask for more memory.
If you want to print something, do it without allocating memory (e.g. use local variables).
Once a process has crashed, the best you can hope for is learning what has happened before exiting the process for good. The "safety" is, therefore, reduced to your error reporting code not causing a crash of its own on top of the original crash. That is why your options at this point are limited: for example, trying to allocate memory is dangerous, because the original crash could have been caused by corrupted heap.
You really shouldn't be doing anything too complicated after a crash, lest you hang or crash again inside your crash-handling code. The two main reasons are locks and global variables. If your process crashed while holding a lock, and you need that lock for whatever you are doing after the crash, it'll either error out or, worse, hang. There is also a lot of state stored in global variables -- not just what's declared in your program, but plenty used by libraries and even the dynamic linker/loader. If any of it is bad, the result of calling a function that uses one of those globals is unpredictable.
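A sketch of what "don't allocate after a crash" looks like in practice, assuming a POSIX system; the message lives in a static buffer so the handler never touches the (possibly corrupted) heap, and the handler calls only async-signal-safe functions:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    // No malloc, no printf: both may take locks or walk corrupted state.
    static const char msg[] = "fatal signal caught, exiting\n";

    extern "C" void crash_handler(int) {
        write(STDERR_FILENO, msg, sizeof(msg) - 1); // write() is signal-safe
        _exit(1);                                   // skip destructors/atexit
    }

    int main() {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = crash_handler;
        sigaction(SIGSEGV, &sa, nullptr);

        volatile int *p = nullptr;
        *p = 42;                                    // deliberate crash for the demo
    }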

Dangers of stack overflow and segmentation fault in C++

I'm trying to understand how objects (variables, functions, structs, etc.) work in C++. In this case I see there are basically two ways of storing them: the stack and the heap. Accordingly, whenever heap storage is used it needs to be deallocated manually, but if the stack is used, then the deallocation is done automatically. So my question is about the kinds of problems that bad practice here might cause for the program itself or for the computer. For example:
1.- Let's suppose that I run a program containing a recursive solution that never terminates (infinite recursion). Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself (to the RAM, maybe, or to the OS)?
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? It might be that such memory can never be used again or something.
3.- What are the problems with getting a segmentation fault (on the heap)?
Any other dangers or cautions relevant to this are welcome.
Accordingly, whenever the stack storage is used it needs to be deallocated manually, but if the heap is used, then the deallocation is done automatically.
When you use the stack -- local variables in a function -- they are deallocated automatically when the function ends (returns).
When you allocate from the heap, the allocated memory remains "in use" until it is freed. If you don't free it, your program, if it runs for long enough and keeps allocating "stuff", will use all the memory available to it and eventually fail.
Note that a "stack fault" is almost impossible to recover from in an application, because the stack is no longer usable when it's full, and most operations to "recover from error" will involve using some stack memory. The processor typically has a special trap to recover from a stack fault, but that lands inside the operating system, and if the OS determines the application has run out of stack, it often shows no mercy at all: it just kills the application immediately.
1.- Let's suppose that I run a program containing a recursive solution that never terminates (infinite recursion). Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself (to the RAM, maybe, or to the OS)?
No, the computer itself is not harmed by this in and of itself. There may of course be data loss if your program wasn't saving something that the user was working on.
Unless the hardware is very badly designed, it's very hard to write code that causes any harm to the computer beyond loss of stored data (of course, if you write a program that fills the entire hard disk from the first sector to the last, your data will be overwritten with whatever your program fills the disk with -- which may well leave the machine unable to boot until you have re-installed an operating system). But RAM and processors don't get damaged by bad coding (fortunately, since most programmers make mistakes now and again).
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? It might be that such memory can never be used again or something.
Once the program finishes (and most programs that use "too much memory" do terminate in some way or another, at some point), the memory is handed back to the OS and can be used again.
Of course, how well the operating system and other applications handle "there is no memory at all available" varies a bit. The operating system itself is generally OK with it, but badly written drivers may well crash, and thus cause your system to reboot if you are unlucky. Applications are more prone to crashing when memory runs out, because allocations return NULL (zero) as the "returned address" when there is no memory available, and using address zero in a modern operating system will almost always lead to a "segmentation fault" or similar problem (see below for more on that).
But these are extreme cases; most systems are set up such that one application gobbling all available memory will itself fail before the rest of the system is impacted. Not always, though, and it's certainly not guaranteed that the application "causing" the problem is the first one to be killed if the OS kills applications simply because they "eat a lot of memory". Linux, for example, has an "out of memory killer", which is a pretty drastic method of ensuring the system can continue to work [by some definition of "work"].
3.- What are the problems with getting a segmentation fault (on the heap)?
Segmentation faults don't directly have anything to do with the heap. The term comes from older (Unix-style) operating systems that used "segments" of memory for different purposes, and a "segmentation fault" occurred when a program went outside its allocated segment. In modern systems, memory is split into "pages" -- typically 4KB each, though some processors have larger pages, and many modern processors also support "large pages" of, for example, 2MB or 1GB, which are used for large chunks of memory.
Now, if you use an address that points to a page that isn't there (or isn't "yours"), you get a segmentation fault. This typically ends the application then and there. You can "trap" a segmentation fault, but in all the operating systems I'm aware of it's not valid to try to continue from that trap -- though you could, for example, store away some files to explain what happened and help troubleshoot the problem later.
Firstly, your understanding of stack/heap allocation is backwards: stack-allocated data is automatically reclaimed when it goes out of scope, while dynamically-allocated data (data allocated with new or malloc, which generally lives on the heap) must be manually reclaimed with delete/free. However, you can use C++ destructors (RAII) to reclaim dynamically-allocated resources automatically, as in the sketch below.
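A minimal sketch of the three cases just mentioned (automatic stack storage, manual heap storage, and RAII-managed heap storage), assuming C++14 or later for std::make_unique:

    #include <memory>

    struct Widget { int value = 0; };

    void demo() {
        Widget onStack;                           // stack: freed at end of scope

        Widget *raw = new Widget;                 // heap: must be released by hand
        delete raw;                               // forget this and it leaks

        auto owned = std::make_unique<Widget>();  // heap, but RAII-owned:
    }                                             // its destructor frees it here

    int main() { demo(); }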
Secondly, the 3 questions you ask have nothing to do with the C++ language, but rather they are only answerable with respect to the environment/operating system you run a C++ program in. Modern operating systems generally isolate processes so that a misbehaving process doesn't trample over OS memory or other running programs. In Linux, for example, each process has its own address space which is allocated by the kernel. If a misbehaving process attempts to write to a memory address outside of its allocated address space, the operating system will send a SIGSEGV (segmentation fault) which usually aborts the process. Older operating systems, such as MS-DOS, didn't have this protection, and so writing to an invalid pointer or triggering a stack overflow could potentially crash the whole operating system.
Likewise, with most mainstream modern operating systems (Linux/UNIX/Windows, etc.), memory leaks (data which is allocated dynamically but never reclaimed) only affect the process which allocated them. When the process terminates, all memory allocated by the process is reclaimed by the OS. But again, this is a feature of the operating system, and has nothing to do with the C++ language. There may be some older operating systems where leaked memory is never reclaimed, even by the OS.
1.- Let's suppose that I run a program containing a recursive solution that never terminates (infinite recursion). Theoretically the program crashes (stack overflow), but does it cause any trouble to the computer itself (to the RAM, maybe, or to the OS)?
A stack overflow should cause trouble neither to the operating system nor to the computer. Any modern OS provides an isolated address space to each process; when a process tries to use more stack space than is available, the OS detects it (usually via an exception) and terminates the process. This guarantees that no other processes are affected.
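A deliberately broken sketch of that scenario; running it gets the process killed with a stack overflow / segmentation fault, while the OS and other processes carry on unharmed:

    // Warning: this program is meant to crash.
    void recurse() {
        volatile char frame[4096];  // burn some stack on every call; volatile
        frame[0] = 1;               // keeps the frame from being optimized away
        recurse();                  // no base case: the stack eventually overflows
        frame[1] = 2;               // used after the call, so no tail-call reuse
    }

    int main() {
        recurse();                  // never returns; the OS terminates the process
    }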
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble for the program, or is it permanent for the computer in general? It might be that such memory can never be used again or something.
It depends on whether your program is a long-running process or not, and on the amount of data you're failing to deallocate. In a long-running process (e.g. a server), a recurrent memory leak can lead to thrashing: after some time, your process will be using so much memory that it won't fit in physical memory. This is not a problem per se, because the OS provides virtual memory, but the OS will spend more time moving memory pages between physical memory and disk than doing useful work. This can affect other processes and may slow down the system significantly (to the point that it might be better to reboot it).
3.- What are the problems with getting a segmentation fault (on the heap)?
A segmentation fault will crash your process. It's not directly related to use of the heap, but rather to accessing a memory region that does not belong to your process (because it's not part of its address space, or because it was but has since been freed). Depending on what your process was doing, this can cause other problems: for instance, if the process was writing to a file when the crash happened, the file may well end up corrupt.
First, stack means automatic memory and heap means manual memory. There are ways around both, but that's generally a more advanced question.
On modern operating systems, your application will crash but the operating system and machine as a whole will continue to function. There are of course exceptions to this rule, but they're (again) a more advanced topic.
Allocating from the heap and then not deallocating when you're done just means that your program is still considered to be using the memory even though you're no longer using it. If left unchecked, your program will eventually fail to allocate memory (out-of-memory errors). How you handle out-of-memory errors can mean anything from a crash (an unhandled error resulting in an unhandled exception, or a NULL pointer being accessed and generating a segmentation fault), to odd behavior (the exception is caught or the NULL pointer tested for, but with no special handling), to nothing at all (properly handled, as in the sketch below).
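A sketch of the "properly handled" end of that spectrum, using the two standard C++ ways of detecting allocation failure:

    #include <cstdio>
    #include <new>

    int main() {
        // Option 1: nothrow new returns nullptr on failure, like malloc.
        int *a = new (std::nothrow) int[1000];
        if (a == nullptr) {
            std::fprintf(stderr, "allocation failed\n");
            return 1;
        }
        delete[] a;

        // Option 2: plain new throws std::bad_alloc on failure.
        try {
            int *b = new int[1000];
            delete[] b;
        } catch (const std::bad_alloc &e) {
            std::fprintf(stderr, "allocation failed: %s\n", e.what());
            return 1;
        }
        return 0;
    }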
On modern operating systems, the memory will be freed when your application exits.
A segmentation fault in the normal sense will simply crash your application. The operating system may immediately close file or socket handles. It may also produce a dump of your application's memory so that you can try to debug it post-mortem with tools designed for that (a more advanced subject).
Alternatively, most (I think?) modern operating systems will deliver a signal telling the program that it has done something bad. It is then up to the program's code to decide whether it can recover from that, or perhaps to add additional debug information for the operating system, or whatever really.
I suggest you look into smart pointers (sometimes called auto pointers) for making your heap behave a little more like a stack -- automatically deallocating memory when you're done using it. If you're using a modern compiler, see std::unique_ptr; if that type name can't be found, look into the Boost library (google it). It's a little more advanced but highly valuable knowledge.
Hope this helps.

Segfaults and Memory leaks

My question has two parts:
Is it possible that, if a segfault occurs after allocating memory but before freeing it, this leaks memory (that is, the memory is never freed resulting in a memory leak)?
If so, is there any way to ensure allocated memory is cleaned up in the event of a segfault?
I've been reading about memory management in C++ but was unable to find anything about my specific question.
In the event of a seg fault, the OS is responsible for cleaning up all the resources held by your program.
Edit:
Modern operating systems will clean up any leaked memory regardless of how your program terminates. The memory is only leaked for the life of your program. Most OS's will also clean up many other types of resources such as open files and socket connections.
Is it possible that, if a segfault occurs after allocating memory but
before freeing it, this leaks memory (that is, the memory is never
freed resulting in a memory leak)?
Yes and no: the process that crashes will be tidied up completely by the OS. However, consider other processes spawned by your process: they might not get terminated completely. Usually they shouldn't be holding too many resources, but this may vary depending on your program. See http://en.wikipedia.org/wiki/Zombie_process
If so, is there any way to ensure allocated memory is cleaned up in
the event of a segfault?
If the program is non-critical (meaning no lives are at stake if it crashes), I suggest fixing the segmentation fault. If you really need to be able to handle segmentation faults, see the answer on this topic: How to catch segmentation fault in Linux?
UPDATE: Please note that although it is possible to handle SIGSEGV signals (and continue in the program flow), this is not a safe thing to rely on, since -- as pointed out in the comments below -- it is undefined behaviour, meaning different platforms/compilers/... may react differently.
So fixing segmentation faults (as well as access violations on Windows) should, by any means possible, have first priority. If you still use the suggested approach of handling the signals this way, it must be thoroughly tested, and if it goes into production code you must be aware of it and accept the consequences -- which vary and depend on your requirements, so I will not name any.
The C++ standard does not concern itself with seg-faults (that's a platform-specific thing).
In practice, it really depends on what you do and on your definition of "memory leak". In theory, you can register a handler for the seg-fault signal, in which you can do all necessary cleanup. However, any modern OS will automatically clean up after a terminating process anyway.
One, there are resources the system is responsible for cleaning up, and memory is one of them. You do not have to worry about permanently leaked RAM after a segfault.
Two, there are resources that the system is not responsible for cleaning up. You could write a program that inserts its pid into a database and removes it on close; that record won't be removed on a segfault. You can either 1) add a handler to clean up that sort of non-system resource, or 2) fix the bugs in your program to begin with. A sketch of the pitfall follows.
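A sketch of that pitfall, with a hypothetical remove_pid_record() standing in for the database cleanup: an atexit() handler runs on normal termination but is skipped when the process dies from a signal such as SIGSEGV:

    #include <cstdio>
    #include <cstdlib>

    // Hypothetical stand-in for "remove my pid from the database".
    void remove_pid_record() {
        std::puts("pid record removed");  // runs on normal exit only
    }

    int main() {
        std::atexit(remove_pid_record);   // registered cleanup

        // On a clean return, remove_pid_record() runs.
        // If the process segfaulted here instead, the handler would be
        // skipped and the stale record would survive -- the leak above.
        return 0;
    }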
Modern operating systems separate applications' memory precisely so that they can clean up after them. The problem with segmentation faults is that they normally only occur when something has already gone wrong, at which point the default behavior is to shut the application down because it's no longer functioning as expected.
Likewise, barring some bizarre circumstance, your application has likely done something you cannot account for if it has hit a segmentation fault, so "cleaning up" is practically impossible. It's not out of the realm of possibility to manage memory carefully, similar to a transactional database, in order to guarantee an acceptable rollback state, but doing so (depending on how fine-grained a level you want) may be beyond tedious.
A more practical approach may be to provide your own sort of sandboxing between application components and to restart a component if it dies, restoring it to an acceptable previously saved state wholesale. That way you can flush all of its allocated memory and have it start from scratch. You still lose whatever data it hadn't saved as of the last checkpoint, however.