ANTLR3 C Target with C++ Exceptions

I have some experience with ANTLR 2's C++ target, but have been hesitant to spend much time on ANTLR 3 because of my fears about exception safety.
Sadly, ANTLR 3 only has a C target, which produces C that is "C++ compatible." This does not seem to include C++ exception safety, based on the following:
You can probably use [exceptions] carefully,
but as you point out, you have to be
careful with memory. The runtime
tracks all its normal memory
allocations so as long as you close
the 'classes' correctly you should
generally be OK. However, you should
make sure that throwing exceptions
does not bypass the normal rule clean
up, such as resetting error and
backtracking flags and so on.
(ANTLR-interest, circa 2009)
Does anyone have any experience using the ANTLR C target with (advanced) C++? Is it possible to safely throw exceptions? What extra code (if any) do I have to write to make it safe?

I don't have any ANTLR experience (sadly...), but there is NO way to make plain C code safe with exceptions flying around. I refer you to More Effective C++, Item 9: "Use destructors to prevent resource leaks".
The idea is that if an exception is thrown during cleanup, you have no information on what has already been delete()d and what has not, and your software will leak memory. If you use auto_ptr/scoped_ptr, you don't have to worry about this, since the compiler will deal with it for you.
But this idiom is C++-only; C was not designed with exceptions in mind.
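For illustration, here is a minimal sketch of that destructor idiom, in pre-C++11 style to match the auto_ptr/scoped_ptr era; FileGuard and parse are made-up names:

#include <cstdio>

// Sketch of "use destructors to prevent resource leaks": a tiny guard
// that closes a C-style resource in its destructor, so cleanup runs
// even while an exception unwinds the stack.
class FileGuard {
public:
    explicit FileGuard(const char* path) : f_(std::fopen(path, "r")) {}
    ~FileGuard() { if (f_) std::fclose(f_); }  // runs during unwinding too
    FILE* get() const { return f_; }
private:
    FILE* f_;
    FileGuard(const FileGuard&);              // non-copyable (pre-C++11 style)
    FileGuard& operator=(const FileGuard&);
};

void parse(const char* path) {
    FileGuard file(path);
    if (!file.get()) return;
    // ... parsing work that may throw; the FILE* is closed either way ...
}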

Related

What are the possible error handling strategies using Exceptions in C++, and what are their consequences and implications?

I would like your help in understanding what are the possible approaches to using/disabling exceptions in C++.
My question is not about what is the best choice but just about what are the possible options and what these options imply.
Currently, the options I can think of are:
1. Compiling with -fno-exceptions and giving up most std containers (possibly defining internal containers which do not throw, such as suggested in the SpiderMonkey Coding_Style)
2. Avoiding throwing and catching in my own code, but still using std containers which may throw exceptions; being happy with the fact that, in the case of an exception, the program may terminate without stack unwinding, and that even RAII-handled external resources may be left hanging. (This seems to be the Google C++ approach, according to answers to this SO question)
3. Not using exceptions, but wrapping everything in a catch-all std::exception try block, just to make sure the stack is unwound and RAII handles to external resources are released before the program is terminated, as in this Cert C++ rule
4. As above, but also throwing exceptions which will ultimately result in program termination.
5. Also catching exceptions and recovering from them.
I would like to know if my understanding of options is correct, and what I might be missing out or understanding wrong.
I would also like to know whether constraints on granting basic exception safety make sense for options 2-4 (where exceptions always ultimately lead to program termination) or if/how exception safety requirement can be relaxed/restricted to specific cases (e.g. handling external resources, files).
Update
Strictly speaking, when you want to disable exceptions, compiling with no exception support is the only true way to go, i.e. Option 1. When you disable exceptions you should also not use or even handle them: raising an exception would, on most implementations, instantly terminate or trap to a hard fault, and only this way do you avoid the overhead implications, in space or performance, however small they are (see below).
However, if you just want to know which paradigms of exception usage are out there, your options are almost right, plus one general idea you haven't mentioned: keep exceptions exceptional, and do the things that are likely to throw, or even just could throw, at program startup.
Even more generally, when it comes to error handling as such: exceptions are there to handle errors which you come across at runtime and which, if you do things correctly, you can ONLY detect at runtime. You can avoid exceptions by resolving every problem you can detect earlier, through proper coding and review (code time), strict types and checks (templates, compile time), and rechecking again (static analysers).
Update 2
In case I misunderstood you regarding caring about exception safety, I would say:
Basically, it first depends on whether you enable exceptions at all: if they are disabled, you can't and shouldn't care about exception safety, since there are no exceptions (and if one tries to come into existence, you terminate/crash/hard-fault anyway).
If you enable exceptions but don't use them in your own code, as in options 2, 3 and 4, there is no problem: you want to terminate anyway, so missing cleanup code is not relevant (and the cleanup of option 3 still runs in any case). But then you should make it clear to everybody that you don't want to use them, so nobody tries to recover from exceptions.
And if you do use them in your code, you should care about exception safety in the sense that, when an exception gets thrown, you clean up after yourself too; it is then up to the main handler, or to future code changes, whether you terminate or recover. Not cleaning up but still using exceptions wouldn't make sense; then you could just stick with option 1.
This should be to my knowledge exhaustive. For more information see below.
Recommendation
I recommend option 4, and I'll tell you why:
Exceptions are a very misunderstood and misused pattern. They are misused for program flow, as languages like Java do to excess. They are often forbidden in embedded or safety-critical code because, by their nature, it is hard to tell which handler will catch an exception, if there is one at all, and how long that will take; the C++ standard basically just says "implementation magic" here.
Background
However, in my view the hate for exceptions is basically a big XY problem:
When people complain that their recovery is broken and hard to reason about, the usual problem is that they don't see that you can't, and shouldn't, do much about most exceptions; that's what exceptions are for. Things like a timeout or a closed TCP connection are hardly abnormal, and using exceptions for them would of course be wrong, though many people do. But if your whole OS tells you that there is no network adapter, or no more memory, what could you do? The only thing you probably want is to try to log the reason somewhere, and that should be done in one try/catch around the main block.
The same goes for safety/real-time programs: take running out of memory; when that happens you are **** anyway. The strategy then is to do such allocation at (non-safety-critical) initialisation time, where exceptions are no problem either.
Using containers with throwing members is a similar scenario: what would you be able to do if you got their error code instead? Not much. That's why you make sure at code time that there is no reason for errors, like making sure that an element is really in the container, or reserving capacity for your vectors (see the sketch below).
This has the nice benefit of cleaner code, no forgotten error checks, and no performance penalty, at least with the common C++ implementations like gcc etc.
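As a minimal sketch of that "rule the errors out at code time" idea (names are illustrative):

#include <map>
#include <string>
#include <vector>

// Sketch: move the throwing paths to one known place instead of
// handling them everywhere. reserve() concentrates the allocation
// (and any bad_alloc) up front; find() avoids at()'s out_of_range.
void example(std::map<std::string, int>& table, std::vector<int>& v) {
    v.reserve(1024);                      // allocate once, up front
    for (int i = 0; i < 1024; ++i)
        v.push_back(i);                   // cannot reallocate now

    std::map<std::string, int>::iterator it = table.find("key");
    if (it != table.end()) {
        int value = it->second;           // checked access, nothing throws
        (void)value;
    }
}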
The reasoning behind option 3 is debatable; in my opinion you could do that too, but then my question is: what do you have to do in those destructors that the operating system wouldn't clean up anyway? The OS reclaims memory, sockets and so on. In a freestanding scenario the question remains, only in a different form: you plan to halt because, say, your UART broke; what would you like to do before halting, and why can't you do it in the catch block before rethrowing?
To sum up:
1. Has the problem of not using any throwing code, and still leaves you with the problem of how to handle rare error codes. (That's why so many C programmers still use goto or long jumps.)
2. Not viable IMHO; the worst of both.
3. As mentioned, OK, but what do you need to do in your static dtors that even abnormal termination wouldn't do?
4. My favourite.
5. Only if you really have rare conditions that you are actually able to recover from.
What I mean as general advice: raising an exception should mean that something happened that should never happen in normal operation, but did happen due to an unforeseeable fault, like a hardware failure, a detached cable, a missing shared library, or a programming fault of your own, like calling .at() with an index not in the container, something you could have known but the container cannot. Then it is only logical that throwing an exception should almost always lead to program termination/hard fault.
A consequence of compiling with exception support is that, for example, a freestanding ARM program grows by about 27 kB of code size (probably none for hosted Linux/Windows etc.) plus 2 kB of RAM in the freestanding case (see C++ exception handler on gnu arm cortex m4 with freertos).
However, you pay no performance penalty for using exceptions with common compilers like clang or gcc while your code runs normally, i.e. while the branches/ifs which would throw an exception are not taken.
As a reference, i.e. support for these statements, I refer to ISO/IEC TR 18015:2006, Technical Report on C++ Performance, with this little excerpt from 7.2.2.3:
Enable exception handling for real-time critical programs only if
exceptions are actually used. A complete analysis must always include
the throwing of an exception, and this analysis will always be
implementation dependent. On the other hand, the requirement to act
within a deterministic time might loosen in the case of an exception
(e.g. there is no need to handle any more input from a device when a
connection has broken down). An overview of alternatives for exception
handling is given in §5.4. But as shown there, all options have their
run-time costs, and throwing exceptions might still be the best way to
deal with exceptional cases. As long as no exceptions are thrown a
long way (i.e. there are only a few nested function calls between the
throw-expression and the handler), it might even reduce run-time
costs.

How to turn off exception handling?

In the book More Effective C++ (Item 15), I read that code becomes significantly slower if exceptions are enabled, even if they are not used. In my opinion, exceptions are of limited use, and I try to avoid them, but this is another topic.
I do not entirely understand his statement:
What does enabling/disabling exceptions mean? Is it the difference between having zero or more than zero try/catch blocks? Is it a compiler flag? What happens if I use a DLL in which exceptions can occur?
Suppose no exception is ever thrown:
Does the code become slower as a whole, or do only the parts where the program enters/exits try/catch blocks become slower? According to the author, both are true.
How can I compile without exceptions? Can I do this even if I have try/catch blocks? Can I do this if DLLs I use might throw exceptions?
What does enabling/disabling exceptions mean?
Passing a flag to the compiler which disables standard conformance in relation to exceptions and makes it not generate any exception support.
What happens if I use a DLL in which exceptions can occur?
If the library handles exceptions internally, nothing. If it lets one escape to the caller (I have never seen a library do that, because of ABI issues, but whatever), your program crashes (in the best case), as it cannot handle it. If there is a wrapper for the DLL which your code includes and which converts error codes to exceptions (a common occurrence), then it is the same as if you used exceptions in your own code.
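For illustration, a sketch of such a wrapper; lib_connect and lib_strerror are hypothetical stand-ins for a DLL's C API:

#include <stdexcept>

// Sketch: the wrapper converts the library's error codes into C++
// exceptions on the caller's side of the DLL boundary, so no exception
// ever crosses the C ABI itself.
extern "C" int lib_connect(const char* host);   // returns 0 on success
extern "C" const char* lib_strerror(int code);

void connect_or_throw(const char* host) {
    int rc = lib_connect(host);
    if (rc != 0)
        throw std::runtime_error(lib_strerror(rc));
}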
Does the code become slower as a whole or are only the parts where the program enters/exits try/catch blocks become slower? According to the author, both is true.
Notice that the book you are citing is old, and compilers have evolved. Modern compilers use zero-cost exceptions, which incur no performance cost as long as no exception is thrown. Exception handling does make the executable larger, since the compiler must generate all the data and code needed to process exceptions, but it should not make the non-exceptional path slower.
How can I compile without exceptions? Can I do this even if I have try/catch blocks?
You do that in a compiler-specific way; consult your compiler's documentation. Usually doing so makes the compiler reject code containing any exception-related facilities, e.g. flagging every try block as an error.
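For example, with GCC or Clang the flag is -fno-exceptions, and code like the following is rejected at compile time (the exact diagnostic wording varies by compiler and version):

// Compile as:  g++ -fno-exceptions example.cpp
// GCC reports something along the lines of "error: exception handling
// disabled, use -fexceptions to enable it" for the try/throw below.
int parse(const char* s) {
    try {                        // error under -fno-exceptions
        if (!s) throw 1;         // error under -fno-exceptions
        return 0;
    } catch (...) {
        return -1;
    }
}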

How to overcome C++'s lack of tool support for embedded systems? [closed]

The question is NOT about the Linux kernel. It is NOT a C vs. C++ debate either.
I did some research, and it seems to me that C++ lacks tool support when it comes to exception handling and memory allocation for embedded systems:
Why is the linux kernel not implemented in C++?
Besides the accepted answer, see also Ben Collins' answer.
Linus Torvalds on C++:
"[...] anybody who designs his kernel modules for C++ is [...]
(b) a C++ bigot that can't see what he is writing is really just C anyway"
" - the whole C++ exception handling thing is fundamentally broken. It's especially broken for kernels.
- any compiler or language that likes to hide things like memory allocations behind your back just isn't a good choice for a kernel."
JOINT STRIKE FIGHTER AIR VEHICLE C++ CODING STANDARDS:
"AV Rule 208 C++ exceptions shall not be used"
Are the exception handling and the memory allocation the only points where C++ apparently lacks tool support (in this context)?
To fix the exception handling issue, does one have to provide a bound on the time until an exception is caught after it is thrown?
Could you please explain to me why memory allocation is an issue? How can one overcome this issue; what has to be done?
As I see this, in both cases one has to provide an upper bound at compile time on something nontrivial that happens and depends on things at run-time.
Answer:
No; dynamic casts were also an issue, but that has been solved.
Basically yes. The time needed to handle exceptions has to be bounded by analyzing all the throw paths.
See solution on slides "How to live without new" in Embedded systems programming. In short: pre-allocate (global objects, stacks, pools).
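A minimal sketch of that pre-allocation idea (names are illustrative; alignment handling is omitted for brevity):

#include <cstddef>
#include <new>

// Sketch: a fixed-size pool carved out at startup and handed out with
// placement new; nothing here touches a general-purpose heap, and
// exhaustion is a checkable, bounded condition rather than a throw.
template <typename T, std::size_t N>
class StaticPool {
public:
    StaticPool() : used_(0) {}
    T* create() {
        if (used_ >= N) return 0;          // bounded; no bad_alloc
        void* slot = storage_ + used_ * sizeof(T);
        ++used_;
        return new (slot) T();             // placement new into the pool
    }
private:
    unsigned char storage_[N * sizeof(T)]; // (alignment ignored for brevity)
    std::size_t used_;
};

// Usage: a global pool sized at compile time, e.g.
//   static StaticPool<Connection, 16> g_connections;
//   Connection* c = g_connections.create();   // 0 when exhausted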
Well, there are a couple of things. First, you have to remember that the STL is completely built on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are the one providing the system calls. Then there is the fact that C interoperates very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are the names. Because of name mangling, you get a whole new level of complication.
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface between the assembly and the C++. Another reason is that in systems programming, you need to know exactly how functions are being called. C has the very well documented C calling conventions, but in C++ you have the this pointer to deal with, name mangling, etc.
In short, it's because C++ does things without you asking.
Per #Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames and, most of all, make assembly interop even harder: when you destroy a C++ stack frame, you have to call the destructor of every object in it. This gets ugly quickly.
Why do certain kernels refuse C++ code in their code base? Politics and preference, but I digress.
Some parts of modern OS kernels are written in certain subsets of C++. In these subsets mainly exceptions and RTTI are disabled (sometimes multiple inheritance and templates are disallowed, too).
The same is true in C: certain features should not be used in a kernel environment (e.g. VLAs).
Outside of exceptions and RTTI, certain features in C++ are heavily critiqued, when we are talking about kernel code (or embedded code).
These are vtables and constructors/destructors. They bring a bit of code under the hood, and that seems to be deemed 'bad'. If you don't want a constructor, then don't implement one. If you worry about using a class with a constructor, then worry just as much about a function you have to call to initialize a struct. The upside in C++ is that you cannot really forget to run a dtor, short of forgetting to deallocate the memory.
But what about vtables?
When you implement an object which contains extension points (e.g. a Linux filesystem driver), you implement something like a class with virtual methods. So why is it sooo bad to have a vtable? You have to control the placement of that vtable when you have requirements on which pages it resides in. As far as I recall, this is irrelevant for Linux, but under Windows code pages can be paged out, and if you call a paged-out function from a too-high IRQL, you crash. But you have to watch out what functions you call at a high IRQL anyway, virtual or not. And you don't need to worry if you don't make a virtual call in this context. In embedded software this could be worse, because (very seldom) you need to directly control which code page your code goes into, but even there you can influence what your linker does.
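To make the comparison concrete, here is the same extension point in both styles (a hedged sketch, loosely modelled on a filesystem driver; names are made up):

// C style: the "vtable" is spelled out by hand, as in Linux's
// file_operations-like tables of function pointers.
struct fs_ops {
    int (*open)(const char* path);
    int (*close)(int fd);
};

// C++ style: the compiler emits the equivalent (single, per-class)
// table for you; the call sites look much the same either way.
class FsDriver {
public:
    virtual ~FsDriver() {}
    virtual int open(const char* path) = 0;
    virtual int close(int fd) = 0;
};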
So why are so many people so adamant of 'use C in kernel'?
Because they either got burned by a toolchain problem, or got burned by overenthusiastic developers using the latest stuff in kernel mode.
Maybe the kernel-mode developers are rather conservative, and C++ is a too newfangled thing ...
Why are exceptions not used in kernel-mode code?
Because they need to generate some code per function, they introduce complexity in a code path, and an unhandled exception is fatal for a kernel-mode component: it kills the system.
In C++, when an exception is thrown, the stack must be unwound and the corresponding destructors must be called. This involves at least a bit of overhead. It is mostly negligible, but it does incur a cost, which may not be something you want. (Note that I do not know how much a table-based unwind actually costs; I think I read that there is no cost while no exception is in flight, but I guess I have to look it up.)
A code path which cannot throw exceptions can be reasoned about much more easily than one which may.
So :
int f( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )
        return -2;
    f3();
    return h();
}
We can reason about every exit path in this function, because we can easily see all the returns. But when exceptions are enabled, any of the called functions may throw, and we cannot guarantee which path the function actually takes. This is exactly the point about code doing something we cannot see at a glance. (This is bad C++ code when exceptions are enabled.)
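For contrast, a sketch of the same function shape with exceptions enabled; every marked call is a potential exit path that is invisible at the call site:

int g();    // assume any of these may throw
void f3();
int h();

int f2( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )       // may throw: hidden exit path
        return -2;
    f3();               // may throw: hidden exit path
    return h();         // may throw: hidden exit path
}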
The third point is: you want user-mode applications to crash when something unexpected occurs (e.g. when memory runs out); a user-mode application should crash (after freeing resources) to let the developer debug the problem, or at least get a good error message. An uncaught exception in a kernel-mode module, on the other hand, must never happen.
Note that all this can be overcome; there are SEH exceptions in the Windows kernel, so points 2 and 3 are not really good arguments against the NT kernel.
There are no memory management problems with C++ in the kernel. E.g. the NT kernel headers provide overloads for new and delete, which let you specify the pool type of your allocation, but are otherwise exactly the same as the new and delete in a user mode application.
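As a hedged sketch of what such pool-typed overloads can look like in driver code (this assumes WDK headers; the tag is made up, and real drivers add nothrow checks and matching placement deletes):

#include <wdm.h>

// Sketch: an operator new taking the pool type, in the style the
// answer describes. Returns NULL on failure, which is tolerable here
// only because kernel code is built without exception support.
void* operator new(size_t size, POOL_TYPE pool) {
    return ExAllocatePoolWithTag(pool, size, 'xmpE');
}

void operator delete(void* p) {
    if (p) ExFreePoolWithTag(p, 'xmpE');
}

// Usage in a driver, choosing the pool per allocation:
//   MyObject* obj = new (NonPagedPool) MyObject();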
I don't really like language wars, and have voted to close this again. But anyway...
Well, there are a couple of things. First, you have to remember that the STL is completely built on OS routines, the C standard library, and dynamic allocation. When you're writing a kernel, there is no dynamic memory allocation for you (you're providing it), there is no C standard library (you have to provide one built on top of your kernel), and you are the one providing the system calls. Then there is the fact that C interoperates very well and easily with assembly, whereas C++ is very difficult to interface with assembly because the ABI isn't necessarily constant, nor are the names. Because of name mangling, you get a whole new level of complication.
No, with C++ you can declare functions having an extern "C" (or optionally extern "assembly") calling convention. That makes the names compatible with everything else on the same platform.
Then, you have to remember that when you are building an OS, you need to know and control every aspect of the memory used by the kernel. In C++, there are quite a few hidden structures that you have no control over (vtables, RTTI, exceptions) that would severely interfere with your work.
You have to be careful when coding kernel features, but that is not limited to C++. Sure, you cannot use std::vector<byte> as the base for you memory allocation, but neither can you use malloc for that. You don't have to use virtual functions, multiple inheritance and dynamic allocations for all C++ classes, do you?
In other words, what Linus is saying is that with C, you can easily understand the assembly being generated, and it is simple enough to run directly on the machine. Although C++ can too, you will always have to set up quite a bit of context and still write some C to interface between the assembly and the C++. Another reason is that in systems programming, you need to know exactly how functions are being called. C has the very well documented C calling conventions, but in C++ you have the this pointer to deal with, name mangling, etc.
Linus is possibly claiming that he can spot every call to f(x) and immediately see that it is calling g(x), h(x), and q(x) 20 levels deep. Still, MyClass M(x); is a great mystery, as it might be calling some unknown code behind his back. Lost me there.
In short, it's because C++ does things without you asking.
How? If I write a constructor and a destructor for a class, it is because I am asking for the code to be executed. Don't tell me that C can magically copy an object without executing some code!
Per #Josh's comment below, another thing C++ does behind your back is constructors and destructors. They add overhead to entering and exiting stack frames and, most of all, make assembly interop even harder: when you destroy a C++ stack frame, you have to call the destructor of every object in it. This gets ugly quickly.
Constructors and destructors do not add code behind your back; they are only there if needed. Destructors are called only when it is required, like when dynamic memory needs to be deallocated. Don't tell me that C code works without this.
One reason for the lack of C++ support in both Linux and Windows is that a lot of the guys working on the kernels have been doing this since long before C++ was available. I have seen posts from the Windows kernel developers arguing that C++ support isn't really needed, as there are very few device drivers written in C++. Catch-22!
Are the exception handling and the memory allocation the only points where C++ apparently lacks tool support (in this context)?
In places where this is not properly handled, just don't use it. You don't have to use multiple inheritance, dynamic allocation, and throwing exceptions everywhere. If returning an error code works, fine. Do that!
To fix the exception handling issue, one has to provide bound on the time till the exception is caught after it is thrown?
No, but you just cannot use application-level features in the kernel. Implementing dynamic memory using a std::vector<byte> isn't a good idea, but who would really try that?
Could you please explain me why the memory allocation is an issue? How can one overcome this issue, what has to be done?
Using standard library features that depend on memory allocation in a layer below the functions implementing the memory management would be a problem. Implementing malloc with calls to malloc would be just as silly. But who would try that?

What happens if an exception gets thrown "through" C code? [duplicate]

Possible Duplicate:
Will C++ exceptions safely propagate through C code?
If you have C code, for example the png lib, with your own I/O handlers which are written in C++, and an exception gets thrown due to some I/O error, is it OK to let it propagate through the C code and catch it outside of the C code? I know that care has to be taken with memory leaks, but typically all structures get pre-allocated.
It is entirely up to the compiler whether this will work or not. Obviously, neither language standard can say anything about what the other language should do.
In the best case, the exception will pass through the C code and get back to the next C++ level, possibly leaking any dynamically allocated C structures along the way. In the less good case, it will crash and burn!
One possibility is of course to compile the C code as C++, while also checking that it is exception neutral.
The answer to your question is implementation-defined.
For GCC, as #Kerrek says, compiling the C code with -fexceptions will ensure the run-time remains consistent when an exception is thrown through C code. (Note that modern exception mechanisms are not just implemented via setjmp/longjmp; they actually unwind the stack frame by frame in order to handle destructors and try/catch blocks properly.)
However, the C code itself may not be expecting to have exceptions thrown through it. Lots of C code is written like this:
acquire a resource
do some stuff
release resource
Here "acquire a resource" could mean malloc, or pthread_mutex_lock, or fopen. If your C++ code is part of the "do some stuff", and it throws an exception, well... Whoops.
Resource leaks are not the only problem; so is correctness in general. Imagine:
subtract $100 from savings account
add $100 to checking account
Now imagine an exception gets thrown after the first step finishes but before the second one does.
In short, although your implementation may provide a way to let you throw exceptions through C code, it is a bad idea unless you also wrote that C code knowing exactly where an exception might be thrown.
It's okay for your C++ code if it's:
Compiled with exceptions enabled.
Exception safe.
It's okay for your C or C++ code if:
No resources need to be collected explicitly between the throwing of the exception, and its being caught.
Code compiled with C++ exception support adds special hooks to clean up when exceptions are thrown and not caught in that scope. C++ objects allocated on the stack will have their destructors invoked.
The only resource reclaimed in C (or C++ without exception support) when an exception is thrown is the space allocated in each stack frame.
This section of the GCC manual is quite helpful:
-fexceptions
Enable exception handling. Generates extra code needed to
propagate exceptions. For some targets, this implies GCC will
generate frame unwind information for all functions, which can
produce significant data size overhead, although it does not affect
execution. If you do not specify this option, GCC will enable it
by default for languages like C++ which normally require exception
handling, and disable it for languages like C that do not normally
require it. However, you may need to enable this option when
compiling C code that needs to interoperate properly with exception
handlers written in C++. You may also wish to disable this option
if you are compiling older C++ programs that don't use exception
handling.
Exceptions aren't usually really thrown "through" code; as the action implies, they are thrown "over" code.
For instance, at least old-school Objective-C exceptions are usually implemented with setjmp and longjmp, essentially storing the addresses of code for later use.
It makes sense for C++ exceptions to be implemented with similar mechanisms; and as such, the type of code the exception is thrown "through", or more accurately "over", matters little, if at all. One could even imagine a setting where an Objective-C exception is thrown over a C++ catch, or vice versa.
Edit: As Bo Persson mentions, this is not to say that a C++ exception interrupting C code wouldn't cause havoc and leaks; but exceptions being thrown is usually a Bad Thing™ all round, so it's not likely to matter much.
PS: Kudos on actually asking a question where using both C and C++ tags is relevant. ;)
C doesn't know anything about exceptions so it's not possible to say what would happen.
To write conforming code you'd need to catch exceptions prior to returning into the C library, translate to error codes, and then translate the C-library error back into an exception on the other end.
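A minimal sketch of that conforming approach (my_io_handler and do_cpp_io are illustrative names):

void do_cpp_io(void* ctx);              // the real C++ work; may throw

// Sketch: the C library only ever sees an int error code. The
// exception is caught before control returns into C, and can be
// re-created from the code on the far side of the library.
extern "C" int my_io_handler(void* ctx) {
    try {
        do_cpp_io(ctx);
        return 0;
    } catch (...) {
        return -1;                      // C-friendly failure code
    }
}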

Google C++ style guide's No-exceptions rule; STL?

Google's C++ style guide says "We do not use exceptions". The guide does not mention the STL with respect to exception usage. Since STL allocations can fail, how do they handle exceptions thrown by containers?
If they use STL, how is the caller informed of allocation failures? STL methods like push_back() or map operator[] do not return any status codes.
If they do not use STL, what container implementation do they use?
They say that they don't use exceptions, not that nobody should use them. If you look at the rationale they also write:
Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
The usual legacy problem. :-(
We simply don't handle exceptions thrown by containers, at least in application-level code.
I've been an engineer at Google Search working in C++ since 2008. We do use STL containers often. I cannot personally recall a single major failure or bug that was ever traced back to something like vector::push_back() or map::operator[] failing, where we said "oh man, we have to rewrite this code because the allocation could fail" or "dang, if only we used exceptions, this could have been avoided." Does a process ever run out of memory? Yes, but this is usually a simple mistake (e.g., someone added a large new data file to the program and forgot to increase the RAM allocation) or a catastrophic failure where there's no good way to recover and proceed. Our system already manages and restarts jobs automatically to be robust to machines with faulty disks, cosmic rays, etc., and this is really no different.
So as far as I can tell, there is no problem here.
I'm pretty sure they mean that they do not use exceptions in their own code. If you check out their cpplint script, it does check to ensure you are including the correct headers for STL containers (like vector, list, etc.).
I have found that Google mentions this explicitly about STL and exceptions (emphasis is mine):
Although you should not use exceptions in your own code, they are used
extensively in the ATL and some STLs, including the one that comes
with Visual C++. When using the ATL, you should define
_ATL_NO_EXCEPTIONS to disable exceptions. You should investigate whether
you can also disable exceptions in your STL, but if not, it is
OK to turn on exceptions in the compiler. (Note that this is only to
get the STL to compile. You should still not write exception handling
code yourself.)
I don't like such decisions (luckily I am not working for Google), but they are quite clear about their behaviour and intentions.
You can’t handle allocation failures anyway on modern operating systems; as a performance optimization, they typically over-commit memory. For instance, if you call malloc() and ask for a really huge chunk of memory on Linux, it will succeed even if the memory required to back it actually isn’t there. It’s only when you access it that the kernel actually tries to allocate pages to back it, and at that point it’s too late to tell you that the allocation failed anyway.
So:
Except in special cases, don’t worry about allocation failures. If the machine runs out of memory, that’s a catastrophic failure from which you can’t reliably recover.
Nevertheless, it's good practice to catch otherwise-unhandled exceptions at the top level, log the e.what() output, then re-throw, since that may be more informative than a backtrace, and typical C++ library implementations don't do it for you automatically.
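A minimal sketch of that practice (run_application is a placeholder for the real program):

#include <cstdio>
#include <exception>

int main() {
    try {
        // run_application();           // assumed to exist
    } catch (const std::exception& e) {
        std::fprintf(stderr, "fatal: %s\n", e.what());
        throw;                          // re-throw: terminate abnormally
    }
    return 0;
}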
The whole huge thread above about how you can't rely on crashing when you run out of memory is complete and utter rubbish. The C(++) standard may not guarantee it, but on modern systems crashing is the only thing you can rely on if you run out of memory. In particular, you can't rely on getting a NULL, or indeed any other indication from your allocator, up to and including a C++ exception.
If you find yourself on an embedded system where page zero is accessible, I strongly suggest you fix that by mapping an inaccessible page at that location. Human beings cannot be relied upon to check for NULL pointers everywhere, but you can fix that by mapping a page once rather than trying to correct every possible (past, present and future) location at which someone might have missed a NULL.
I will qualify the above by saying that it’s possible you’re using some kind of custom allocator, or that you’re on a system that doesn’t over-commit (embedded systems without swap are one example of that, but not the only example). In that case, maybe you can handle out-of-memory conditions gracefully, on your system. But in general in the 21st century I’m afraid you are unlikely to get the chance; the first you’ll know that your system is out of memory is when things start crashing.
The STL itself only throws directly in case of memory allocation failure. But a real-world application can usually fail for a variety of reasons; memory allocation failure is just one of them. On 32-bit systems memory allocation failure is not something that should be ignored, as it can occur. So the entire discussion above, claiming that memory allocation failure is not going to happen, is kind of pointless. Even granting that, one would have to write one's code using two-step initialization. And C++ exception handling predates 64-bit architectures by a long time.
I'm not certain how far I should go in dignifying the negative professionalism shown here by Google, rather than just answering the question asked. I remember some paper from IBM from around 1997 describing how well some people at IBM understood and appreciated the implications of C++ exception handling. OK, professionalism is not necessarily an indicator of success.
So giving up exception handling is not only a problem if one uses the STL; it is a problem if one uses C++ as such. It means giving up on:
constructor failure
being able to use member objects and base-class objects as arguments for any of the subsequent base/member class constructors (without any testing). It is no wonder that people used two-step construction before C++ exception handling existed.
hierarchical and rich error messages in an environment which allows code to be provided by customers or third parties and to throw errors which the original writer of the calling code could not possibly have foreseen when writing the calling code, and could not have reserved space for in his return error code range.
avoiding such hacks as returning a pointer to a static memory object to signal allocation failure, which the authors of FlexLm did
being able to use a memory allocator returning addresses into a memory-mapped sparse file. In this case allocation failure happens when one accesses the memory in question. (OK, currently this works only on Windows, but Apple forced the GNU team to provide the necessary functionality in the g++ compiler; some more pressure from Linux g++ developers will be needed to provide this functionality for them as well.) (Oops -- this even applies to the STL.)
being able to leave C-style coding (ignoring return values) behind us, instead of having to use a debugger with a debug executable to find out what is failing in a non-trivial environment with child processes and shared libraries provided by third parties, or during remote execution
being able to return rich error information to the caller without just dumping everything to stderr
There is only one possibility to handle allocation failure under the assumptions outlined in the question: have the allocator force application exit on allocation failure. In particular, this requires a custom allocator.
Index-out-of-bounds exceptions are less interesting in this context, because the application can ensure they won't happen using pre-checks.
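One hedged way to approximate "the allocator forces application exit on allocation failure" with the standard allocator is a new-handler that aborts; a minimal sketch:

#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch: once installed, no bad_alloc ever propagates from new;
// allocation failure logs and aborts, so call sites need no handling.
void die_on_oom() {
    std::fputs("out of memory, aborting\n", stderr);
    std::abort();
}

int main() {
    std::set_new_handler(die_on_oom);
    // ... the rest of the program can treat `new` as infallible ...
    return 0;
}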
Late to the party here, but I haven't seen any comparison between C++ exception handling at Google and error handling in Go. Specifically, Go only has error handling via its built-in error type, and the linked Golang blog post explicitly concludes:
Proper error handling is an essential requirement of good software. By employing the techniques described in this post you should be able to write more reliable and succinct Go code.
The creation of Golang certainly took into account best practices learned from working with C++. The underlying intuition is that less can be more. I haven't worked at Google, but I do find their use of C++ and their creation of Golang potentially suggestive of underlying best practices at the company.