Why does "dynamic exception" guarantee cause overhead? - c++

In C++11 this is deprecated:
void foo() throw();
and replaced by
void foo() noexcept;
In this article it is explained that the reason for this (among others, that boil down to the same thing) is that
C++ exception specifications are checked at runtime rather than at compile time, so they offer no programmer guarantees that all exceptions have been handled.
While this does make sense to me, I don't understand why throw() was checked dynamically in the first place, or why noexcept provides no guarantee beyond calling std::terminate instead of performing normal stack unwinding (which is not really a solid guarantee, in my opinion).
Wouldn't it be possible to check whether exceptions are thrown or not during compile time and fail compilation if this happens? As I see it, there are basically three cases:
void foo() noexcept
{
    // 1. Trivial case
    throw myexcept();

    // 2. Try-catch case
    // Necessary to check whether myexcept is derived
    // from exception
    try
    {
        throw myexcept();
    }
    catch (exception const& e)
    {}

    // 3. Nested function call
    // Recursion necessary
    bar();
}
With templates in C++ being instantiated for every type, compiling applications takes forever anyway - so why not change noexcept to force the compiler to check whether exceptions are thrown during compile time?
The only difficulty I see is that a function may or may not throw depending on runtime state - but such a function should not be allowed to declare itself noexcept anyway, in my opinion.
Am I missing something, or was the intent to not increase the compilation time further, or to go easy on the compiler developers?
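For what it's worth, C++11 does at least give a compile-time query, if not compile-time enforcement: the noexcept operator reports whether an expression is declared non-throwing, so one can static_assert on a callee's specification. A minimal sketch, with bar() as a hypothetical callee:

void bar() noexcept;              // hypothetical callee

void check_bar() noexcept
{
    // The noexcept operator only inspects bar()'s *declared* specification;
    // it does not analyse what bar() could actually throw.
    static_assert(noexcept(bar()), "bar() must be declared noexcept");
    bar();
}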

I think a lot of it came down to the fact that when exception specifications were being defined, compiler writers were well behind the power curve. Implementing C++98 was sufficiently complex that there has only ever been one compiler that even claimed to implement all its features. Every other compiler left out at least one major feature that was included in the standard. Most fairly openly admitted that they left out substantially more than that.
You also need to keep in mind that dynamic exception specifications were considerably more complex than just throw(). They allow a programmer to specify an arbitrary set of types that can be thrown. Worse still, specifying that a function can throw foo means it can also throw anything derived from foo.
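As a rough illustration of that extra complexity (the names here are made up): a throw(Base) specification has to let through anything derived from Base, so the run-time check amounts to matching the thrown object against a whole class hierarchy, not a simple type comparison.

#include <stdexcept>

struct parse_error : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// Pre-C++17 dynamic exception specification.
void load(bool broken) throw(std::runtime_error)
{
    if (broken)
        throw parse_error("bad input");   // fine: derived from std::runtime_error
    throw 42;                             // violates the specification,
                                          // so std::unexpected() is called at run time
}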
Enforcing exception specifications statically could have been done, but it would clearly have added quite a bit of extra work, and nobody was really sure what (if any) benefit it would provide. Under the circumstances, I think it was pretty easy for most to think that static enforcement was something that could be required later if there seemed to be enough use to justify the work. Changing from enforcing at run-time to compile-time wouldn't require modifying existing code, only existing implementations.
Another point is that I'm not sure there was ever really strong support of exception specifications anyway. I think there was general agreement on the basic idea, but when you get down to it, probably less about the details.
Bottom line: it was easy to mandate only dynamic enforcement, and leave static enforcement for later (if at all). Turns out, static enforcement probably wouldn't really add all that much positive in any case, so mandating it probably wouldn't have accomplished much anyway.

Related

g++ -fno-enforce-eh-specs - why/how does this violate the C++ standard?

From man gcc:
-fno-enforce-eh-specs
Don't generate code to check for violation of exception specifications
at run time. This option violates the C++ standard, but may be useful
for reducing code size in production builds.
When a compiler optimizes, it removes all sorts of checks, it breaks the boundaries between functions, it might even avoid things like system calls the developer put in the code in some cases. So:
Why is exception specification violation checking so special that it cannot be skipped?
What exactly is being checked?
If using this switch is so useful for reducing code size, how come the C++ standard requires these checks? I thought the idea is zero-overhead abstractions whenever possible.
The (now old?) standard requires that a function declared void f() throw(x); must not throw any exception other than x (or possibly derived from x?). If it tries to throw something else, std::unexpected() is called, which usually ends up in a call to std::terminate() to kill the program.
But if the exact type of the exception cannot be determined at compile time, a run-time check might be needed to determine if the exception is of an acceptable type.
Of course, if that check is removed by this option, an incorrect exception might escape from the function. And that would be a violation of the standard.
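A sketch of the situation being described, with helper() as a hypothetical function defined elsewhere: the compiler cannot see what helper() might throw, so f() needs a run-time filter to honour its specification, and -fno-enforce-eh-specs simply omits that filter.

struct x {};

void helper();        // defined in another translation unit; may throw anything

void f() throw(x)     // pre-C++17 dynamic exception specification
{
    helper();         // the thrown type is unknown at compile time, so a
                      // run-time check must decide what is allowed to escape
}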
Why is exception specification violation checking so special that it cannot be skipped?
What exactly is being checked?
This has been answered already, but for completeness in my answer as well: for functions with throw(...) and noexcept specifications, when exceptions get thrown that are not permitted, std::unexpected() or std::terminate() must be called.
Effectively, they turn something like
void f();
void g() noexcept { f(); }
into something very similar to:
void f();
void g() { try { f(); } catch (...) { std::terminate(); } }
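By the same token, a non-empty specification filters by type rather than rejecting everything. Roughly (a sketch, not literally what a compiler emits), given

struct X {};
void body();                      // hypothetical original body

a function written as

void h() throw(X) { body(); }

behaves very much like:

void h()
{
    try { body(); }
    catch (X&)  { throw; }              // allowed type: let it propagate
    catch (...) { std::unexpected(); }  // anything else: unexpected handler (from <exception>)
}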
If using this switch is so useful for reducing code size, how come the C++ standard requires these checks? I thought the idea is zero-overhead abstractions whenever possible.
Indeed. But there is no way to implement this without overhead. Compilers do already manage to implement exceptions in such a way that the code executed at run time, as long as no exceptions are thrown, is identical to the version without exception handling. In that sense, on platforms with suitable ABIs, zero-overhead exceptions have been achieved. They do, however, still annotate that code with special markers that let the run-time library figure out what code to call when exceptions do end up thrown. Those special markers take up space. The code that ends up interpreting those markers takes up space as well. And the actual exception handling code, even if it's only std::terminate(), takes up yet more space.

Should I declare wrappers for C functions noexcept?

Suppose you have a C++ function that only makes calls to C functions, like
#include <windows.h>
#include <shellapi.h>   // SHEmptyRecycleBinW and the SHERB_* flags (link against Shell32.lib)

int ClearTheBin()
{
    int result = SHEmptyRecycleBinW(
        nullptr,
        nullptr,
        SHERB_NOCONFIRMATION |
        SHERB_NOPROGRESSUI |
        SHERB_NOSOUND);

    if (SUCCEEDED(result) ||
        result == E_UNEXPECTED) // Already empty
    {
        return 0;
    }
    return result;
}
There are obviously a zillion different things that could go wrong with the call to the C function, but since C lacks exceptions, such errors are reported through the returned error code. My question is: should such functions be declared noexcept? Even if the method can't throw, it may give the reader the false impression "Nothing can go wrong with this function and I can assume it's 100% reliable," because that's usually what noexcept functions imply.
What are your thoughts on the matter?
"Nothing can go wrong with this function and I can assume it's 100%
reliable"
I wouldn't really assume that so quickly. Typically if I see a function marked as noexcept, I'd tend to look for alternative ways it could give me errors like a boolean state, some kind of global error retrieval function, a success/fail return value, something like that. Only lacking that, and perhaps in the face of a function that causes no side effects, might I sort of make this assumption.
In your case, you got the whole error code kind of thing going for the function, so clearly one glance at the interface documentation should tell me that things can go wrong, yet the noexcept suggests to me that these exceptional cases won't (or at least shouldn't) be reported to me (the caller) in the form of an exception.
Applying noexcept
It seems generally on the safe side to be reluctant to use noexcept. Introduce, say, a single std::string into that function and now it can throw, except that the noexcept turns the throw into a termination rather than something we can catch. Guaranteeing that a function will never throw at the interface/design level is hard to ensure in anything that actually mixes in C++ code, especially in a team setting where any random colleague might introduce (maybe on a crunch day) some random C++ function call or object that can throw into this function.
Impossible to Throw
In some cases it's actually easy to make this guarantee. One example is a noexcept function which has its own try/catch block, where everything the function does is contained within. If an exception is caught, translate it into an error code of some sort and return that. In those cases, the try/catch guarantees that the function will swallow the exception and not leak it to the outside world. It becomes easy to apply noexcept here because the function can't throw.
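A minimal sketch of that pattern (Status and DoWork are made-up names): everything that can throw is caught inside the function and translated into a return code, so the noexcept promise is easy to keep.

#include <exception>

enum class Status { Ok, Failed };

Status DoWork() noexcept
{
    try
    {
        // ... work that may throw ...
        return Status::Ok;
    }
    catch (const std::exception&)
    {
        return Status::Failed;   // translate the exception into an error code
    }
    catch (...)
    {
        return Status::Failed;   // swallow anything else as well
    }
}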
Can't Possibly Throw Correctly
Another example of where noexcept doesn't require too much thought is in the case of a function that should be considered broken if it threw. An example is an API function which is called across module boundaries for an SDK designed to allow plugins to be built with any compiler. Since we shouldn't throw across module boundaries, and especially in this scenario, throwing would already be UB and shouldn't be done. In that case, noexcept can be easy to apply because the correct behavior of the function, by design, should never throw.
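A sketch of that kind of boundary function (the plugin API here is hypothetical): exceptions must never cross the module boundary anyway, so noexcept just documents what the design already requires.

// Hypothetical entry point exported to plugins built with other compilers.
extern "C" int PluginProcess(const char* input) noexcept
{
    if (input == nullptr)
        return -1;
    try
    {
        // ... forward to C++ code that may throw internally ...
        return 0;
    }
    catch (...)
    {
        return -1;   // map any failure to an error code at the boundary
    }
}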
If in doubt, leave it out.
In your case, if in doubt, I'd suggest to leave it out. You can go a lot more wrong by using noexcept and inadvertently throwing out of the function and terminating the program. An exception is if you can make a hard guarantee that this ClearTheBin function is always going to be using only C, no operator new (or at least only nothrow versions), no dynamic_casts, no standard lib, etc. Omission seems like erring on the safe side, unless you can make this kind of hard interface-level guarantee for both present and future.
noexcept was never intended to mean that "nothing can go wrong". noexcept simply means that function does not throw C++ exceptions. It does not in any way imply that the function is somehow immune to other kinds of problems, like undefined behavior.
So, yes, if all your function does is calls C functions, then from the formal point of view declaring it noexcept is a perfectly reasonable thing to do.
If you personally want to develop your own convention, within which noexcept means something more, like a "100% reliable function" (assuming such a thing even exists), then you are free to do so. But that becomes a matter of your own convention, not a matter of the language specification.

Why noexcept is not enforced at compile time?

As you might know C++11 has noexcept keyword. Now ugly part about it is this:
Note that a noexcept specification on a function is not a compile-time
check; it is merely a method for a programmer to inform the compiler
whether or not a function should throw exceptions.
http://en.cppreference.com/w/cpp/language/noexcept_spec
So is this a design failure on the committee's part, or did they just leave it as an exercise for the compiler writers :) in the sense that decent compilers will enforce it while bad ones can still be compliant?
BTW, if you ask why there isn't a third option (aka "can't be done"), the reason is that I can easily think of a (slow) way to check whether a function can throw or not. The problem, of course, is if you limit the input to 5 and 7 (aka "I promise the file won't contain anything besides 5 and 7") and it only throws when you give it 33, but that is not a realistic problem IMHO.
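To make that last point concrete, here is a sketch (check() and the magic values are made up): whether the noexcept function below can actually throw depends entirely on the values passed at run time, which is exactly what a static checker cannot see.

#include <stdexcept>

int check(int value)             // throws only for one particular input
{
    if (value == 33)
        throw std::runtime_error("33 is not allowed");
    return value;
}

// The caller only ever passes 5 and 7, so the throw can never fire,
// but no compiler can prove that from check()'s declaration alone.
int use_5_and_7() noexcept       // a strict static checker would have to reject this
{
    return check(5) + check(7);
}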
The committee pretty clearly considered the possibility that code that (attempted to) throw an exception not allowed by an exception specification would be considered ill-formed, and rejected that idea. According to §15.4/11:
An implementation shall not reject an expression merely because when executed it throws or might throw an exception that the containing function does not allow. [ Example:
extern void f() throw(X, Y);

void g() throw(X) {
    f();   // OK
}
the call to f is well-formed even though when called, f might throw exception Y that g does not allow. —end example ]
Regardless of what prompted the decision, or what else it may have been, it seems pretty clear that this was not a result of accident or oversight.
As for why this decision was made, at least some goes back to interaction with other new features of C++11, such as move semantics.
Move semantics can make exception safety (especially the strong guarantee) much harder to enforce/provide. When you do copying, if something goes wrong, it's pretty easy to "roll back" the transaction -- destroy any copies you've made, release the memory, and the original remains intact. Only if/when the copy succeeds, you destroy the original.
With move semantics, this is harder -- if you get an exception in the middle of moving things, anything you've already moved needs to be moved back to where it was to restore the original to order -- but if the move constructor or move assignment operator can throw, you could get another exception in the process of trying to move things back to try to restore the original object.
Combine this with the fact that C++11 can/does generate move constructors and move assignment operators automatically for some types (though there is a long list of restrictions). These don't necessarily guarantee against throwing an exception. If you're explicitly writing a move constructor, you almost always want to ensure against it throwing, and that's usually even pretty easy to do (since you're normally "stealing" content, you're typically just copying a few pointers -- easy to do without exceptions). It can get a lot harder in a hurry for templates though, even for simple ones like std::pair. A pair of something that can be moved with something that needs to be copied becomes difficult to handle well.
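The "stealing content" case is indeed usually easy to mark noexcept by hand; a minimal sketch of such a hand-written move (Buffer is a made-up class):

#include <cstddef>
#include <utility>

class Buffer
{
    char*       data_ = nullptr;
    std::size_t size_ = 0;
public:
    explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}
    ~Buffer() { delete[] data_; }

    // Just steals two members; nothing in here can throw.
    Buffer(Buffer&& other) noexcept
        : data_(other.data_), size_(other.size_)
    {
        other.data_ = nullptr;
        other.size_ = 0;
    }

    Buffer& operator=(Buffer&& other) noexcept
    {
        std::swap(data_, other.data_);   // swapping pointers cannot throw
        std::swap(size_, other.size_);
        return *this;
    }
};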
That meant, if they'd decided to make nothrow (and/or throw()) enforced at compile time, some unknown (but probably pretty large) amount of code would have been completely broken -- code that had been working fine for years suddenly wouldn't even compile with the new compiler.
Along with this was the fact that, although they're not deprecated, dynamic exception specifications remain in the language, so they were going to end up enforcing at least some exception specifications at run-time anyway.
So, their choices were:
Break a lot of existing code
Restrict move semantics so they'd apply to far less code
Continue (as in C++03) to enforce exception specifications at run time.
I doubt anybody liked any of these choices, but the third apparently seemed the least bad.
One reason is simply that compile-time enforcement of exception specifications (of any flavor) is a pain in the ass. It means that if you add debugging code you may have to rewrite an entire hierarchy of exception specifications, even if the code you added won't throw exceptions. And when you're finished debugging you have to rewrite them again. If you like this kind of busywork you should be programming in Java.
The problem with compile-time checking: it's not really possible in any useful way.
See the next example:
#include <vector>

void foo(std::vector<int>& v) noexcept
{
    if (!v.empty())
        ++v.at(0);
}
Can this code throw?
Clearly not. Can we check automatically? Not really.
Java's way of doing things like this is to require the body to be wrapped in a try-catch block, but I don't think it is better than what we have now...
As I understand things (admittedly somewhat fuzzy), the entire idea of throw specifications was found to be a nightmare when it actually came time to try to use it in useful way.
Calling functions that don't specify what they do or do not throw must be considered to potentially throw anything at all! So if the compiler were to require that you neither throw nor call anything that might throw outside of the specification you provided, and actually enforce such a thing, your code could call almost nothing whatsoever; no library in existence would be of any use to you or anyone else trying to make any use of throw specifications.
And since it is impossible for a compiler to tell the difference between "this function may throw an X" and "the caller happens to be calling it in such a way that it will never throw anything at all", one would forever be hamstrung by this language "feature."
So... I believe that the only really useful thing to come of it was the idea of saying nothrow -- which indicates that it is safe to call from destructors, moves, swaps, and so on. You're making a notation that -- like const -- is more about giving your users an API contract than about making the compiler responsible for telling whether you violate that contract (as with most things in C/C++, the intelligence is assumed to be on the part of the programmer, not the nanny-compiler).
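One place where that contract concretely pays off (and that the standard library actually checks) is container reallocation: std::vector will typically move elements when growing only if their move constructor is declared noexcept, and will fall back to copying otherwise. A small sketch of the trait that drives that decision (SafeMove and RiskyMove are made-up types):

#include <type_traits>

struct SafeMove
{
    SafeMove(SafeMove&&) noexcept;       // declared non-throwing
    SafeMove(const SafeMove&);
};

struct RiskyMove
{
    RiskyMove(RiskyMove&&);              // may throw, as far as the compiler knows
    RiskyMove(const RiskyMove&);
};

static_assert(std::is_nothrow_move_constructible<SafeMove>::value,
              "vector growth can safely move SafeMove elements");
static_assert(!std::is_nothrow_move_constructible<RiskyMove>::value,
              "vector growth will copy RiskyMove elements instead");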

Questions on Exception Specification and Application Design

Although this topic has been extensively discussed on SO, I'd like to clarify a few things that are still not clear to me so, considering the following facts:
10 years ago, Herb Sutter was telling us to refrain from using this functionality.
Specifying the possible exceptions that a function / method may throw does not force the compiler to yell at you when you decide to change the function's body and throw a new type of exception, forgetting by mistake to change the exception specification in the function's declaration.
If you have a very high level function that calls several other high level functions, each of which runs tons of code to produce its results, then I can imagine the maintenance-from-hell nightmare of having to specify ALL the errors the first function may throw; that list would have to include all the exceptions the inner functions may throw, and so on, thus creating tight coupling between high and low level functions, which is quite undesirable. On the other hand, we derive all our exceptions from std::runtime_error, which we know is a good practice, and we could specify that the high level functions just throw std::runtime_error and be done with it. But wait a minute... Where do we actually catch the exceptions? Would it not be rather odd / nasty / bad to enclose a call to one of these high level functions in a try / catch block which catches a MyVerySpecific exception, when the high level function is supposed to throw only std::runtime_error? Would it be any good to catch specific exceptions in lower level functions, which are not able to do anything about them but pass them on in a more generic container, with more information appended to them? I certainly don't want to write try / catch blocks in every function that I write just to format exceptions. It would be like requiring every function to validate its parameters, which can drive people insane when they need to change something in a low level function.
Questions:
Do Herb Sutter's rants about exception specification still hold today? Has anything changed since then? I am mostly interested in pre-C++0x standards. If yes, I guess we can consider this topic closed.
Since it seems that the compiler mostly ignores these exception specifications, and when catching exceptions, in 99% of the cases one would use catch (const CustomException &ex), how would one specify that a function throws CustomException? throw(CustomException), throw(CustomException &), or throw(const CustomException &)? I have seen all variations, and, although I would go for the first one, do the others make any sense / add any benefits?
How would one actually use this functionality, and, at the same time, avoid the fallacies illustrated in the above 3rd fact?
EDIT: Suppose that we're building a library. How will its users know what exceptions to expect, if we don't use exception specification? They will certainly not see what functions will be called internally by the API methods...
1/ Do Herb Sutter's rants about exception specification still hold today? Has anything changed since then? I am mostly interested in pre-C++0x standards. If yes, I guess we can consider this topic closed.
Yes, they still hold.
Exceptions specifications are:
half-way implemented (function pointers don't specify exceptions, for example)
not checked at compile time, but lead to termination at run time!!
In general, I would be against exceptions specifications, because it causes a leak of implementation details. Look at the state of Java exceptions...
In C++ in particular? Exception specifications are like shooting yourself in the foot, since the tiniest error in documentation may lead to a std::terminate call. Note that almost all functions may throw a std::bad_alloc or a std::out_of_range, for example.
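A sketch of how small that "tiniest error" can be (first_char is a made-up function): the copy below may throw std::bad_alloc, which the specification does not mention, so instead of propagating to a caller the exception goes through std::unexpected() and kills the program.

#include <stdexcept>
#include <string>

// Pre-C++17 dynamic exception specification: only std::out_of_range is allowed.
char first_char(const std::string& s) throw(std::out_of_range)
{
    std::string copy = s;   // allocation can throw std::bad_alloc,
                            // which violates the specification
    return copy.at(0);      // std::out_of_range is the documented case
}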
Note: dynamic exception specifications were deprecated in C++11 and removed in C++17 (the empty form, throw(), survived until C++20); the noexcept specifier, available since C++11, should be used instead. noexcept is better supported (since C++17 it is even part of the function type, and thus of function pointers), but it still leads to termination at run time rather than errors at compile time.
2/ Since it seems that the compiler mostly ignores these exception specifications, and when catching exceptions, in 99% of the cases one would use catch (const CustomException &ex), how would one specify that a function throws CustomException? throw(CustomException), throw(CustomException &), or throw(const CustomException &)? I have seen all variations, and, although I would go for the first one, do the others make any sense / add any benefits?
The compiler does not ignore the exception specifications; it sets up very vigilant watchdogs (with axes) to make sure to kill your program in case you had missed something.
3/ How would one actually use this functionality, and, at the same time, avoid the fallacies illustrated in the above 3rd fact?
Your customers will appreciate it if this stays informal, so the best example is:
void func(); // throw CustomException
and this also lets you focus on the exceptions that matter, and lets "unimportant" exceptions slip through. If a consumer wants to catch them all? catch(std::exception const& e) works.
4/ EDIT: Suppose that we're building a library. How will its users know what exceptions to expect, if we don't use exception specification? They will certainly not see what functions will be called internally by the API methods...
Do they have to ?
Document what matters, std::exception or ... take care of the unexpected.
Do Herb Sutter's rants about exception specification still hold today? Has anything changed since then?
I wouldn't call that a rant. He just pointed out problems related to exception specifications.
Yes, it still holds. As explained in the text, if an unspecified exception is thrown, the program terminates, and that is not acceptable for 99% of applications.
how would one specify that a function throws CustomException?
class A
{
    // ...
    void foo() throw( CustomException );   // C++ uses throw(...), not Java's "throws"
};
How would one actually use this functionality, and, at the same time, avoid the fallacies illustrated in the above 3rd fact?
By looking at the function declaration, the user knows which exceptions can be thrown. The problem is that when a new exception needs to be thrown, all function declarations need to be changed.
Suppose that we're building a library. How will its users know what exceptions to expect, if we don't use exception specification?
By reading the documentation.

about throw() in C++

void MyFunction(int i) throw();
it just tells the compiler that the function does not throw any exceptions.
It can't make sure the function throws nothing, is that right?
So what's the use of throw()?
Is it redundant? Why was this idea proposed?
First of all, when the compiler works right, it is enforced -- but at run time, not compile time. A function with an empty exception specification will not throw an exception. If something happens that would let an exception escape from it, the runtime will instead call unexpected(), which (in turn) calls abort(). You can use set_unexpected to change what gets called, but about all that handler is allowed to do is add extra "stuff" (e.g. cleanup) before aborting the program -- it can't return to the original execution path.
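A sketch of that pre-C++17 machinery (set_unexpected and the unexpected handler), assuming a compiler that still enforces the specification:

#include <cstdio>
#include <cstdlib>
#include <exception>

void on_unexpected()
{
    std::fputs("exception specification violated\n", stderr);
    std::abort();          // an unexpected handler must not return normally
}

void risky() throw()       // empty specification: promises not to throw
{
    throw 1;               // breaks the promise at run time
}

int main()
{
    std::set_unexpected(on_unexpected);
    risky();               // the runtime calls on_unexpected(), which aborts
}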
That said, at least one major compiler (VC++) parses exception specifications, but does not enforce them, though it can use empty exception specifications to improve optimization a little. In this case, an exception specification that isn't followed can/does result in undefined behavior instead of necessarily aborting the program.
It can't make sure the function throw nothing, is that right?
You are almost there. It is an exception specification. It means that as an implementer you guarantee to your client(s) that this piece of code will not throw an exception. It does not, however, stop functions called within MyFunction from throwing; if you do not handle those exceptions, they will bubble up and affect your/your client's program in a way you did not intend. It does not even mean that you cannot have a throw expression inside.
It is best to avoid such a specification unless you are absolutely sure that your code will never throw -- which is hard to guarantee except for very basic functions; see the standard swap, pointer assignments, etc.
Is it redundant? Why this idea is proposed?
Not exactly. When properly used, it can be of help to the compiler for optimization purposes. See this article. This article explains the history behind no-throw well.
Digging a bit more I found this excellent article from the Boost documentation. A must read. Read about the exception guarantees part.
As you said, it just tells the compiler that the function does not throw any exceptions.
When the compiler expects possible exceptions, it often has to generate the code in some specific form, which makes it less efficient. It also might have to generate some additional "household" code for the sole purpose of handling exceptions when and if they are thrown.
When you tell the compiler that this function never throws anything, it makes it much easier for the compiler to recognize the situations when all these additional exception-related expenses are completely unnecessary, thus helping the compiler to generate more efficient code.
Note, that if at run time you actually try to throw something out of a function that is declared with throw() specification, the exception will not be allowed to leave the function. Instead a so called unexpected exception handler will be invoked, which by default will terminate the program. So, in that sense it is actually enforced that a throw() function does not throw anything.
P.S. Since exception specifications are mostly affecting the run-time behavior of the program, in general they might not have any compile time effect. However, this specific exception specification - the empty one throw() - is actually recognized by some compilers at compile time and does indeed lead to generation of more efficient code. Some people (me included) hold the opinion that the empty exception specification is the only one that is really worth using in the real-life code.
To be honest, exception specifications in the real world have not turned out to be as useful as envisioned by the original designers. Also, the differences between C++'s run-time-checked exception specifications and Java's compile-time-checked exception specifications have caused a lot of problems.
The currently accepted norms for exception specifications are:
Don't use them.
Unless they are the empty form.
Make sure that if you use the empty form, you guarantee that you actually catch all exceptions.
The main problem is that if you have a method with a throw spec, and then something it uses changes underneath it and starts throwing new types of exception, there is no warning or compile error in the code. But at run time, if the exception occurs (the new one that is not in the throw spec), your code will terminate(). To me, termination is a last resort and should never happen in a well-formed program; I would much rather unwind the stack all the way back to main with an exception, allowing a slightly cleaner exit.