error: ISO C++17 does not allow dynamic exception specifications [duplicate] - c++

Are dynamic exception specifications invalid in C++17? Like this:
void f() throw(int);

General C++ guidelines discourage the use of exception specifications with any version of C++, and the new standard has removed this feature.
E.30: Don't use exception specifications
Reason
Exception specifications make error handling brittle, impose a
run-time cost, and have been removed from the C++ standard.
Example
int use(int arg)
    throw(X, Y)
{
    // ...
    auto x = f(arg);
    // ...
}
If f() throws an exception different from X and Y, the unexpected
handler is invoked, which by default terminates. That's OK, but say
that we have checked that this cannot happen and f is changed to
throw a new exception Z; we now have a crash on our hands unless we
change use() (and re-test everything). The snag is that f() may be
in a library we do not control and the new exception is not anything
that use() can do anything about or is in any way interested in. We
can change use() to pass Z through, but now use()'s callers
probably need to be modified. This quickly becomes unmanageable.
Alternatively, we can add a try-catch to use() to map Z into
an acceptable exception. This too, quickly becomes unmanageable. Note
that changes to the set of exceptions often happen at the lowest
level of a system (e.g., because of changes to a network library or
some middleware), so changes "bubble up" through long call chains. In
a large code base, this could mean that nobody could update to a new
version of a library until the last user was modified. If use() is
part of a library, it may not be possible to update it because a
change could affect unknown clients.
The policy of letting exceptions propagate until they reach a function
that potentially can handle it has proven itself over the years.
Note
No. This would not be any better had exception specifications been
statically enforced. For example, see Stroustrup94.
Note
If no exception may be thrown, use noexcept or its equivalent throw().

They are officially invalid in C++17. However, Visual C++ 2017 with the C++ Language Standard property set to ISO C++17 still allows them. Setting the warning level to 3 or higher (Properties/General/Warning Level) gives the warning:
warning C4290: C++ exception specification ignored except to indicate a function is not __declspec(nothrow)
Note that throw() is still legal in C++17 and is equivalent to noexcept(true); it was removed entirely in C++20.
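By way of illustration of the migration (using the declaration from the question; the second function name is made up for the sketch), the C++17-friendly spellings of the two cases are:
#include <stdexcept>

// C++17: the dynamic specification is simply dropped; any exception may propagate.
void f() { throw std::runtime_error("something went wrong"); }

// If the function is guaranteed not to throw, mark it noexcept instead;
// if an exception escapes anyway, std::terminate() is called.
void g() noexcept { /* nothing here may throw */ }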

Related

Noexcept specifier: why no compile time checks? [duplicate]

I am curious about the rationale behind noexcept in the C++0x FCD. throw(X) was deprecated, but noexcept seems to do the same thing. Is there a reason that noexcept isn't checked at compile time? It seems it would be better if such functions were statically checked to ensure they only call potentially-throwing functions within a try block.
Basically, it's a linker problem: the standards committee was reluctant to break the ABI. (If it were up to me, I would do so; all it really requires is library recompilation. We have this situation already with thread enablement, and it's manageable.)
Consider how it would work out. Suppose the requirements were
every destructor is implicitly noexcept(true)
Arguably, this should be a strict requirement. Throwing destructors are always a bug.
every extern "C" is implicitly noexcept(true)
Same argument again: exceptions in C-land are always a bug.
every other function is implicitly noexcept(false) unless otherwise specified
a noexcept(true) function must wrap all its noexcept(false) calls in try{}catch(...){}
By analogy, a const method cannot call a non-const method.
This attribute must manifest as a distinct type in overload resolution, function pointer compatibility, etc.
Sounds reasonable, right?
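For what it's worth, C++17 did eventually make noexcept part of the function type, so function-pointer compatibility behaves roughly the way the last rule above suggests. A minimal sketch (the function names are invented):
void may_throw() {}
void never_throws() noexcept {}

void (*p1)() = never_throws;            // OK: dropping the no-throw guarantee is safe
void (*p2)() noexcept = never_throws;   // OK: exact match
// void (*p3)() noexcept = may_throw;   // error since C++17: would invent a guarantee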
To implement this, the linker needs to distinguish between noexcept(true) and noexcept(false) versions of functions, much as you can overload const and non-const versions of member functions.
So what does this mean for name-mangling? To be backwards-compatible with existing object code, we would require that all existing names are interpreted as noexcept(false), with extra mangling for the noexcept(true) versions.
This would imply we cannot link against existing destructors unless the header is modified to tag them as noexcept(false);
this would break backwards compatibility,
and this arguably ought to be impossible anyway (see point 1).
I spoke to a standards committee member in person about this, and he said that this was a rushed decision, motivated mainly by a constraint on move operations in containers (you might otherwise end up with missing items after a throw, which violates the basic guarantee). Mind you, this is a man whose stated design philosophy is that fault-intolerant code is good. Draw your own conclusions.
Like I said, I would have broken the ABI in preference to breaking the language. noexcept is only a marginal improvement on the old way. Static checking is always better.
If I remember correctly, throw was deprecated because there is no way to specify all the exceptions a template function can throw. Even for non-template functions, you may need to relax the throw clause later, for example because you have added some tracing code.
On the other hand, the compiler can optimize code that doesn't throw exceptions. See "The Debate on noexcept, Part I" (along with parts II and III) for a detailed discussion. The main point seems to be:
The vast experience of the last two decades shows that in practice, only two forms of exception specifications are useful:
The lack of an overt exception specification, which designates a function that can throw any type of exception:
int func(); //might throw any exception
A function that never throws. Such a function can be indicated by a throw() specification:
int add(int, int) throw(); //should never throw any exception
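One concrete example of the optimization point: std::vector only moves its elements during reallocation when the element's move constructor is noexcept; otherwise it copies them to preserve the strong exception guarantee. A small sketch (the struct names are made up):
#include <vector>

struct Movable {
    Movable() = default;
    Movable(const Movable&) = default;
    Movable(Movable&&) noexcept = default;    // noexcept: vector will move on reallocation
};

struct MaybeThrows {
    MaybeThrows() = default;
    MaybeThrows(const MaybeThrows&) = default;
    MaybeThrows(MaybeThrows&&) {}             // not noexcept: vector falls back to copying
};

int main() {
    std::vector<Movable> a(10);
    a.reserve(a.capacity() + 1);   // elements are moved
    std::vector<MaybeThrows> b(10);
    b.reserve(b.capacity() + 1);   // elements are copied (strong guarantee preserved)
}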
Note that a noexcept function may still contain a dynamic_cast that throws, or a typeid applied to a dereferenced null pointer; those cases can only be detected at runtime. Other checks could indeed be done at compile time.
As other answers have stated, constructs such as dynamic_cast can possibly throw but can only be checked at runtime, so the compiler can't tell for certain at compile time.
This means that at compile time the compiler can either let them go (i.e., not check), warn, or reject outright (which wouldn't be useful). That leaves warning as the only reasonable thing for the compiler to do.
But that's still not really useful - suppose you have a dynamic_cast which, for whatever reason, you know will never fail and throw an exception because of how your program is written. The compiler probably doesn't know that and issues a warning, which becomes noise, which probably just gets disabled by the programmer for being useless, negating the point of the warning.
A similar issue arises if you have a function which is not marked noexcept (i.e., can throw exceptions) that you want to call from many functions, some noexcept, some not. You know the function will never throw in the circumstances under which the noexcept functions call it, but again the compiler doesn't: more useless warnings.
So there's no useful way for the compiler to enforce this at compile time. This is more in the realm of static analysis, which tends to be pickier and to warn about this kind of thing.
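To make that concrete: a dynamic_cast to a reference type throws std::bad_cast when it fails, and whether it fails depends on a runtime value the compiler cannot see. A sketch (the types here are made up):
struct Base    { virtual ~Base() = default; };
struct Derived : Base {};
struct Other   : Base {};

// Declared noexcept, yet the cast can throw std::bad_cast at runtime;
// if it does, the exception escapes and std::terminate() is called.
Derived& as_derived(Base& b) noexcept {
    return dynamic_cast<Derived&>(b);
}

int main() {
    Derived d;
    as_derived(d);   // fine: d really is a Derived
    // Passing an Other object instead would throw std::bad_cast,
    // which would escape the noexcept function and call std::terminate().
}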
Consider a function
void fn() noexcept
{
    foo();
    bar();
}
Can you statically check if it's correct? You would have to know whether foo or bar are going to throw exceptions. You could force all function calls to be inside a try{} block, something like this:
void fun() noexcept
{
    try
    {
        foo();
        bar();
    }
    catch (ExceptionType&)
    {
    }
}
But that is wrong. We have no way of knowing that foo and bar will only throw that type of exception. In order to make sure we catch everything, we'd need to use catch (...). And if you catch everything, what are you going to do with the errors you catch? If an unexpected error arises here, the only thing to do is abort the program. But that is essentially what the runtime check provided by default will do.
In short, providing enough details to prove that a given function will never throw the incorrect exceptions would produce verbose code in cases where the compiler can't be sure whether or not the function will throw the wrong type. The usefulness of that static proof probably isn't worth the effort.
There is some overlap in the functionality of noexcept and throw(), but they're really coming from opposite directions.
throw() was about correctness, and nothing else: a way to specify the behavior if an unexpected exception was thrown.
The primary purpose of noexcept is optimization: It allows/encourages the compiler to optimize around the assumption that no exceptions will be thrown.
But yes, in practice, they're not far from being two different names for the same thing. The motivations behind them are different though.
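One thing noexcept can do that throw() never could is be conditional: the noexcept operator lets a template propagate the guarantee of the operations it actually performs. A small sketch:
#include <utility>

template <typename T>
void swap_pair(T& a, T& b)
    noexcept(noexcept(T(std::move(a))) && noexcept(a = std::move(b)))
{
    T tmp(std::move(a));
    a = std::move(b);
    b = std::move(tmp);
}

int main() {
    int x = 1, y = 2;
    swap_pair(x, y);
    static_assert(noexcept(swap_pair(x, y)), "swapping ints never throws");
}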

C++/GCC: How to detect unhandled exceptions in compile time

Introduction:
In Java, if you do not catch a checked exception, your code does not even compile; the compiler reports an error for the unhandled exception.
Question:
Is there a way to tell GCC to be as "strict" as Java in this case, and to raise an error or at least a warning on an unhandled exception?
If not - are there IDEs (for Unix, please) that can highlight such cases as a warning?
It is not possible in C++. An exception specification is part of a function declaration but not part of its type. Any indirect call (via a pointer or a virtual call) completely wipes out any information about exceptions.
Exception specifications are deprecated anyway in C++11 in favour of noexcept, so it is unlikely any compiler would bother to enhance this language feature.
The only guarantee you can put on a C++ function is that it never throws an exception at all:
void f() noexcept;
However, this will terminate the program at runtime when an exception is thrown. It's not verified at compile-time.
If you want to guarantee that an error is handled, the closest you can get is returning a value of a type that wraps boost::variant<OK, Error> with a member function that takes two callbacks: a callback for the OK case and one for the Error case.
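A rough sketch of that idea, using std::variant rather than boost::variant (all names here are invented for illustration):
#include <iostream>
#include <string>
#include <variant>

struct Error { std::string message; };

template <typename T>
class Result {
    std::variant<T, Error> value_;
public:
    Result(T v) : value_(std::move(v)) {}
    Result(Error e) : value_(std::move(e)) {}

    // The caller must supply both handlers, so the error case cannot be ignored.
    template <typename OnOk, typename OnError>
    void match(OnOk on_ok, OnError on_error) const {
        if (auto* v = std::get_if<T>(&value_)) on_ok(*v);
        else on_error(std::get<Error>(value_));
    }
};

Result<int> parse_int(const std::string& s) {
    try { return std::stoi(s); }
    catch (...) { return Error{"not a number: " + s}; }
}

int main() {
    parse_int("42").match(
        [](int n)          { std::cout << "ok: " << n << '\n'; },
        [](const Error& e) { std::cout << "error: " << e.message << '\n'; });
}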
You can ALWAYS use:
#include <iostream>

int main()
{
    try {
        // ... your usual main ...
    }
    catch (...)
    {
        std::cerr << "Unhandled exception caught" << std::endl;
    }
}
However, that is a fairly poor solution.
Unfortunately, the nature of C++ makes it very hard to catch the situation where something throws an exception and it's not handled, since just about everything has the potential to throw exceptions. I can only think of code review - perhaps code-analysis tools, such as those built around Clang, will have the capability of doing this, but it probably won't be 100% accurate. In fact, I'm not even sure that the Clang Analyzer fully understands throw/try/catch currently, as it seems not to catch some fairly fundamental errors: http://clang-analyzer.llvm.org/potential_checkers.html (see the "exceptions" heading).
First, your statement concerning Java is false; only certain
types of exceptions prevent the code from compiling. And for
the most part, those types of exceptions correspond to things
that are better handled by return codes. Exceptions are normally
only an appropriate solution when propagating an error through
a large number of functions, and you don't want to have to add
exception specifications for all of those functions.
That's really why Java makes its distinctions: exceptions that
derive from java.lang.Error should usually be crashes
(assertion failures and the like in C++); and exceptions that
derive from java.lang.RuntimeException should be exceptions in
C++. Neither is checked in Java, because it isn't
reasonable to have every function declare that it might throw
one of them.
As for the rest, the exceptions which you want to catch
immediately in the calling code are generally best handled
by return codes, rather than exceptions; Java may use exceptions
here because it has no out arguments, which can make using
return codes more awkward. Of course, in C++, you can also
silently ignore return codes, which is a drawback (but that
is due to historical reasons, etc.). But the real issue is the contract,
which is far more complex than function f might throw/return x;
it's more along the lines of "function f will throw/return x,
if condition c is met". And I know of no language which has
a means of enforcing that. In C++ (and for checked exceptions
in Java), exception specifications are more along the lines of
"function f will not throw anything but x". Which is generally
not very useful, unless "x" means all exceptions. In order to
write really robust code, you need a few functions which are
guaranteed never to throw. Interestingly enough, you can
specify this in C++, both pre-C++11 (throw()) and post
(noexcept); you cannot in Java, because you can't specify that
a function won't throw a java.lang.RuntimeError.
(Or a java.lang.Error, but that's less of an issue, since if
you get one of those, your application is hosed anyway. Just
how are you expected to recover from
java.lang.VirtualMachineError? And of course, you can't
really expect to be able to recover from a segmentation fault in
C++ either. Although... java.lang.OutOfMemoryError derives
from java.lang.VirtualMachineError; although not easy, and not
always applicable, I've written C++ code which successfully
recovered from std::bad_alloc.)
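For what it's worth, such a recovery can look roughly like this toy sketch (real recovery usually means shedding load or retrying with a smaller working set):
#include <iostream>
#include <new>

int main() {
    char* p = nullptr;
    try {
        p = new char[1ull << 62];   // deliberately absurd request, expected to fail
    } catch (const std::bad_alloc&) {
        std::cerr << "allocation failed, falling back to a smaller buffer\n";
        p = new char[1024];         // degrade gracefully instead of crashing
    }
    delete[] p;
}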

Why are C++ exception specifications not checked at compile-time?

I just read that in the C++11 standard revision, exception specifications were deprecated. I previously thought specifying what your functions may throw is good practice, but apparently, not so.
After reading Herb Sutter's well-cited article, I cannot help but wonder: why on earth are exception specifications implemented the way they are, and why has the committee decided to deprecate them instead of having them checked at compile-time? Why would a compiler even allow an exception to be thrown which doesn't appear in the function definition? To me, this all sounds like saying "You probably shouldn't specify your function return type, because when you specify int f(), but return 3.5; inside of it, your program will likely crash." (i.e., where is the conceptual difference from strong typing?)
(For the lack of exception specification support in typedefs, given that template syntax is probably Turing-complete, implementing this sounds easy enough.)
The original reason was that it was deemed impossible to
reliably check given the body of existing code, and the fact
that no specifier means anything can throw. This means that
if static checking were in force, the following code wouldn't
compile:
#include <cmath>

double
safeSquareRoot( double d ) throw()
{
    // sqrt() has no exception specification, so a static checker
    // would have to assume it can throw and reject this function.
    return d > 0.0 ? sqrt( d ) : 0.0;
}
Also, the purpose of exceptions is to report errors over
a great distance, which means that the intermediate functions
shouldn't know what the functions they call might throw.
Requiring exception specifiers on them would break
encapsulation.
The only real case where a function needs to know about the
exceptions that might occur is to know what exceptions cannot
occur. In particular, it is impossible to write thread safe code
unless you can be guaranteed that some functions will never
throw. Even here, static checking isn't acceptable, for the
reasons explained above, so the exception specification is
designed to work more like an assertion that you cannot
deactivate: when you write throw(), you get more or less the
equivalent of an assertion failure if the function is terminated
by an exception.
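The classic example of needing that guarantee is the commit step of copy-and-swap assignment: all the work that can fail happens while making the copy, and the final swap must not be able to throw. A hedged sketch (Widget is a made-up class; pre-C++11 code would spell the guarantee throw() instead of noexcept):
#include <utility>
#include <vector>

class Widget {
    std::vector<int> data_;
public:
    friend void swap(Widget& a, Widget& b) noexcept {
        using std::swap;
        swap(a.data_, b.data_);          // swapping vectors never throws
    }
    Widget& operator=(Widget other) {    // the copy (which may throw) happens here...
        swap(*this, other);              // ...the commit step cannot fail
        return *this;
    }
};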
The situation in Java is somewhat different. In Java,
there are no real out parameters, which means that you can't
use return codes if the function also has a return value. The
result is that exceptions are used in a lot of cases where
a return code would be preferable. And these, you do have to
know about, and handle immediately. For things that should
really be exceptions, Java has java.lang.RuntimeException
(which isn't checked, statically or otherwise). And it has no
way of saying that a function cannot ever throw an exception; it
also uses unchecked exceptions (called Error) in cases where
aborting the program would be more appropriate.
If the function f() throw(int) called the function g() throw(int, double), what would happen?
A compile time check would prevent your function from calling any other function with a less strict throw specifier which would be a huge pain.
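In pre-C++17 syntax, the statically checked version would force f() to translate or swallow the extra exception type itself, roughly like this (made-up functions, and g() is only declared, not defined):
void g() throw(int, double);   // may throw either type

void f() throw(int) {
    try {
        g();
    } catch (double d) {
        throw static_cast<int>(d);   // f() must not let a double escape
    }
}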

Why aren't exceptions in C++ checked by the compiler?

C++ provides a syntax for checked exceptions, for example:
void G() throw(Exception);
void f() throw();
However, the Visual C++ compiler doesn't check them; the throw flag is simply ignored. In my opinion, this renders the exception feature unusable. So my question is: is there a way to make the compiler check whether exceptions are correctly caught/rethrown? For example a Visual C++ plugin or a different C++ compiler.
PS. I want the compiler to check whether exceptions are correctly caught, otherwise you end up in a situation where you have to put a catch around every single function call you make, even if they explicitly state they won't throw anything.
Update: the Visual C++ compiler does show a warning when throwing in a function marked with throw(). This is great, but regrettably, the warning doesn't show up when you call a subroutine that might throw. For example:
void f() throw(int) { throw int(13); }
void h() throw() { f(); } // no warning here!
What's funny is that Java has checked exceptions, and Java programmers hate those too.
Exception specifications in C++ are useless for 3 reasons:
1. C++ exception specifications inhibit optimization.
With the possible exception of throw(), compilers insert extra code to check that, when you throw an exception, it matches the exception specifications of the functions it passes through during stack unwinding. Way to make your program slower.
2. C++ exception specifications are not compiler-enforced.
As far as your compiler is concerned, the following is syntactically correct:
void AStupidFunction() throw()
{
    throw 42;
}
What's worse, nothing useful happens if you violate an exception specification. Your program just terminates!
3. C++ exception specifications are part of a function's signature.
If you have a base class with a virtual function and try to override it, the override's exception specification must be at least as restrictive as the base's. So, you'd better plan ahead, and it's still a pain.
struct A
{
    virtual int value() const throw() { return 10; }
};

struct B : public A
{
    virtual int value() const { return functionThatCanThrow(); } // ERROR! looser specification than A::value
};
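The version that does compile has to repeat an equally strict specification in the override, which in turn means it cannot simply call something that may throw; B rewritten so that it builds:
struct B : public A
{
    virtual int value() const throw() { return 20; }   // OK: no looser than A::value()
};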
Exception specifications give you these problems, and the gain for using them is minimal. In contrast, if you avoid exception specifications altogether, coding is easier and you avoid this stuff.
Exception specifications are pretty useless in C++.
It's not enforced that no other exceptions will be thrown; it's merely guaranteed that the global function unexpected() will be called if one is (and that handler can be replaced).
Using exception specifications mainly boils down to deluding yourself (or your peers) into some false sense of security. Better to simply not bother.
Have a look at this:
http://www.gotw.ca/publications/mill22.htm
Basically, exception specifications are unworkable/unusable, but that doesn't make exceptions unworkable.
As for your question, there is no way to get the compiler to check that every type thrown is caught somewhere higher in the code; I expect separate compilation units make this difficult, and it's impossible to do for code intended to be used in a library (where the top level is not available at compile time). If you want to be sure everything is caught, then stick a catch(...) at the very top of your code.
Because the standard says so. The exception declaration doesn't mean that no other exception will be thrown. It means that if an undeclared exception is thrown, a special global function called unexpected() will be called, which by default terminates the program. Generally, declaring exceptions on functions is discouraged (except perhaps for the empty exception list), as the standard behaviour is not very helpful.
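Before C++17 removed this machinery, that handler could be replaced with std::set_unexpected; a sketch of that behaviour (this needs a pre-C++17 compiler, since it uses a dynamic exception specification):
#include <cstdlib>
#include <exception>
#include <iostream>

void on_unexpected() {
    std::cerr << "an exception violated a throw() specification\n";
    std::abort();                 // the handler must not return normally
}

void f() throw(int) {
    throw "not an int";           // does not match the specification
}

int main() {
    std::set_unexpected(on_unexpected);
    f();                          // runtime calls unexpected() -> on_unexpected()
}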
To detect prior to runtime cases such as ...
extern void f() throw (class Mystery);

void g() throw() {
    f();
}
... you need static analysis. Yes, the compiler already does plenty of static analysis, but because the standard's rule is "call std::unexpected() if the thrown exception doesn't match," and it is perfectly legal to write a routine that throws an object that does not match the specifier, the compiler implementers neither warn nor remark.
Static analysis tools that claim to provide this kind of warning include Gimpel Software's lint for C++ ...
1560 Uncaught exception 'Name' not on throw-list for function 'Symbol'
and, according to this answer to a prior question, QA C++.
I cannot check this for lack of an MSVC installation, but are you really sure the compiler ignores the throw() specification?
This MSDN page suggests that Microsoft is aware of throw() and expects their compiler to handle it correctly. Well, almost; see the note about how they depart from the ANSI/ISO standard in some details.
Edit: In practice, though, I agree with Patrick: Exception specifications are mostly useless.