Focusing on Visual C++, have you ever experienced significant performance gains in C++ code using throw() (i.e. __declspec(nothrow)) non-throwing specification?
Does it really help the optimizer?
Are there any benchmarks showing performance gains?
I found different (opposite) advice on the Internet:
The Boost exception-specification rationale argues against throw(), whereas Larry Osterman seems to be in favor of it in his blog post "Why add a throw() to your methods?"
(I'd like to clarify that I'm interested in VC++ specific code; I know that in GCC the throw() specification can actually be a "pessimization" due to run-time checks.)
P.S. Reading the ATL headers, I found that throw() is used pervasively; moreover, I found a convenient C++ RAII unique_handle class in this MSDN article that uses the throw() specification as well.
The MSVC compiler treats it as an optimization hint, yes.
Boost has to be cross-platform, so they have to go for something that's safe and efficient on a variety of compilers. As the Boost docs say, some compilers might generate slower code when throw() is specified, and in many cases a compiler can deduce that no exception is thrown regardless of whether there is a throw() specification, so for Boost the safest approach is to never use throw specifications.
But if you're targeting MSVC specifically, then throw() effectively tells the compiler not to generate exception-handling code for the function, which may give a speedup in cases where the function was too complex for the compiler to determine that no exception could be thrown.
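To make that concrete, here is a minimal sketch (external_work and hot_path are hypothetical names, not from any real API) of the kind of function where the hint can matter:

// Sketch: external_work() is an opaque function whose body the
// compiler cannot see, so it must be assumed to potentially throw.
void external_work();

struct Tracer
{
    ~Tracer();  // non-trivial destructor that must run during unwinding
};

// Without throw(), MSVC has to keep exception-handling state so that
// 't' is destroyed if external_work() throws; with throw(), it may
// omit that bookkeeping.
void hot_path() throw()
{
    Tracer t;
    external_work();
}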
The main problem with throw() is that code inside a function marked throw() can still throw.
For example, this compiles without complaint:
#include <iostream>

void foo() throw()
{
    throw "haha\n";
}

int main()
{
    try {
        foo();
    }
    catch (const char* s) {
        std::cout << s;
    }
    return 0;
}
Note, however, that foo will not actually propagate the exception to main, and you will not catch it there (as you would if you removed the throw() specifier). Instead, the compiler wraps the function body in an implicit try/catch block; when an exception escapes, it is handed to the global unexpected handler, which by default means your program crashes.
Note that the compiler has to wrap the function body in such a try/catch block unless it can prove that the inner code cannot possibly throw.
As a result, some optimization is possible at the call site of foo, but things get more complex inside foo itself.
EDIT:
Things are different with __declspec(nothrow). As Microsoft says,
This attribute tells the compiler that the declared function and the
functions it calls never throw an exception.
This means the compiler can omit the try/catch wrapper code.
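For example, a minimal sketch of the declaration syntax (fast_path is a hypothetical name):

// __declspec(nothrow) promises that neither this function nor
// anything it calls will throw, so no implicit try/catch wrapper
// is needed.
void __declspec(nothrow) fast_path();

// The same promise expressed with the throw() specification:
void fast_path_alternative() throw();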
EDIT2
Actually, Microsoft deviates from the standard behavior and does not generate the wrapper for throw(). In that case, you can indeed use throw() to improve performance.
I add it, but it's not to help the optimizer; it's to help me write more correct code.
class X
{
public:
    void swap(X& rhs) throw(); // swap had better never throw;
                               // if it does, something else is
                               // much more seriously wrong
};
Related
I am curious about the rationale behind noexcept in the C++0x FCD. throw(X) was deprecated, but noexcept seems to do the same thing. Is there a reason that noexcept isn't checked at compile time? It seems it would be better if these functions were statically checked to call potentially-throwing functions only within a try block.
Basically, it's a linker problem: the standards committee was reluctant to break the ABI. (If it were up to me, I would have done so; all it really requires is library recompilation. We already have this situation with thread enablement, and it's manageable.)
Consider how it would work out. Suppose the requirements were
every destructor is implicitly noexcept(true)
Arguably, this should be a strict requirement. Throwing destructors are always a bug.
every extern "C" is implicitly noexcept(true)
Same argument again: exceptions in C-land are always a bug.
every other function is implicitly noexcept(false) unless otherwise specified
a noexcept(true) function must wrap all its noexcept(false) calls in try{}catch(...){} (see the sketch after this list)
By analogy, a const method cannot call a non-const method.
This attribute must manifest as a distinct type in overload resolution, function pointer compatibility, etc.
Sounds reasonable, right?
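Here is a sketch of what statically checked code would have to look like under those rules (foo is hypothetical):

void foo();  // implicitly noexcept(false)

// Under the proposed rules, a noexcept(true) function could call a
// noexcept(false) function only inside a try/catch(...) block:
void checked() noexcept(true)
{
    try {
        foo();
    }
    catch (...) {
        // handle the error here; nothing may escape
    }
}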
To implement this, the linker needs to distinguish between noexcept(true) and noexcept(false) versions of functions, much as you can overload const and non-const versions of member functions.
So what does this mean for name mangling? To be backwards-compatible with existing object code, we would have to interpret all existing names as noexcept(false), with extra mangling for the noexcept(true) versions.
This would imply we could not link against existing destructors unless the header were modified to tag them as noexcept(false):
this would break backwards compatibility,
and it arguably ought to be impossible; see point 1.
I spoke to a standards committee member in person about this, and he said it was a rushed decision, motivated mainly by a constraint on move operations in containers (you might otherwise end up with missing items after a throw, which violates the basic guarantee). Mind you, this is a man whose stated design philosophy is that fault-intolerant code is good. Draw your own conclusions.
Like I said, I would have broken the ABI in preference to breaking the language. noexcept is only a marginal improvement on the old way. Static checking is always better.
If I remember correctly, throw specifications were deprecated because there is no way to specify all the exceptions a template function can throw. And even for non-template functions, you would have to update the throw clause whenever you add something as incidental as a trace statement.
On the other hand the compiler can optimize code that doesn't throw exceptions. See "The Debate on noexcept, Part I" (along with parts II and III) for a detailed discussion. The main point seems to be:
The vast experience of the last two decades shows that in practice, only two forms of exception specifications are useful:
The lack of an overt exception specification, which designates a function that can throw any type of exception:
int func(); //might throw any exception
A function that never throws. Such a function can be indicated by a throw() specification:
int add(int, int) throw(); //should never throw any exception
Note that noexcept has to account for an exception thrown by a failing dynamic_cast, or by typeid applied to a dereferenced null pointer, which can only be detected at runtime. Other tests can indeed be done at compile time.
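A minimal sketch of the runtime case (Base, Derived, and as_derived are made up for illustration):

struct Base { virtual ~Base() {} };
struct Derived : Base {};

// Whether this dynamic_cast succeeds depends on the dynamic type of
// 'b', which is known only at runtime. On failure it throws
// std::bad_cast, and the noexcept specification then triggers
// std::terminate().
Derived& as_derived(Base& b) noexcept
{
    return dynamic_cast<Derived&>(b);
}

int main()
{
    Base b;
    as_derived(b);  // terminates: b is not a Derived
}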
As other answers have stated, constructs such as dynamic_cast can throw but can only be checked at runtime, so the compiler can't tell for certain at compile time.
This means that at compile time the compiler can either let them pass (i.e. not check them), warn, or reject them outright (which wouldn't be useful). That leaves warning as the only reasonable thing for the compiler to do.
But that's still not really useful. Suppose you have a dynamic_cast which, for whatever reason, you know will never fail and throw, because of how your program is written. The compiler probably doesn't know that and issues a warning, which becomes noise, which probably just gets disabled by the programmer for being useless, negating the point of the warning.
A similar issue arises if you have a function not marked noexcept (i.e. one that can throw) which you want to call from many functions, some noexcept, some not. You know the function will never throw under the circumstances in which the noexcept functions call it, but again the compiler doesn't: more useless warnings.
So there's no useful way for the compiler to enforce this at compile time. This is more in the realm of static analysis, which tends to be pickier and to warn about this kind of thing.
Consider a function
void fn() noexcept
{
foo();
bar();
}
Can you statically check whether it's correct? You would have to know whether foo or bar can throw exceptions. You could force all function calls to be inside a try block, something like this:
void fun() noexcept
{
try
{
foo();
bar();
}
catch(ExceptionType &)
{
}
}
But that is wrong. We have no way of knowing that foo and bar will only throw that type of exception; to be sure of catching everything, we would need catch(...). And if you catch everything, what are you going to do with the errors you catch? If an unexpected error arises here, the only sensible thing is to abort the program, but that is essentially what the default runtime check already does.
In short, providing enough detail to prove that a given function will never throw the wrong exceptions would produce verbose code wherever the compiler can't be sure whether the function might throw the wrong type. The usefulness of that static proof probably isn't worth the effort.
There is some overlap in the functionality of noexcept and throw(), but they're really coming from opposite directions.
throw() was about correctness, and nothing else: a way to specify the behavior if an unexpected exception was thrown.
The primary purpose of noexcept is optimization: It allows/encourages the compiler to optimize around the assumption that no exceptions will be thrown.
But yes, in practice, they're not far from being two different names for the same thing. The motivations behind them are different though.
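The optimization motivation is easiest to see with the standard containers; here is a minimal sketch (Widget is hypothetical) of the move-in-containers case mentioned elsewhere in this thread:

#include <vector>

struct Widget
{
    Widget() {}
    Widget(const Widget&) { /* copying may throw */ }
    Widget(Widget&&) noexcept { /* moving never throws */ }
};

int main()
{
    std::vector<Widget> v(3);
    // Because Widget's move constructor is noexcept, reallocation can
    // move the elements instead of copying them and still offer the
    // strong exception guarantee (std::vector decides this via
    // std::move_if_noexcept).
    v.resize(100);
}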
I just read that in the C++11 standard revision, exception specifications were deprecated. I had previously thought that specifying what your functions may throw is good practice, but apparently not so.
After reading Herb Sutter's well-cited article, I cannot help but wonder: why on earth are exception specifications implemented the way they are, and why has the committee decided to deprecate them instead of having them checked at compile time? Why would a compiler even allow an exception to be thrown that doesn't appear in the function declaration? To me, this all sounds like saying "You probably shouldn't specify your function's return type, because when you specify int f() but return 3.5; inside of it, your program will likely crash." (i.e., where is the conceptual difference from strong typing?)
(As for the lack of exception-specification support in typedefs: given that template syntax is probably Turing-complete, implementing this sounds easy enough.)
The original reason was that it was deemed impossible to check reliably, given the body of existing code and the fact that no specifier means anything can throw. That means that if static checking were in force, the following code wouldn't compile:
#include <cmath>

double safeSquareRoot( double d ) throw()
{
    return d > 0.0 ? std::sqrt( d ) : 0.0;
}
Also, the purpose of exceptions is to report errors over a great distance, which means that intermediate functions shouldn't have to know what the functions they call might throw. Requiring exception specifiers on them would break encapsulation.
The only real case where a function needs to know about the exceptions that might occur is to know which exceptions cannot occur. In particular, it is impossible to write thread-safe code unless you can be guaranteed that some functions will never throw. Even here, static checking isn't acceptable, for the reasons explained above, so the exception specification is designed to work more like an assertion that you cannot deactivate: when you write throw(), you get more or less the equivalent of an assertion failure if the function is terminated by an exception.
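A sketch of that assertion-like behavior under the standard semantics (note that MSVC, as discussed above, does not generate this check):

void mustNotThrow() throw()
{
    throw 42;  // violates the specification at runtime
}

int main()
{
    mustNotThrow();  // on a conforming implementation this never
                     // returns normally: std::unexpected() runs and,
                     // by default, the program is terminated
}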
The situation in Java is somewhat different. In Java, there are no real out parameters, which means you can't use return codes if the function also has a return value. The result is that exceptions are used in a lot of cases where a return code would be preferable. And those you do have to know about and handle immediately. For things that should really be exceptions, Java has java.lang.RuntimeException (which isn't checked, statically or otherwise). And it has no way of saying that a function cannot ever throw an exception; it also uses unchecked exceptions (called Error) in cases where aborting the program would be more appropriate.
If the function f() throw(int) called the function g() throw(int, double), what would happen?
A compile time check would prevent your function from calling any other function with a less strict throw specifier which would be a huge pain.
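A sketch of the problem (f and g are hypothetical):

void g() throw(int, double);

// Under compile-time checking, this call would have to be rejected:
// g may throw a double, which f's specification does not allow.
void f() throw(int)
{
    g();
}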
I have an exception class as follows:
#include <exception>

struct InvalidPathException : public std::exception
{
    explicit InvalidPathException() {}
    const char* what() const;
};
const char*
InvalidPathException::what() const {
    return "Path is not valid";
}
When compiling under GCC 4.4 with -Wall -std=c++0x, I get:
error: looser throw specifier for
'virtual const char*
InvalidPathException::what() const'
error: overriding 'virtual const char*
std::exception::what() const throw ()'
Quite right, too, since I'm overriding std::exception's what() method that does indeed have a throw() exception specifier. But as one will often be informed, we shouldn't use exception specifiers. And as I understand it, they are deprecated in C++11, but evidently not yet in GCC with -std=c++0x.
So I'd be interested in the best approach for now. In the code I'm developing I do care about performance and so worry about the oft-mentioned overhead of throw(), but in reality is this overhead so severe? Am I right in thinking that I'd only suffer it when what() is actually called, which would only be after such an exception is thrown (and likewise for the other methods inherited from std::exception that all have throw() specifiers)?
Alternatively, is there a way to work around this error given by GCC?
Empty throw specifications are useful, as they actually enable compiler optimizations at the caller's site, as Wikipedia notes (I don't have a technical quote handy).
And for reasons of optimization opportunities, nothrow specifications are not deprecated in the upcoming standard; they just don't look like throw () any more but are called noexcept. Well, yes, and they work slightly differently.
Here's a discussion of noexcept that also details why traditional nothrow specifications prohibit optimizations at the callee's site.
Generally, you pay for every throw specification you have, at least with a fully compliant compiler (which GCC, in this respect, apparently has not always been). Those throw specifications have to be checked at run time, even empty ones: if an exception is raised that does not conform to the specification, stack unwinding has to take place within that stack frame (so you need code for that in addition to the conformance check), and then std::unexpected has to be called. On the other hand, you potentially save time/space for every empty throw specification, as the compiler may make more assumptions when calling that function. I'll chicken out by saying that only a profiler can give you the definitive answer as to whether your particular code suffers or improves from (empty!) throw specifications.
As a workaround to your actual problem, may the following work?
Introduce #define NOTHROW throw () and use it for your exception's what and other stuff.
When GCC implements noexcept, redefine NOTHROW.
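One possible shape for that macro, applied to the questioner's class (a sketch, not a tested drop-in):

#include <exception>

// Today: expand to the classic empty throw specification.
#define NOTHROW throw ()
// Later, once GCC implements noexcept, change the definition to:
// #define NOTHROW noexcept

struct InvalidPathException : public std::exception
{
    const char* what() const NOTHROW;
};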
Update
As #James McNellis notes, throw () will be forward-compatible. In that case, I suggest just using throw () where you have to and, apart from that, if in doubt, profile.
C++ provides a syntax for checked exceptions, for example:
void G() throw(Exception);
void f() throw();
However, the Visual C++ compiler doesn't check them; the throw specification is simply ignored. In my opinion, this renders the feature unusable. So my question is: is there a way to make the compiler check whether exceptions are correctly caught/rethrown? For example, a Visual C++ plugin or a different C++ compiler.
PS. I want the compiler to check whether exceptions are correctly caught, otherwise you end up in a situation where you have to put a catch around every single function call you make, even if they explicitly state they won't throw anything.
Update: the Visual C++ compiler does show a warning when throwing in a function marked with throw(). This is great, but regrettably the warning doesn't appear when you call a subroutine that might throw. For example:
void f() throw(int) { throw int(13); }
void h() throw() { f(); } // no warning here!
What's funny is that Java has checked exceptions, and Java programmers hate those too.
Exception specifications in C++ are useless for 3 reasons:
1. C++ exception specifications inhibit optimization.
With the possible exception of throw(), compilers insert extra code to check that, when you throw an exception, it matches the exception specifications of the functions involved in a stack unwind. Way to make your program slower.
2. C++ exception specifications are not compiler-enforced
As far as your compiler is concerned, the following is syntactically correct:
void AStupidFunction() throw()
{
    throw 42;
}
What's worse, nothing useful happens if you violate an exception specification. Your program just terminates!
3. C++ exception specifications are part of a function's signature.
If you have a base class with a virtual function and try to override it, the overriding function's exception specification must be at least as restrictive. So you'd better plan ahead, and it's still a pain.
int functionThatCanThrow();  // declared with no throw specification

struct A
{
    virtual int value() const throw() { return 10; }
};

struct B : public A
{
    virtual int value() const { return functionThatCanThrow(); } // ERROR!
};
Exception specifications give you these problems, and the gain from using them is minimal. In contrast, if you avoid exception specifications altogether, coding is easier and you avoid this stuff.
Exception specifications are pretty useless in C++.
They don't enforce that no other exceptions will be thrown; they merely ensure that, if one is, the global function unexpected() is called (and that handler can be replaced).
Using exception specifications mainly boils down to deluding yourself (or your peers) into some false sense of security. Better to simply not bother.
Have a look at this:
http://www.gotw.ca/publications/mill22.htm
Basically, exception specifications are unworkable/unusable, but that doesn't make exceptions themselves unworkable.
As for your question: there is no way to get the compiler to check that every type thrown is caught somewhere higher in the code. I expect separate compilation units make this difficult, and it's impossible for code intended to be used in a library (where the top level is not available at compile time). If you want to be sure everything is caught, stick a catch(...) at the very top of your code.
Because the standard says so. The exception declaration doesn't mean that no other exception can be thrown. It means that if an undeclared exception is thrown, a special global function called unexpected() is invoked, which by default terminates the program. Declaring exceptions on functions is generally discouraged (except perhaps the empty exception list), as the standard behavior is not very helpful.
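A minimal sketch of replacing that handler (myUnexpected and declared are made-up names):

#include <cstdio>
#include <exception>

void myUnexpected()
{
    std::puts("an undeclared exception escaped a throw specification");
    std::terminate();  // an unexpected handler must not return normally
}

void declared() throw(int)
{
    throw 3.14;  // a double is not on the list
}

int main()
{
    std::set_unexpected(myUnexpected);
    declared();  // a conforming compiler routes this to myUnexpected()
}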
To detect prior to runtime cases such as ...
extern void f() throw (class Mystery);

void g() throw() {
    f();
}
... you need static analysis. Yes, the compiler does plenty of static analysis, but because the standard's rule is "call std::unexpected if the thrown type doesn't match," and it is perfectly legal to write a routine that throws an object that does not match the specifier, compiler implementers neither warn nor remark.
Static analysis tools that claim to provide warning service include Gimpel Software's lint for C++ ...
1560 Uncaught exception 'Name' not on throw-list for function 'Symbol'
and, according to this answer to a prior question, QA C++.
I cannot check this for lack of a MSVC installation, but are you really sure the compiler ignores the throw() specification?
This MSDN page suggests that Microsoft is aware of throw() and expects its compiler to handle it correctly. Well, almost; see the note about how they depart from the ANSI/ISO standard in some details.
Edit: In practice, though, I agree with Patrick: Exception specifications are mostly useless.