For some standard library classes, access to parts of their contents may legitimately fail. Usually you have the choice between a potentially throwing method and one that is marked noexcept. The latter spares the check on the precondition, so if you want to take that responsibility yourself, you can. This is useful in circumstances where using exceptions is not permitted, or when fixing a performance bottleneck.
Example 1: std::vector element access:
std::vector<int> vec;
vec.at(n) // throws std::out_of_range
vec[n] // potentially UB, thus your own responsibility
Example 2: std::optional access:
std::optional<int> optn;
optn.value() // throws std::bad_optional_access
*optn // potentially UB, thus your own responsibility
Now on to std::variant. Directly accessing an alternative somewhat follows this pattern:
std::variant<std::string, int> var;
std::get<int>(var) // potentially throwing std::bad_variant_access
*std::get_if<int>(&var) // potentially UB, thus your own responsibility
But this time the signature changes: we have to inject * and &. The downside of this is that we don't get automatic move semantics. One more thing to keep in mind...
But it gets worse if you have a look at std::visit(Visitor&& vis, Variants&&... vars). There is no noexcept alternative for it, although it only throws
if any variant in vars is valueless_by_exception.
This means for visiting variants you cannot choose to take the responsibility yourself, and if you have no choice and must avoid exceptions, you cannot visit std::variants at all with standard tooling! (apart from the terrible workaround of switching on variant::index())
To me, this looks like a pretty bad design oversight... or is there a reason for it? And in case I'm right about the oversight, is there an initiative to fix this in the standard?
This means for visiting variants you cannot choose to take the responsibility yourself
Sure you can. The "valueless-by-exception" state can only happen if you assign or emplace a value into an existing variant. Furthermore, by definition, it can only happen if an exception is actually thrown during these processes. That is not a state that ever just happens to a random variant.
If you take responsibility to ensure that you either never emplace/assign to a variant, or that the types you use never throw in those circumstances, or that you respond to any exceptions from doing so in such a way that the variant that provoked it is not being talked to (ie: if bad_alloc is thrown, your application doesn't catch it; it just shuts down), then you don't have to care about this possibility.
Basically, if you're already coding to avoid exceptions, the non-noexcept status of visit is irrelevant. No variant will ever get into the "valueless-by-exception" state unless an exception is thrown.
Related
I saw that C++ 11 added the noexcept keyword. But I don't really understand why is it useful.
If the function throws when it's not supposed to throw - why would I want the program to crash?
So when should I use it?
Also, how will it work along with compiling with /EHa and using _set_se_translator? This means that any line of code can throw a C++ exception, because it might throw an SEH exception (because of accessing protected memory) which will be translated to a C++ exception.
What will happen then?
The primary use of noexcept is for generic algorithms, e.g., when resizing a std::vector<T>: for an efficient algorithm moving elements it is necessary to know ahead of time that none of the moves will throw. If moving elements might throw, elements need to be copied instead. Using the noexcept(expr) operator the library implementation can determine whether a particular operation may throw. The property of operations not throwing becomes part of the contract: if that contract is violated, all bets are off and there may be no way to recover a valid state. Bailing out before causing more damage is the natural choice.
To propagate the knowledge that operations do not throw, it is also necessary to declare functions as such. To this end, you'd use noexcept, throw(), or noexcept(expr) with a constant expression. The form taking an expression is necessary when implementing a generic data structure: with the expression it can be determined whether any of the type-dependent operations may throw an exception.
For example, std::swap() is declared something like this:
template <typename T>
void swap(T& o1, T& o2) noexcept(noexcept(T(std::move(o1))) &&
                                 noexcept(o1 = std::move(o2)));
Based on noexcept(swap(a, b)) the library can then choose differently efficient implementations of certain operations: if it can just swap() without risking an exception it may temporarily violate invariants and recover them later. If an exception might be thrown the library may instead need to copy objects rather than moving them around.
It is unlikely that the standard C++ library implementation will depend on many operations being noexcept(true). The operations it will probably check are mainly those involved in moving objects around, i.e.:
The destructor of a class (note that destructors are noexcept(true) by default, even without any declaration; if you have a destructor which may throw, you need to declare it as such, e.g.: T::~T() noexcept(false)).
The move operators, i.e. move construction (T::T(T&&)) and move assignment (T::operator=(T&&)).
The type's swap() operations (swap(T&, T&) and possibly the member version T::swap(T&)).
If any of these operations deviates from the default you should declare it correspondingly to get the most efficient implementation. The generated versions of these operations declare whether they are throwing exceptions based on the respective operations used for members and bases.
Although I can imagine that some operations may be added in the future or by some specific libraries, I would probably not declare operations as noexcept for now. If other functions emerge which make a difference being noexcept, they can be declared (and possibly changed as necessary) in the future.
The reason that the program may crash is because noexcept tells the optimizer your code won't throw. If it does - well, there's no way to predict what will happen with optimized code.
As for MSVC++, you'd have to check what happens when they implement noexcept. From a Standard viewpoint, SEH is undefined behavior. Accessing protected memory can already crash right now.
move_if_noexcept will:
return an rvalue -- facilitating a move -- if the move constructor is noexcept or if there is no copy constructor (move-only type)
return an lvalue -- forcing a copy -- otherwise
I found this rather surprising, as a move-only type that has a throwing move-ctor will still have this move-ctor invoked by code that uses move_if_noexcept.
Has there been given a thorough rationale for this? (Maybe directly or between the lines of N2983?)
Wouldn't code be better off not compiling rather than still having to face the unrecoverable move scenario? The vector example given in N2983 is nice:
void reserve(size_type n)
{
    ... ...
    try
    {
        ... ...
        new ((void*)(new_begin + i)) value_type( std::move_if_noexcept( (*this)[i] ) );
        ... ...
    }
    catch(...)
    {
        while (i > 0) // clean up new elements
            (new_begin + --i)->~value_type();
        this->deallocate( new_begin ); // release storage
        throw;
    }
    *!* // -------- irreversible mutation starts here -----------
    this->deallocate( this->begin_ );
    this->begin_ = new_begin;
    ... ...
}
The comment given in the marked line is actually wrong: for move-only types that can throw on move construction, the (possibly failing) irreversible mutation actually already starts when we move the old elements into their new positions.
Looking at it briefly, I'd say that a throwing move-only type couldn't be put into a vector otherwise, but maybe it shouldn't?
Looking at it briefly, I'd say that a throwing move-only type couldn't
be put into a vector otherwise, but maybe it shouldn't?
I believe you've nicely summed up the choices the committee had for containers of move-only-noexcept(false)-types.
Allow them but with basic exception safety instead of strong for some operations.
Disallow them at compile time.
A. The committee absolutely felt that they could not silently change existing C++03 code from having the strong exception safety to having basic exception safety.
B. For those functions that have strong exception safety, the committee much preferred to have those members continue to have strong exception safety, even for code that could not possibly be written yet (e.g. for functions manipulating move-only types).
The committee realized it could accomplish both of the objectives above, except for the case in B) where the move-only type might throw during move construction. These cases are limited to a few member functions of vector IIRC: push_back, reserve. Note that other members of vector already offer only basic exception safety (even in C++98/03), e.g.: assignment, insert (unless inserting at the end), erase.
With all this in mind, it was the committee's decision that should the client create a vector of a move-only-noexcept(false)-type, it would be more useful to the client to relax the strong exception safety to basic (as it already is for other vector members), rather than to refuse to compile.
This would only be new code that the client writes for C++11, not legacy code, since move-only types do not exist prior to C++11. And no doubt the educators of C++11 should be encouraging their students to write noexcept(true) move members. However code with the basic exception safety guarantee is not so dangerous, nor unusual, such that it should be forbidden. After all, the std::lib is already chock full of code carrying only the basic exception safety guarantee.
As you might know C++11 has noexcept keyword. Now ugly part about it is this:
Note that a noexcept specification on a function is not a compile-time
check; it is merely a method for a programmer to inform the compiler
whether or not a function should throw exceptions.
http://en.cppreference.com/w/cpp/language/noexcept_spec
So is this a design failure on the committee's part, or did they just leave it as an exercise for the compiler writers :) in the sense that decent compilers will enforce it, while bad ones can still be compliant?
BTW, if you ask why there isn't a third option (aka "can't be done"): the reason is that I can easily think of a (slow) way to check if a function can throw or not. The problem is of course if you limit the input to 5 and 7 (aka "I promise the file won't contain anything besides 5 and 7") and it only throws when you give it 33 -- but that is not a realistic problem IMHO.
The committee pretty clearly considered the possibility that code that (attempted to) throw an exception not allowed by an exception specification would be considered ill-formed, and rejected that idea. According to §15.4/11:
An implementation shall not reject an expression merely because when executed it throws or might throw an exception that the containing function does not allow. [ Example:
extern void f() throw(X, Y);
void g() throw(X) {
f(); // OK
}
the call to f is well-formed even though when called, f might throw exception Y that g does not allow. —end example ]
Regardless of what prompted the decision, or what else it may have been, it seems pretty clear that this was not a result of accident or oversight.
As for why this decision was made, at least some goes back to interaction with other new features of C++11, such as move semantics.
Move semantics can make exception safety (especially the strong guarantee) much harder to enforce/provide. When you do copying, if something goes wrong, it's pretty easy to "roll back" the transaction -- destroy any copies you've made, release the memory, and the original remains intact. Only if/when the copy succeeds, you destroy the original.
With move semantics, this is harder -- if you get an exception in the middle of moving things, anything you've already moved needs to be moved back to where it was to restore the original to order -- but if the move constructor or move assignment operator can throw, you could get another exception in the process of trying to move things back to try to restore the original object.
Combine this with the fact that C++11 can/does generate move constructors and move assignment operators automatically for some types (though there is a long list of restrictions). These don't necessarily guarantee against throwing an exception. If you're explicitly writing a move constructor, you almost always want to ensure against it throwing, and that's usually even pretty easy to do (since you're normally "stealing" content, you're typically just copying a few pointers -- easy to do without exceptions). It can get a lot harder in a hurry for templates though, even for simple ones like std::pair. A pair of something that can be moved with something that needs to be copied becomes difficult to handle well.
That meant, if they'd decided to make nothrow (and/or throw()) enforced at compile time, some unknown (but probably pretty large) amount of code would have been completely broken -- code that had been working fine for years suddenly wouldn't even compile with the new compiler.
Along with this was the fact that, although they're not deprecated, dynamic exception specifications remain in the language, so they were going to end up enforcing at least some exception specifications at run-time anyway.
So, their choices were:
Break a lot of existing code
Restrict move semantics so they'd apply to far less code
Continue (as in C++03) to enforce exception specifications at run time.
I doubt anybody liked any of these choices, but the third apparently seemed the least bad.
One reason is simply that compile-time enforcement of exception specifications (of any flavor) is a pain in the ass. It means that if you add debugging code you may have to rewrite an entire hierarchy of exception specifications, even if the code you added won't throw exceptions. And when you're finished debugging you have to rewrite them again. If you like this kind of busywork you should be programming in Java.
The problem with compile-time checking: it's not really possible in any useful way.
See the next example:
void foo(std::vector<int>& v) noexcept
{
if (!v.empty())
++v.at(0);
}
Can this code throw?
Clearly not. Can we check automatically? Not really.
Java's way of doing things like this is to put the body in a try-catch block, but I don't think that is better than what we have now...
As I understand things (admittedly somewhat fuzzily), the entire idea of throw specifications was found to be a nightmare when it actually came time to try to use it in a useful way.
Calling functions that don't specify what they throw or do not throw must be considered to potentially throw anything at all! So if the compiler were to require that you neither throw nor call anything that might throw outside of the specification you've provided -- and actually enforce such a thing -- your code could call almost nothing whatsoever; no library in existence would be of any use to you or anyone else trying to make any use of throw specifications.
And since it is impossible for a compiler to tell that "this function may throw an X, but the caller may well be calling it in such a way that it will never throw anything at all," one would forever be hamstrung by this language "feature."
So... I believe that the only possibly useful thing to come of it was the idea of nothrow -- which indicates that a function is safe to call from dtors, moves, swaps and so on -- but that it's a notation which, like const, is more about giving your users an API contract than about making the compiler responsible for telling whether you violate that contract (as with most things C/C++, the intelligence is assumed to be on the part of the programmer, not the nanny-compiler).
Today I learned that swap is not allowed to throw an exception in C++.
I also know that the following cannot throw exceptions either:
Destructors
Reading/writing primitive types
Are there any others?
Or perhaps, is there some sort of list that mentions everything that may not throw?
(Something more succinct than the standard itself, obviously.)
There is a great difference between cannot and should not. Operations on primitive types cannot throw, and neither can many functions and member functions, including many operations in the standard library and/or many other libraries.
Now on the should not, you can include destructors and swap. Depending on how you implement them, they can actually throw, but you should avoid having destructors that throw, and in the case of swap, providing a swap operation with the no-throw guarantee is the simplest way of achieving the strong exception guarantee in your class, as you can copy aside, perform the operation on the copy, and then swap with the original.
But note that the language allows both destructors and swap to throw. swap can throw: in the simplest case, if you do not overload it, std::swap performs a copy construction, two copy assignments and a destruction, operations that can each throw an exception (depending on your types).
The rules for destructors have changed in C++11: a destructor without an exception specification now has an implicit noexcept specification, which in turn means that if it threw an exception the runtime would call terminate. You can, however, change the exception specification to noexcept(false), and then the destructor can also throw.
At the end of the day, you cannot provide exception guarantees without understanding your code base, because pretty much every function in C++ is allowed to throw.
So this doesn't perfectly answer your question -- I searched for a bit out of my own curiosity -- but I believe that nothrow-guaranteed functions/operators mostly originate from the C-style functions available in C++, plus a few functions which are simple enough to give such a guarantee. In general it's not expected for C++ programs to provide this guarantee (When should std::nothrow be used?), and it's not even clear if such a guarantee buys you anything useful in code that makes regular use of exceptions. I could not find a comprehensive list of ALL C++ functions that are nothrow functions (please correct me if I missed a standard dictating this) other than listings of swap, destructors, and primitive manipulations. Also, it seems fairly rare for a function that isn't fully defined in a library to require the user to implement a nothrow function.
So perhaps to get to the root of your question, you should mostly assume that anything can throw in C++ and take it as a simplification when you find something that absolutely cannot throw an exception. Writing exception safe code is much like writing bug free code -- it's harder than it sounds and honestly is oftentimes not worth the effort. Additionally there are many levels between exception unsafe code and strong nothrow functions. See this awesome answer about writing exception safe code as verification for these points: Do you (really) write exception safe code?. There's more information about exception safety at the boost site http://www.boost.org/community/exception_safety.html.
For code development, I've heard mixed opinions from professors and coding experts on what should and shouldn't throw an exception and what guarantees such code should provide. But a fairly consistent assertion is that code which can easily throw an exception should be very clearly documented as such, or should indicate the throwing capability in the function definition (not always applicable to C++ alone). Functions that can possibly throw an exception are much more common than functions that never throw, and knowing what exceptions can occur is very important. But guaranteeing that a function which divides one input by another will never throw a divide-by-0 exception can be quite unnecessary/unwanted. Thus nothrow can be reassuring, but it is not necessary or always useful for safe code execution.
In response to comments on the original question:
People will sometimes state that constructors which throw are evil, in containers or in general, and that two-step initialization and is_valid checks should always be used. However, if a constructor fails, the object is oftentimes unfixable or in a uniquely bad state; otherwise the constructor would have resolved the problem in the first place. Checking if the object is valid is as difficult as putting a try-catch block around initialization code for objects you know have a decent chance of throwing an exception. So which is correct? Usually whichever was used in the rest of the code base, or your personal preference. I prefer exception-based code, as it gives me a feeling of more flexibility without a ton of baggage code checking every object for validity (others might disagree).
Where does this leave your original question and the extensions listed in the comments? Well, from the sources provided and my own experience, worrying about nothrow functions from an "exception safety" perspective of C++ is oftentimes the wrong approach to handling code development. Instead, keep in mind the functions you know might reasonably throw an exception and handle those cases appropriately. This usually involves IO operations where you don't have full control over what would trigger the exception. If you get an exception that you never expected or didn't think possible, then you have a bug in your logic (or in your assumptions about the function's uses) and you'll need to fix the source code to adapt. Trying to make guarantees about code that is non-trivial (and sometimes even then) is like saying a server will never crash -- it might be very stable, but you'll probably not be 100% sure.
If you want the in-exhaustive-detail answer to this question go to http://exceptionsafecode.com/ and either watch the 85 min video that covers just C++03 or the three hour (in two parts) video that covers both C++03 and C++11.
When writing Exception-Safe code, we assume all functions throw, unless we know different.
In short,
*) Fundamental types (including arrays of and pointers to) can be assigned to and from and used with operations that don't involve user defined operators (math using only fundamental integers and floating point values for example). Note that division by zero (or any expression whose result is not mathematically defined) is undefined behavior and may or may not throw depending on the implementation.
*) Destructors: There is nothing conceptually wrong with destructors that emit exceptions, nor does the standard prohibit them. However, good coding guidelines usually prohibit them because the language doesn't support this scenario very well. (For example, if destructors of objects in STL containers throw, the behavior is undefined.)
*) Using swap() is an important technique for providing the strong exception guarantee, but only if swap() is non-throwing. In general, we can't assume that swap() is non-throwing, but the video covers how to create a non-throwing swap for your User-Defined Types in both C++03 and C++11.
*) C++11 introduces move semantics and move operations. In C++11, swap() is implemented using move semantics, and the situation with move operations is similar to the situation with swap(). We cannot assume that move operations do not throw, but we can generally create non-throwing move operations for the User-Defined Types that we create (and they are provided for standard library types). If we provide non-throwing move operations in C++11, we get a non-throwing swap() for free, but we may choose to implement our own swap() anyway for performance purposes. Again, this is covered in detail in the video.
*) C++11 introduces the noexcept operator and function decorator. (The "throw ()" specification from Classic C++ is now deprecated.) It also provides for function introspection so that code can be written to handle situations differently depending on whether or not non-throwing operations exist.
In addition to the videos, the exceptionsafecode.com website has a bibliography of books and articles about exceptions which needs to be updated for C++11.
The strong exception safety guarantee says that an operation won't change any program state if an exception occurs. An elegant way of implementing exception-safe copy-assignment is the copy-and-swap idiom.
My questions are:
Would it be overkill to use copy-and-swap for every mutating operation of a class that mutates non-primitive types?
Is performance really a fair trade for strong exception-safety?
For example:
class A
{
public:
    void increment()
    {
        // Copy
        A tmp(*this);

        // Perform throwing operations on the copy
        ++(tmp.x);
        tmp.x.crazyStuff();

        // Now that the operation is done sans exceptions,
        // change program state
        swap(tmp);
    }

    int setSomeProperty(int q)
    {
        A tmp(*this);
        tmp.y.setProperty("q", q);
        int rc = tmp.x.otherCrazyStuff();
        swap(tmp);
        return rc;
    }

    //
    // And many others similarly
    //

    void swap(A &a)  // must take a non-const reference to modify its argument
    {
        // Non-throwing swap
    }

private:
    SomeClass x;
    OtherClass y;
};
You should always aim for the basic exception guarantee: make sure that in the event of an exception, all resources are released correctly and the object is in a valid state (which can be unspecified, but valid).
The strong exception guarantee (ie. "transactions") is something you should implement when you think it makes sense: you don't always need transactional behavior.
If it is easy to achieve transactional operations (e.g. via copy-and-swap), then do it. But sometimes it is not, or it incurs a big performance impact, even for fundamental things like assignment operators. I remember implementing something like boost::variant where I could not always provide the strong guarantee in the copy assignment.
One tremendous difficulty you'll encounter is with move semantics. You do want transactions when moving, because otherwise you lose the moved object. However, you cannot always provide the strong guarantee: think about std::pair<movable_nothrow, copyable> (and see the comments). This is where you have to become a noexcept virtuoso, and use an uncomfortable amount of metaprogramming. C++ is difficult to master precisely because of exception safety.
As all matters of engineering, it is about balance.
Certainly, const-ness/immutability and strong guarantees increase confidence in one's code (especially accompanied with tests). They also help trim down the space of possible explanations for a bug.
However, they might have an impact on performance.
Like all performance issues, I would say: profile and get rid of the hot spots. Copy And Swap is certainly not the only way to achieve transactional semantics (it is just the easiest), so profiling will tell you where you should absolutely not use it, and you will have to find alternatives.
It depends on what environment your application is going to run in. If you just run it on your own machine (one end of the spectrum), it might not be worth being too strict about exception safety. If you are writing a program e.g. for medical devices (the other end), you do not want unintentional side-effects left behind when an exception occurs. Anything in between depends on the level of tolerance for errors and the available resources for development (time, money, etc.).
Yes, the problem you are facing is that this idiom is very hard to scale. None of the other answers mentioned it, but another very interesting idiom, invented by Alexandrescu, is the scope guard. It helps to improve the economy of the code and makes huge improvements in the readability of functions that need to conform to strong exception safety guarantees.
The idea of a scope guard is a stack instance that lets you attach a rollback function object to each resource acquisition. When the scope guard is destroyed (e.g. by an exception), the rollback is invoked. You need to explicitly call commit() in the normal flow to avoid the rollback invocation at scope exit.
Check this recent question of mine related to designing a safe scope guard using C++11 features.