How to check for and handle precondition violations? - c++

C++20 is delivering some amazing new features around contracts - which for templates is going to make life much better - where constraints around types or other compile-time requirements can be baked into the template definition and enforced with decent diagnostics by the compiler. Yay!
However, I'm very concerned by the push towards terminating unconditionally when a runtime precondition violation occurs.
https://en.cppreference.com/w/cpp/language/attributes/contract
A program may be translated with one of two violation continuation modes:

off (default if no continuation mode is selected): after the execution of the violation handler completes, std::terminate is called.
on: after the execution of the violation handler completes, execution continues normally.

Implementations are encouraged to not provide any programmatic way to query, set, or modify the build level or to set or modify the violation handler.
I've written extensive user-facing software which traps all exceptions to a core execution loop where errors are logged and the user is informed of the failure.
In many cases, the user is better off saving if possible and exiting, but in many other cases, the error can be addressed by changing something in the design / data file they're working on.
This is to say - simply by altering their design (e.g. a CAD design) - the operation they wished to perform will now succeed. For example, it's possible that the code was executed with too tight a tolerance, which miscomputed a result based on that. Simply rerunning the procedure after loosening the tolerance would succeed (the offending precondition somewhere in the underlying code would no longer be violated).
But the push to make preconditions simply terminate, with no capacity to trap such an error and retry the operation? That sounds like a serious degradation of the feature set to me. Admittedly, there are domains in which this is exactly what is desirable. Fail fast, fail early; and for precondition or postcondition violations, the problem is in the way the code is written, so the user cannot remedy the situation.
But... and this is a big but... most software executes against an unknown data set that is supplied at runtime. To claim that all software must terminate, and that there is no way a user can be expected to rectify the situation, seems to me to be bizarre.
Herb Sutter's discussion at the ACCU seems to be aligned strongly with the perspective that precondition & postcondition violations are simply terminate conditions:
https://www.youtube.com/watch?v=os7cqJ5qlzo
I'm looking for what other C++ pros are thinking, based on whatever your coding experience has taught you.
I know that many projects disallow exceptions. If you're working on one such project, does that mean you write your code to simply terminate whenever invalid input occurs? Or do you back out using error states to some parent code point that is able to continue in some way?
Maybe more to the point - maybe I'm misunderstanding what C++20 runtime contracts are intended for?
Please keep this civil - and if your suggestion is to close this - perhaps you could be so kind as to point to a more appropriate forum for this discussion?
Most generally, I'm trying to answer, to my satisfaction:
How to check for and handle precondition violations (using best possible practices)?

It really comes down to this question: what do you mean when you say the word "precondition"?
The way you seem to use the word is to refer to "a thing that gets checked when you call this function." The way Herb, the C++ standard, and therefore the C++ contract system mean it is "a thing which must be true for the valid execution of this function, and if it is not true, then you have done a wrong thing and the world is broken."
And this view really comes down to what a "contract" means. Consider vector::operator[] vs. vector::at(). at does not have a precondition contract in the C++ standard; it throws if the index is out-of-range. That is, it is part of the interface of at that you can pass it out-of-range values, and it will respond in an expected, predictable way.
That is not the case for operator[]. It is not part of the interface of that function that you can pass it out-of-range indices. As such, it has a precondition contract that the index is not out-of-range. If you pass it an out-of-range index, you get undefined behavior.
So, let's look at some simplistic examples. I'm going to build a vector and then read an integer from the user, then use that to access the vector I built in three different ways:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> ivec = {10, 209, 184, 96};
    int ix;
    std::cin >> ix;

    //case 1:
    try
    {
        std::cout << ivec.at(ix);
    }
    catch(const std::exception &)
    {
        std::cout << "Invalid input!\n";
    }

    //case 2:
    if(0 <= ix && ix < static_cast<int>(ivec.size()))
        std::cout << ivec[ix];
    else
        std::cout << "Invalid Input!\n";

    //case 3:
    std::cout << ivec[ix];
    return 0;
}
In case 1, we see the use of at. In the case of bad input, we catch the exception and process it.
In case 2, we see the use of operator[]. We check to see if the input is in the valid range, and if so, call operator[].
In case 3, we see... a bug in our code. Why? Because nobody sanitized the input. Someone had to, and operator[]'s precondition says that it is the caller's job to do it. The caller fails to sanitize its inputs and thus represents broken code.
That is what it means to establish a contract: if the code breaks the contract, it's the code's fault for breaking it.
But as we can see, a contract appears to be a fundamental part of a function's interface. If so, why is this part of the interface sitting in the standard's text instead of being in the function's visible declaration where people can see it? That right there is the entire point of the contracts language feature: to allow users to express this specific kind of thing within the language.
To sum up, contracts are assumptions that a piece of code makes about the state of the world. If that assumption is incorrect, then that represents a state of the world that should not exist, and therefore your program has a bug in it. That's the idea underlying the contract language feature design. If your code tests it, it's not something you assume, and you shouldn't use preconditions to define it.
If it's an error condition, then you should use your preferred error mechanism, not a contract.
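To make the distinction concrete, here is a minimal sketch (all names are invented for illustration): the nonzero-denominator requirement is a precondition -- the caller's job, checked with assert -- while a failed parse of untrusted input is an expected outcome, reported through the return type rather than asserted.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <string>

// Precondition: denom != 0 is the *caller's* responsibility.
// Violating it is a bug in the calling code, so we assert, not report.
int divide_exact(int num, int denom) {
    assert(denom != 0 && "precondition: denom must be nonzero");
    return num / denom;
}

// Error condition: failing to parse untrusted input is an expected
// outcome, so it is part of the interface and reported to the caller.
std::optional<int> parse_int(const std::string& s) {
    try {
        std::size_t pos = 0;
        int v = std::stoi(s, &pos);
        if (pos != s.size()) return std::nullopt; // trailing junk
        return v;
    } catch (const std::exception&) {
        return std::nullopt; // not a number, or out of range
    }
}
```

The caller of parse_int is expected to test the result; the caller of divide_exact is expected to have already established the precondition.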

Related

What should a function return that can fail?

My question is a bit vague, and that is at least partially intentional. I'm looking for early advice about interface design, future expansion, and easy-to-test interfaces, and not so much about exact code. Although, if it's a solution/advice that applies especially to C++(17), then that's perfect.
I have a simulation that solves a hard combinatorial problem. The simulation propagates step by step to a solution. But sometimes a step is a dead end (or appears as such). For example, in the pseudo code below, the while loop propagates the simulation by feeding it pieces. There can be a case where a 'Piece' cannot be used to propagate the simulation: e.g. it's too expensive, it's a piece that simply does not fit, or whatever other reason. Failure is normal behaviour that can arise, and no reason to abort.
while(!allNotHandledPieces.empty())
{
    auto nextPiece = allNotHandledPieces.back();
    allNotHandledPieces.pop_back();
    auto status = simulation.propagate(nextPiece);
    if(status == 0)
        continue; // ok
    else if (status == 1)
        dropPieceDueToTooBadStatus();
}
Actually, for the 'simulation' it doesn't matter why it failed. The state is still valid, there is no problem with using another piece, it can just continue. But later, I might see the need to know why the 'propagate' method failed. Then it might make sense to have a meaningful return value on failure.
Is there a good way to return 'failure'? Is there a best practice for situations like this?
From what I can gather, it seems like neither exceptions nor "returning" are appropriate for your situation. In your question you talk about "returning a result", but your particular code example/use case suggests that the algorithm should not halt or be interrupted if a "bad piece" is used. Both exceptions and "returning" from a function will result in more complex control flow, and in some cases more complex state handling, depending on how you deal with exceptions.
If you choose the exception route, I suggest modeling your "reporting" classes after system_error.
If you choose a non-control flow route, I suggest pushing an error code and other meta data about the piece (if the piece itself is too expensive to copy around) into another data structure where it can be consumed.
What you choose to do depends on whether or not the bad piece requires immediate action. If it doesn't, exceptions are not the way to go since that implies it needs to be handled at some point.
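One possible shape for the non-control-flow route, with all names (Piece, Simulation, FailureRecord) invented for illustration: propagate() reports success or failure through its return value, and pushes a small record of why it failed into a structure that can be consumed later.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Piece { int id; int cost; };

enum class PropagateError { None, TooExpensive, DoesNotFit };

struct FailureRecord {
    PropagateError reason;
    int pieceId;  // enough metadata to diagnose later without copying the piece
};

class Simulation {
public:
    // Returns true on success; on failure, records why and carries on.
    // The simulation state stays valid either way.
    bool propagate(const Piece& p) {
        if (p.cost > budget_) {
            failures_.push_back({PropagateError::TooExpensive, p.id});
            return false;
        }
        budget_ -= p.cost;
        return true;
    }
    const std::vector<FailureRecord>& failures() const { return failures_; }
private:
    int budget_ = 10;  // illustrative "too expensive" criterion
    std::vector<FailureRecord> failures_;
};
```

The driving loop stays simple (a plain if on the return value), and the "why" is available afterwards if you later decide you need it.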

Why is throwing a non exception considered bad design?

I have some code to sort a vector of objects. If any of the objects is invalid I want to stop sorting immediately and report an error. In the error, I want to include the description of an invalid object (no matter which one if there are many).
This is my code (not complete, but I hope you can follow me):
int sortProc(const Bulk & b1, const Bulk & b2) {
    if (!b1.isValid()) throw b1;
    if (!b2.isValid()) throw b2;
    return b1.compareTo(b2);
}

vector<Bulk> * items = getItems();
try {
    sort(items->begin(), items->end(), sortProc);
} catch (const Bulk & item) {
    cout << "Cannot sort item: " << item.description();
}
Now, I'm a bit unsure of my code because I've heard that all exceptions should subclass the exception class and it's considered bad practice to throw objects that are not instances of exception, but I don't really understand why. My code above works, is anything wrong with it? This is a serious question, so if you see any problems I'd be glad to know. I'm not looking for moral concerns.
I'm not looking for moral concerns.
You can't ask a style question then ban all answers based on "moral concerns", if you expect to figure it out.
Some people think that throwing only objects of types derived from std::exception provides consistency of interface, since you can invoke .what() on all of them and catch them all together at the top level of your program. You can also guarantee that other translation units — those who have never heard of your class Bulk — will be able to catch the exception if they want to (if only as a std::exception).
Is your program wrong? No.
Does it work? Yes, of course it does.
But sometimes simply "working" is not considered enough and we like to be a little more tidy about things.
That's really it...
Why is throwing a non exception considered bad design?
Because rarely do systems exist in a vacuum.
Exceptions serve a fundamental purpose: to generate a packet of information about an error or some other "exceptional" condition, and propagate that information across boundaries without regard to the locality of those boundaries.
Consider the last part carefully:
without regard to the locality of those boundaries.
You can think of an exceptional condition as one which must be attended to in some way. If there is a piece of code that can handle it, then that code should be given the opportunity. If no code exists which can attend to it, then the program must die.
In this context, the packet of information describing the exceptional condition must be free to flow through any part of the program -- even parts that you did not personally write, or never thought might one day be written when your project was a mere glimmer in your eye. In order for this to work, however, all exceptions must be written using a familiar protocol. That is to say, far-flung exception handlers must be able to understand at least the basic information contained in the packet. Everybody has to be speaking the same language.
The way this is generally accomplished in C++ is by deriving all exception objects from std::exception. This way, a handler in a far-flung part of the code -- perhaps even code you never dreamt of writing -- can at the very least report on the exceptional occurrence before the program meets its demise. It might even be able to handle the situation and allow the program to live on, but this is often not possible.
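That protocol can be sketched as a top-level handler (a minimal illustration, not a complete program skeleton): any exception derived from std::exception, including ones thrown by code written long after the handler, can at least be reported via what().

```cpp
#include <cassert>
#include <exception>
#include <iostream>
#include <stdexcept>

// Stand-in for the application's real work; here it simply fails
// somewhere deep inside, far from the handler.
int run_app() {
    throw std::runtime_error("something far-flung went wrong");
}

// The top-level handler knows nothing about where the throw happened,
// but the std::exception protocol lets it report before exiting.
int guarded_main() {
    try {
        return run_app();
    } catch (const std::exception& e) {
        std::cerr << "fatal: " << e.what() << '\n';
        return 1;
    } catch (...) {
        std::cerr << "fatal: unknown exception\n";  // non-std::exception throws
        return 1;
    }
}
```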
It's not about morals. It's not about functionality. It's not about correctness either. It's about the logic behind your code.
Remember, code reflects how you think. A clear-minded developer would never throw an "exception" which is indeed not an exception, because that just confuses what the logic is.
Joker_vD is also right regarding the productivity of the code, but I don't think you are there yet.
I see a (maybe serious) problem in having Bulk transport both input and failure data.
Having a BulkException derived from std::exception is way cleaner!
While having a state indicating a failure is fine, I think using the class itself to transport the failure is no good.
A BulkException could gather additional information (like a stack trace) which is useless for normal operation.
Good design is about quality. Is your code provably correct? Does it minimize dependencies on external objects? is it reusable? Is it understandable? Does it eliminate redundancies? Can you test it? Can other people read it and know what it does? That last question is really hard, because you are not other people - you're the author.
Here is code very similar to yours, except it doesn't have random-access flow control:
std::unordered_set<std::string> errors;
auto sortProc = [&errors](const Bulk & b1, const Bulk & b2) -> bool {
    if (!b1.isValid()) errors.insert(b1.description());
    if (!b2.isValid()) errors.insert(b2.description());
    if (!b1.isValid() || !b2.isValid())
        return b1.isValid() < b2.isValid();
    return b1.compareTo(b2) < 0;
};

vector<Bulk> * items = getItems();
sort(items->begin(), items->end(), sortProc);
if (!errors.empty()) {
    std::cout << "Unsortable items found while sorting:\n";
    for (auto const& e : errors) {
        std::cout << e << "\n";
    }
}
I prefer avoiding random-access flow control except in exceptional circumstances. The sort function could continue oblivious to the failure of an element to be isValid, so it should do so by default, rather than have exception-based flow control decoupled from its interface.
While it is tempting to use exceptions as a way to return a "second return value", the result is that your code's interface becomes obscured and coupled with your implementation details of both your code, and the code that uses your code.
Depending on the program, truly exceptional circumstances might vary from "failure to allocate memory" through to "hard disk failure", where there is no way to recover or have your code act error-oblivious.
Note that you can even make the above code less coupled by noting that invalid elements are sorted first or last. Then, after the sort, examine your list and check whether invalid elements have collected there, rather than relying on a log of cumulative errors.
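A sketch of that decoupled variant, assuming a hypothetical Bulk with the members used in the question: the comparator deliberately sorts invalid items to the front, and after the sort we simply inspect the prefix instead of keeping a separate error log.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative stand-in for the Bulk class in the question.
struct Bulk {
    int value;
    bool valid;
    bool isValid() const { return valid; }
    std::string description() const { return "item " + std::to_string(value); }
};

// Strict weak ordering: invalid items compare less than valid ones,
// so they collect at the front; valid items are ordered by value.
bool bulkLess(const Bulk& a, const Bulk& b) {
    if (a.isValid() != b.isValid())
        return !a.isValid();            // invalid < valid
    return a.value < b.value;
}

// After sorting, the invalid items (if any) form a prefix we can inspect.
std::size_t countInvalidPrefix(const std::vector<Bulk>& items) {
    std::size_t n = 0;
    while (n < items.size() && !items[n].isValid()) ++n;
    return n;
}
```

The error-reporting decision is now made after the sort, by whoever owns the list, with no state captured inside the comparator.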

Throwing a logical error exception or just abort in a library?

I very much like the assert behaviour for testing invariants and preconditions in my code; I use it a lot. But now I am developing a library (C++) and I'd like the client (the programmer who uses the library) to know when they violate a precondition. I think it's easier to see the application crash and fix the problem than to just descend into undefined behaviour.
I could just use assert in this situation, but when my library is ready, a release build will disable the assert macro. I know I can keep it enabled, but I'm not sure I want to, because there are a lot of internal asserts that don't need to be tested in a release build.
An instance:
Some state machine has a maximum number of states that can be added. The limit is set by the client in the constructor. Then the client calls the method addState to add specific states, but of course, he can't add more states than he initially said.
Should I:
Just ignore states after the limit and, probably, start a state machine with undefined behaviour (at least from the client's perspective)?
Keep assertions alive and put an assertion at that point?
Throw an exception (some std::logic_error, I presume)?
Just print a message to stderr and abort the program?
I don't like option 1 very much. The idea is to tell the client what he is doing wrong.
Is this a situation to throw a logical error exception?
Is there another, better possibility?
Thanks.
Definitely, if a problem is "detectable", you should do something to inform the "user" of the error (some things are very hard to identify as having gone wrong).
You use assert when the programmer does something directly wrong, something that is unlikely to happen in "normal use" of the code. E.g. if you have a pointer parameter that mustn't be NULL, doing assert(ptr != NULL) would be a sensible thing; likewise, if you have an int that is a count of something, it probably shouldn't be negative (but then it should probably be unsigned?). These types of things don't necessarily need to be clearly documented -- just state the precondition: "ptr must not be NULL" or "count should not be negative".
You use exceptions for something that MAY happen in normal running conditions, but really shouldn't. Such as running out of memory, or the "user" trying to add too many things to something that they had a reason to give a reasonable size in the first place. Exceptions should be clearly documented by the description of the function - "If you try to add more states than you have reserved space for, the exception x will be thrown".
I personally would think that a "custom" exception would make more sense than std::logic_error. In this case, perhaps too_many_states? The more distinct your exception is, the easier it is to determine where it came from and what it means. Nothing worse than getting some "generic" exception and not knowing how you ended up there - and when you search for it, there are hundreds of places it could have come from...
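A sketch of what that distinct exception might look like, with the state-machine details invented for illustration: the exception derives from std::logic_error (so generic handlers still work) but carries enough context to say exactly what went wrong.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>

// Distinct exception type: easy to catch specifically, and the message
// and limit() accessor tell the client exactly what they did wrong.
class too_many_states : public std::logic_error {
public:
    explicit too_many_states(std::size_t limit)
        : std::logic_error("state limit of " + std::to_string(limit) +
                           " exceeded"),
          limit_(limit) {}
    std::size_t limit() const { return limit_; }
private:
    std::size_t limit_;
};

class StateMachine {
public:
    explicit StateMachine(std::size_t maxStates) : maxStates_(maxStates) {}
    void addState(int /*state*/) {
        if (count_ >= maxStates_)
            throw too_many_states(maxStates_);
        ++count_;  // real state bookkeeping elided
    }
private:
    std::size_t maxStates_;
    std::size_t count_ = 0;
};
```

Because the check is a plain throw rather than an assert, it survives release builds, which is exactly what the question asks for.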

Where should assert() be used in C resp. C++?

What are the places we should use the assert() function specifically? If it's a situation like determining if an integer value is greater than zero or a pointer is null, we can simply use a private function to check this. In this kind of situation, where should we use assert() over a custom written check?
Context: I write server software for a living, the kind that stays up for weeks before the next version is loaded. So my answers may be biased toward highly defensive code.
The principle.
Before we delve into the specifics of where to use assert, it's important to understand the principle behind it.
assert is an essential tool in Defensive Programming. It helps validate assumptions (assert them, actually) and thus catch programming errors (to be distinguished from user errors). The goal of assert is to detect erroneous situations, from which recovery is generally not immediately possible.
Example:
char const* strstr(char const* haystack, char const* needle) {
    assert(haystack); assert(needle);
    // ...
}
Alternatives.
In C? There is little alternative, unless your function has been designed to return an error code or a sentinel value, and this is duly documented.
In C++, exceptions are a perfectly acceptable alternative. However, an assert can help produce a memory dump so that you can see exactly what state the program is in at the moment the erroneous situation is detected (which helps debugging), while an exception will unwind the stack and thus lose the context (oops...).
Also, an exception might (unfortunately) get caught by a high-level handler (or an unsavory catch from a fellow developer (you would not do that, of course)), in which case you could miss the error completely until it's too late.
Where NOT to use it.
First, it should be understood that assert is only ever useful in Debug code. In Release, NDEBUG is defined and no code is generated. As a corollary, in Release assert has the same worth as a comment.
Never use it for checks that are necessary to the good behavior of the software. Error conditions should be checked and dealt with. Always.
Second, it should be understood that malformed input is part of your life. Would you want your compiler to display an assert message each time you make an error? Hmm! Therefore:
Never use it for input data validation. Input data should be validated and errors appropriately reported to the user. Always.
Third, it should be understood that crashes are not appreciated. It is expected that your program will run smoothly. Therefore, one should not be tempted to leave asserts on in Release mode: Release code ends up in the end user's hands and should never crash, ever. At worst, it should shut down while displaying an error message. It is expected that no user data is lost during this process, and even better if, upon restarting, the user is taken back to where she was: that is what modern browsers do, for example.
Never leave asserts on in Release.
Note: for server code, upon "hitting" an assertion, we manage to get back in position for treating the next query in most cases.
Where to use it.
assert is on in Debug mode, and so should be used for debugging: whenever you test new code, whenever your test suite runs, whenever software is in your (or your teammates') hands, whenever software is in your QA department's hands. Asserts let you spot errors and give you the full context of the error so that you can repair it.
Use it during the development and testing cycles.
Even better. Since you know code will not be executed in Release you can afford to perform expensive checks.
Note: you should also test the Release binary, if only to check the performance.
And in Release ?
Well, in the codebase I work on, we replace the inexpensive asserts (the others are ignored) by specific exceptions that are only caught by a high level handler that will log the issue (with backtrace), return a pre-encoded error response and resume the service. The development team is notified automatically.
In software that is deployed, the best practices I have seen involve creating a memory dump and streaming it back to the developers for analysis, while attempting not to lose any user data and to behave as courteously as possible toward the unfortunate user. I feel really blessed to be working server-side when I contemplate the difficulty of this task ;)
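One way to approximate the setup described above (asserts in debug, loggable exceptions in release) is a check macro; the macro name and exception type here are invented for illustration, not taken from any real codebase.

```cpp
#include <cassert>
#include <sstream>
#include <stdexcept>

// Exception carrying the failed condition and location, suitable for a
// high-level handler to log with context before resuming service.
struct contract_violation : std::runtime_error {
    using std::runtime_error::runtime_error;
};

#ifdef NDEBUG
// Release: turn a failed check into a throw instead of silently
// compiling it away like assert would.
#define LIB_CHECK(cond)                                              \
    do {                                                             \
        if (!(cond)) {                                               \
            std::ostringstream os;                                   \
            os << "check failed: " #cond " at " << __FILE__ << ':'   \
               << __LINE__;                                          \
            throw contract_violation(os.str());                      \
        }                                                            \
    } while (0)
#else
// Debug: plain assert, so you get the abort + core dump behavior.
#define LIB_CHECK(cond) assert(cond)
#endif

int checked_div(int a, int b) {
    LIB_CHECK(b != 0);
    return a / b;
}
```

In release builds a violated check surfaces as a contract_violation that a top-level handler can log and report, rather than being compiled out.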
I'm gonna throw out my view of assert(). I can find what assert() does elsewhere, but stackoverflow provides a good forum for suggestions on how and when to use it.
Both assert and static_assert serve similar functions. Let's say you have a function foo(void*) that assumes its argument is not null:
void foo(void* p) {
    assert(p);
    ...
}
Your function has a couple of people who care about it.
First, there is the developer who calls your function. He might just look at your documentation, and maybe he will miss the part about not allowing a null pointer as the argument. He may never read the code for the function, but when he runs it in debug mode the assert may catch his inappropriate usage of your function (especially if his test cases are good).
Second (and more important), is the developer who reads your code. To him, your assert says that after this line, p is not null. This is something that is sometimes overlooked, but I believe is the most useful feature of the assert macro. It documents and enforces conditions.
You should use asserts to encode this information whenever it is practical. I like to think of it as saying "at this point in the code, this is true" (and it says this in a way so much stronger than a comment would). Of course, if such a statement doesn't actually convey much/any information then it isn't needed.
I think there's a simple and powerful point to be made:
assert () is for checking internal consistency.
Use it to check preconditions, postconditions, and invariants.
When there may be inconsistency due to external factors, circumstances which the code can't control locally, then throw an exception. Exceptions are for when postconditions cannot be satisfied given the preconditions. Good examples:
new int is ok up to its preconditions, so if memory is unavailable, throwing is the only reasonable response. (The postcondition of malloc is "a valid pointer or NULL")
The postcondition of a constructor is the existence of an object whose invariants are established. If it can't construct a valid state, throwing is the only reasonable response.
assert should not be used for the above. By contrast,
void sort (int * begin, int * end) {
    // assert (begin <= end); // OPTIONAL precondition, possibly want to throw
    for (int * i = begin; i < end; ++i) {
        assert (is_sorted (begin, i)); // invariant
        // insert *i into sorted position ...
    }
}
Checking is_sorted is checking that the algorithm is behaving correctly given its preconditions. An exception is not a reasonable response.
To cut a long story short: assert is for things which WILL NEVER happen IF the program is LOCALLY correct, exceptions are for things which can go wrong even when the code is correct.
Whether or not invalid inputs trigger exceptions or not is a matter of style.
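The constructor case discussed above can be sketched like this (the Tag class is invented for illustration): if the invariant cannot be established, the constructor throws, so there is no zombie state to remember to check for.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <utility>

// Invariant: a Tag always holds a nonempty name. The constructor either
// establishes the invariant or throws; every constructed Tag is usable
// immediately, with no ifstream-style "is it really open?" check.
class Tag {
public:
    explicit Tag(std::string name) : name_(std::move(name)) {
        if (name_.empty())
            throw std::invalid_argument("Tag requires a nonempty name");
    }
    const std::string& name() const { return name_; }
private:
    std::string name_;
};
```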
You usually use it when you want the program to abort and display a runtime error if a boolean condition is not true. It is usually used like this:
void my_func( char* str )
{
    assert ( str != NULL );
    /* code */
}
It can also be used with functions that return a NULL pointer on failure:
SDL_Surface* screen = SDL_SetVideoMode( 640, 480, 16, SDL_HWSURFACE );
assert ( screen != NULL );
The exact error message assert() gives depends on your compiler, but it usually goes along these lines:
Assertion failed: str, mysrc.c, line 5

Why should exceptions be used conservatively?

I often see/hear people say that exceptions should only be used rarely, but they never explain why. While that may be true, the rationale is normally a glib "it's called an exception for a reason", which, to me, seems to be the sort of explanation that should never be accepted by a respectable programmer/engineer.
There is a range of problems that an exception can be used to solve. Why is it unwise to use them for control flow? What is the philosophy behind being exceptionally conservative with how they are used? Semantics? Performance? Complexity? Aesthetics? Convention?
I've seen some analysis on performance before, but at a level that would be relevant to some systems and irrelevant to others.
Again, I don't necessarily disagree that they should be saved for special circumstances, but I'm wondering what the consensus rationale is (if such a thing exists).
The primary point of friction is semantics. Many developers abuse exceptions and throw them at every opportunity. The idea is to use exceptions for somewhat exceptional situations. For example, wrong user input does not count as an exception, because you expect it to happen and are ready for it. But if you tried to create a file and there was not enough space on disk, then yes, that is a definite exception.
One other issue is that exceptions are often thrown and swallowed. Developers use this technique to simply "silence" the program and let it run as long as possible until it completely collapses. This is very wrong. If you don't process exceptions, if you don't react appropriately by freeing resources, if you don't log the exception occurrence or at least notify the user, then you're not using exceptions for what they are meant for.
Answering directly your question. Exceptions should rarely be used because exceptional situations are rare and exceptions are expensive.
Rare, because you don't expect your program to crash at every button press or every malformed user input. Say, the database may suddenly become inaccessible, there may not be enough space on disk, or some third-party service you depend on may be offline; this can all happen, but quite rarely, and these would be clear exceptional cases.
Expensive, because throwing an exception interrupts the normal program flow. The runtime will unwind the stack until it finds an appropriate exception handler that can handle the exception. It will also gather call information all along the way to be passed to the exception object the handler will receive. It all has costs.
This is not to say that there can be no exception to using exceptions (smile). Sometimes it can simplify the code structure if you throw an exception instead of forwarding return codes through many layers. As a simple rule, if you expect some method to be called often and to discover some "exceptional" situation half the time, then it is better to find another solution. If, however, you expect the normal flow of operation most of the time, while this "exceptional" situation can only emerge in rare circumstances, then it is just fine to throw an exception.
Comments: exceptions can definitely be used in some less-exceptional situations if that would make your code simpler and easier. This option is open, but I'd say it comes up quite rarely in practice.
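A minimal sketch of the "throw instead of forwarding return codes through many layers" point (the three-layer call chain is invented for illustration): the deep failure passes through intermediate layers that carry no error-plumbing at all, and only the outermost layer deals with it.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Deepest layer: the failure originates here. For illustration it
// always fails; a real version would read an actual file.
std::string read_config_line() {
    throw std::runtime_error("config file missing");
}

// Intermediate layers: no status codes to check, propagate, or translate.
std::string parse_config()  { return read_config_line(); }
std::string load_settings() { return parse_config(); }

// Outermost layer: the single place that decides what a failure means.
std::string settings_or_default() {
    try {
        return load_settings();
    } catch (const std::runtime_error&) {
        return "defaults";
    }
}
```

With return codes, both parse_config and load_settings would need to check and forward an error value; with an exception, they stay oblivious.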
Why is it unwise to use them for control flow?
Because exceptions disrupt normal "control flow". You raise an exception, and normal execution of the program is abandoned, potentially leaving objects in an inconsistent state and some open resources unfreed. Sure, C# has the using statement, which will make sure the object is disposed even if an exception is thrown from the using body. But let us abstract for the moment from the language.
Suppose the framework won't dispose of objects for you; you do it manually. You have some system for how to request and free resources and memory. You have a system-wide agreement about who is responsible for freeing objects and resources in which situations. You have rules for how to deal with external libraries. This works great if the program follows the normal operational flow. But suddenly, in the middle of execution, you throw an exception. Half of the resources are left unfreed. Half have not been requested yet. If the operation was meant to be transactional, now it is broken. Your rules for handling resources will not work, because the code parts responsible for freeing resources simply won't execute. If anybody else wanted to use those resources, they may find them in an inconsistent state and crash as well, because they could not predict this particular situation.
Say you wanted a method M() to call a method N() to do some work and arrange for some resource, then return it back to M(), which will use it and then dispose of it. Fine. Now something goes wrong in N(), and it throws an exception you didn't expect in M(), so the exception bubbles up to the top, until it maybe gets caught in some method C() which has no idea what was happening deep down in N(), or whether and how to free some resources.
With throwing exceptions, you create a way to bring your program into many new, unpredictable intermediate states which are hard to predict, understand, and deal with. It's somewhat similar to using GOTO. It is very hard to design a program that can randomly jump its execution from one location to another. It will also be hard to maintain and debug. As the program grows in complexity, you're just going to lose any overview of what is happening when and where, let alone fix it.
While "throw exceptions in exceptional circumstances" is the glib answer, you can actually define what those circumstances are: when preconditions are satisfied, but postconditions cannot be satisfied. This allows you to write stricter, tighter, and more useful postconditions without sacrificing error handling; otherwise, without exceptions, you have to change the postcondition to allow for every possible error state.
Preconditions must be true before calling a function.
Postcondition is what the function guarantees after it returns.
Exception safety states how exceptions affect the internal consistency of a function or data structure, and often deals with behavior passed in from outside (e.g. a functor, the ctor of a template parameter, etc.).
Constructors
There's very little you can say about every constructor for every class that could possibly be written in C++, but there are a few things. Chief among them is that constructed objects (i.e. for which the constructor succeeded by returning) will be destructed. You cannot modify this postcondition because the language assumes it is true, and will call destructors automatically. (Technically you can accept the possibility of undefined behavior for which the language makes no guarantees about anything, but that is probably better covered elsewhere.)
The only alternative to throwing an exception when a constructor cannot succeed is to modify the basic definition of the class (the "class invariant") to allow valid "null" or zombie states and thus allow the constructor to "succeed" by constructing a zombie.
Zombie example
An example of this zombie modification is std::ifstream: you must always check its state before you can use it. std::string, by contrast, doesn't do this, so you are always guaranteed that you can use it immediately after construction. Imagine if you had to write code such as the example below; if you forgot to check for the zombie state, you'd either silently get incorrect results or corrupt other parts of your program:
string s = "abc";
if (s.memory_allocation_succeeded()) {
    do_something_with(s); // etc.
}
Even naming that method is a good example of how you must modify the class's invariant and interface to handle a situation string can neither predict nor handle itself.
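For contrast, here is a sketch of the non-zombie design this answer advocates: the constructor throws on failure, so the class invariant holds for every object that actually exists, and no zombie check is ever needed (Buffer and its positivity rule are made up for illustration):

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical class whose invariant is "always holds a positive size".
// The constructor throws rather than constructing a zombie, so any
// successfully constructed Buffer is usable immediately.
class Buffer {
    int size_;
public:
    explicit Buffer(int size) : size_(size) {
        if (size <= 0)
            throw std::invalid_argument("Buffer size must be positive");
    }
    int size() const { return size_; }  // no zombie check needed
};

// Helper for the demo: did construction succeed?
bool construct_ok(int n) {
    try { Buffer b(n); return true; }
    catch (const std::invalid_argument&) { return false; }
}
```

Every caller that holds a Buffer can rely on its invariant without the defensive checks the zombie style forces on them.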
Validating input example
Let's address a common example: validating user input. Just because we want to allow for failed input doesn't mean the parsing function needs to include that in its postcondition. It does mean our handler needs to check if the parser fails, however.
// boost::lexical_cast<int>() is the parsing function here
void show_square() {
    using namespace std;
    assert(cin); // precondition for show_square()
    cout << "Enter a number: ";
    string line;
    if (!getline(cin, line)) { // EOF on cin
        // error handling omitted; that EOF will not be reached is considered
        // part of the precondition for this function for the sake of example
        //
        // note: the Python version below throws an EOFError from raw_input
        // in this case, and handling this situation is the only difference
        // between the two
    }
    int n;
    try {
        n = boost::lexical_cast<int>(line);
        // lexical_cast returns an int;
        // if line == "abc", it obviously cannot meet that postcondition
    }
    catch (boost::bad_lexical_cast&) {
        cout << "I can't do that, Dave.\n";
        return;
    }
    cout << n * n << '\n';
}
Unfortunately, this shows two examples of how C++'s scoping requires you to break RAII/SBRM. An example in Python which doesn't have that problem and shows something I wish C++ had – try-else:
# int() is the parsing "function" here
def show_square():
    line = raw_input("Enter a number: ")  # same precondition as above
    # however, here raw_input will throw an exception instead of us
    # using assert
    try:
        n = int(line)
    except ValueError:
        print "I can't do that, Dave."
    else:
        print n * n
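For completeness, C++17 later gained a non-throwing parsing primitive, std::from_chars, which reports failure in its return value rather than by exception, sidestepping the scoping awkwardness above. A sketch of how the parsing step could look with it (parse_int is my name, not part of the answer):

```cpp
#include <cassert>
#include <charconv>
#include <optional>
#include <string>

// A non-throwing alternative for the parsing step above, using
// std::from_chars (C++17): failure is in the return value, so the
// caller branches instead of catching.
std::optional<int> parse_int(const std::string& line) {
    int n = 0;
    auto [ptr, ec] = std::from_chars(line.data(), line.data() + line.size(), n);
    if (ec != std::errc{} || ptr != line.data() + line.size())
        return std::nullopt;  // not a (whole) integer
    return n;
}
```

The caller writes `if (auto n = parse_int(line)) { /* use *n */ }` with no try/catch, at the cost of a result type the caller cannot forget to inspect.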
Preconditions
Preconditions don't strictly have to be checked – violating one always indicates a logic failure, and they are the caller's responsibility – but if you do check them, then throwing an exception is appropriate. (In some cases it's more appropriate to return garbage or crash the program; though those actions can be horribly wrong in other contexts. How to best handle undefined behavior is another topic.)
In particular, contrast the std::logic_error and std::runtime_error branches of the stdlib exception hierarchy. The former is often used for precondition violations, while the latter is more suited for postcondition violations.
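A small sketch of that convention, using a hypothetical isqrt whose precondition violation is reported as std::invalid_argument (which derives from std::logic_error, marking it as the caller's bug rather than a runtime condition):

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical example: integer square root.
// Precondition: x >= 0. Violating it is a logic error in the caller,
// so the check throws from the std::logic_error branch of the hierarchy.
int isqrt(int x) {
    if (x < 0)
        throw std::invalid_argument("isqrt: negative input"); // precondition
    int r = 0;
    while ((r + 1) * (r + 1) <= x) ++r;
    return r;  // postcondition: r*r <= x < (r+1)*(r+1)
}

// Helper for the demo: is the precondition actually enforced?
bool rejects_negative() {
    try { isqrt(-1); return false; }
    catch (const std::invalid_argument&) { return true; }
}
```

A postcondition failure (say, an allocation failing inside a function) would instead be reported via the std::runtime_error branch, keeping the two kinds of failure distinguishable to handlers.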
Expensive: kernel calls (or other system API invocations) may be needed to manage the kernel's (system's) signal interfaces.
Hard to analyze: Many of the problems of the goto statement apply to exceptions. They jump over potentially large amounts of code, often in multiple routines and source files. This is not always apparent from reading the intermediate source code. (It is in Java.)
Not always anticipated by intermediate code: The code that gets jumped over may or may not have been written with the possibility of an exceptional exit in mind. Even if it was originally written that way, it may not have been maintained with that in mind. Think: memory leaks, file descriptor leaks, socket leaks, who knows?
Maintenance complications: It's harder to maintain code that jumps around processing exceptions.
Throwing an exception is, to some extent, similar to a goto statement. Use it for flow control, and you end up with incomprehensible spaghetti code. Even worse, in some cases you do not even know where exactly the jump goes (i.e. if you are not catching the exception in the given context). This blatantly violates the principle of least surprise, which underpins maintainability.
Exceptions make it harder to reason about the state of your program. In C++, for instance, you have to do extra thinking to ensure your functions are strongly exception-safe, beyond what you would have to do if they didn't need to be.
The reason is that without exceptions, a function call can either return, or it can terminate the program first. With exceptions, a function call can either return, or it can terminate the program, or it can jump to a catch block somewhere. So you can no longer follow the flow of control just by looking at the code in front of you. You need to know if the functions called can throw. You may need to know what can be thrown and where it's caught, depending on whether you care where control goes, or only care that it leaves the current scope.
For this reason, people say "don't use exceptions unless the situation is really exceptional". When you get down to it, "really exceptional" means "some situation has occurred where the benefits of handling it with an error return value are outweighed by the costs". So yes, this is something of an empty statement, although once you have some instincts for "really exceptional", it becomes a good rule of thumb. When people talk about flow control, they mean that the ability to reason locally (without reference to catch blocks) is a benefit of return values.
Java has a wider definition of "really exceptional" than C++. C++ programmers are more likely to want to look at the return value of a function than Java programmers, so in Java "really exceptional" might mean "I can't return a non-null object as the result of this function". In C++, it's more likely to mean "I very much doubt my caller can continue". So a Java stream throws if it can't read a file, whereas a C++ stream (by default) returns a value indicating error. In all cases, though, it is a matter of what code you are willing to force your caller to have to write. So it is indeed a matter of coding style: you have to reach a consensus what your code should look like, and how much "error-checking" code you want to write against how much "exception-safety" reasoning you want to do.
The broad consensus across all languages seems to be that this is best done in terms of how recoverable the error is likely to be (since unrecoverable errors result in no code with exceptions, but still need a check-and-return-your-own-error in code which uses error returns). So people come to expect "this function I call throws an exception" to mean "I can't continue", not just "it can't continue". That's not inherent in exceptions, it's just a custom, but like any good programming practice, it's a custom advocated by smart people who've tried it the other way and not enjoyed the results. I too have had bad experiences throwing too many exceptions. So personally, I do think in terms of "really exceptional", unless something about the situation makes an exception particularly attractive.
Btw, quite aside from reasoning about the state of your code, there are also performance implications. Exceptions are usually cheap now, in languages where you're entitled to care about performance. They can be faster than multiple levels of "oh, the result's an error, I'd best exit myself with an error too, then". In the bad old days, there were real fears that throwing an exception, catching it, and carrying on with the next thing, would make what you're doing so slow as to be useless. So in that case, "really exceptional" means, "the situation is so bad that horrific performance no longer matters". That's no longer the case (although an exception in a tight loop is still noticeable) and hopefully indicates why the definition of "really exceptional" needs to be flexible.
There really is no consensus. The whole issue is somewhat subjective, because the "appropriateness" of throwing an exception is often suggested by existing practices within the standard library of the language itself. The C++ standard library throws exceptions a lot less frequently than say, the Java standard library, which almost always prefers exceptions, even for expected errors such as invalid user input (e.g. Scanner.nextInt). This, I believe, significantly influences developer opinions about when it is appropriate to throw an exception.
As a C++ programmer, I personally prefer to reserve exceptions for very "exceptional" circumstances, e.g. out of memory, out of disk-space, the apocalypse happened, etc. But I don't insist that this is the absolute correct way to do things.
I don't think that exceptions should rarely be used. But:
Not all teams and projects are ready to use exceptions. Using exceptions requires highly qualified programmers, special techniques, and the absence of a big legacy of non-exception-safe code. If you have a huge old codebase, it is almost never exception-safe, and I'm sure you do not want to rewrite it.
If you are going to use exceptions extensively, then:
be prepared to teach your people what exception safety is
you should not use raw memory management
use RAII extensively
On the other hand, using exceptions in new projects with a strong team may make code cleaner, easier to maintain, and even faster:
you will not miss or ignore errors
you don't have to write checks of return codes at low level without actually knowing what to do with a bad code there
when you are forced to write exception-safe code, it becomes more structured
EDIT 11/20/2009:
I was just reading this MSDN article on improving managed code performance and this part reminded me of this question:
The performance cost of throwing an exception is significant. Although structured exception handling is the recommended way of handling error conditions, make sure you use exceptions only in exceptional circumstances when error conditions occur. Do not use exceptions for regular control flow.
Of course, this is only for .NET, and it's also directed specifically at those developing high-performance applications (like myself); so it's obviously not a universal truth. Still, there are a lot of us .NET developers out there, so I felt it was worth noting.
EDIT:
OK, first of all, let's get one thing straight: I have no intention of picking a fight with anyone over the performance question. In general, in fact, I am inclined to agree with those who believe premature optimization is a sin. However, let me just make two points:
The poster is asking for an objective rationale behind the conventional wisdom that exceptions should be used sparingly. We can discuss readability and proper design all we want; but these are subjective matters with people ready to argue on either side. I think the poster is aware of this. The fact is that using exceptions to control program flow is often an inefficient way of doing things. No, not always, but often. This is why it's reasonable advice to use exceptions sparingly, just like it's good advice to eat red meat or drink wine sparingly.
There is a difference between optimizing for no good reason and writing efficient code. The corollary to this is that there's a difference between writing something that is robust, if not optimized, and something that is just plain inefficient. Sometimes I think when people argue over things like exception handling they're really just talking past each other, because they are discussing fundamentally different things.
To illustrate my point, consider the following C# code examples.
Example 1: Detecting invalid user input
This is an example of what I'd call exception abuse.
int value = -1;
string input = GetInput();
bool inputChecksOut = false;
while (!inputChecksOut) {
    try {
        value = int.Parse(input);
        inputChecksOut = true;
    } catch (FormatException) {
        input = GetInput();
    }
}
This code is, to me, ridiculous. Of course it works. No one's arguing with that. But it should be something like:
int value = -1;
string input = GetInput();
while (!int.TryParse(input, out value)) {
    input = GetInput();
}
Example 2: Checking for the existence of a file
I think this scenario is actually very common. It certainly seems a lot more "acceptable" to a lot of people, since it deals with file I/O:
string text = null;
string path = GetInput();
bool inputChecksOut = false;
while (!inputChecksOut) {
    try {
        using (FileStream fs = new FileStream(path, FileMode.Open)) {
            using (StreamReader sr = new StreamReader(fs)) {
                text = sr.ReadToEnd();
            }
        }
        inputChecksOut = true;
    } catch (FileNotFoundException) {
        path = GetInput();
    }
}
This seems reasonable enough, right? We're trying to open a file; if it's not there, we catch that exception and try to open a different file... What's wrong with that?
Nothing, really. But consider this alternative, which doesn't throw any exceptions:
string text = null;
string path = GetInput();
while (!File.Exists(path)) path = GetInput();
using (FileStream fs = new FileStream(path, FileMode.Open)) {
    using (StreamReader sr = new StreamReader(fs)) {
        text = sr.ReadToEnd();
    }
}
Of course, if the performance of these two approaches were actually the same, this really would be purely a doctrinal issue. So, let's take a look. For the first code example, I made a list of 10000 random strings, none of which represented a proper integer, and then added a valid integer string onto the very end. Using both of the above approaches, these were my results:
Using try/catch block: 25.455 seconds
Using int.TryParse: 1.637 milliseconds
For the second example, I did basically the same thing: made a list of 10000 random strings, none of which was a valid path, then added a valid path onto the very end. These were the results:
Using try/catch block: 29.989 seconds
Using File.Exists: 22.820 milliseconds
A lot of people would respond to this by saying, "Yeah, well, throwing and catching 10,000 exceptions is extremely unrealistic; this exaggerates the results." Of course it does. The difference between throwing one exception and handling bad input on your own is not going to be noticeable to the user. The fact remains that using exceptions is, in these two cases, 1,000 to over 10,000 times slower than the alternative approaches, which are just as readable -- if not more so.
That's why I included the example of the GetNine() method below. It isn't that it's intolerably slow or unacceptably slow; it's that it's slower than it should be... for no good reason.
Again, these are just two examples. Of course there will be times when the performance hit of using exceptions is not this severe (Pavel's right; after all, it does depend on the implementation). All I'm saying is: let's face the facts, guys -- in cases like the one above, throwing and catching an exception is analogous to GetNine(); it's just an inefficient way of doing something that could easily be done better.
You are asking for a rationale as if this is one of those situations where everyone's jumped on a bandwagon without knowing why. But in fact the answer is obvious, and I think you know it already. Exception handling has horrendous performance.
OK, maybe it's fine for your particular business scenario, but relatively speaking, throwing/catching an exception introduces way more overhead than is necessary in many, many cases. You know it, I know it: most of the time, if you're using exceptions to control program flow, you're just writing slow code.
You might as well ask: why is this code bad?
private int GetNine() {
    for (int i = 0; i < 10; i++) {
        if (i == 9) return i;
    }
    return -1; // never reached, but required for the code to compile
}
I would bet that if you profiled this function you'd find it performs quite acceptably fast for your typical business application. That doesn't change the fact that it's a horribly inefficient way of accomplishing something that could be done a lot better.
That's what people mean when they talk about exception "abuse."
All of the rules of thumb about exceptions come down to subjective terms. You shouldn't expect to get hard and fast definitions of when to use them and when not to. "Only in exceptional circumstances". Nice circular definition: exceptions are for exceptional circumstances.
When to use exceptions falls into the same bucket as "how do I know whether this code is one class or two?" It's partly a stylistic issue, partly a preference. Exceptions are a tool. They can be used and abused, and finding the line between the two is part of the art and skill of programming.
There are lots of opinions, and tradeoffs to be made. Find something that speaks to you, and follow it.
It's not that exceptions should rarely be used. It's just that they should only be thrown in exceptional circumstances. For example, if a user enters the wrong password, that's not exceptional.
The reason is simple: exceptions exit a function abruptly and propagate up the stack to a catch block. This process can be computationally expensive: C++ implementations build their exception machinery to add little overhead to "normal" function calls, so when an exception is raised, they have to do a lot of work to find where to go. Moreover, every line of code could potentially raise an exception. If we have some function f that raises exceptions often, we now have to take care to put try/catch blocks around every call of f. That's pretty bad interface/implementation coupling.
I mentioned this issue in an article on C++ exceptions.
The relevant part:
Almost always, using exceptions to affect the "normal" flow is a bad idea. As we already discussed in section 3.1, exceptions generate invisible code paths. These code paths are arguably acceptable if they get executed only in the error handling scenarios. However, if we use exceptions for any other purpose, our "normal" code execution is divided into a visible and invisible part and it makes code very hard to read, understand and extend.
My approach to error handling is that there are three fundamental types of errors:
An odd situation that can be handled at the error site. This might be if a user inputs an invalid input at a command line prompt. The correct behavior is simply to complain to the user and loop in this case. Another situation might be a divide-by-zero. These situations aren't really error situations, and are usually caused by faulty input.
A situation like the previous kind, but one that can't be handled at the error site. For instance, if you have a function that takes a filename and parses the file with that name, it might not be able to open the file. In this case, it can't deal with the error. This is when exceptions shine. Rather than use the C approach (return an invalid value as a flag and set a global error variable to indicate the problem), the code can instead throw an exception. The calling code will then be able to deal with the exception - for instance to prompt the user for another filename.
A situation that Should Not Happen. This is when a class invariant is violated, or a function receives an invalid parameter or the like. This indicates a logic failure within the code. Depending on the level of failure, an exception may be appropriate, or forcing immediate termination may be preferable (as assert does). Generally, these situations indicate that something has broken somewhere in the code, and you effectively cannot trust anything else to be correct - there may be rampant memory corruption. Your ship is sinking, get off.
To paraphrase, exceptions are for when you have a problem you can deal with, but you can't deal with at the place you notice it. Problems you can't deal with should simply kill the program; problems you can deal with right away should simply be dealt with.
I read some of the answers here.
I'm still amazed at what all this confusion is about.
I strongly disagree with all this exceptions == spaghetti code.
By confusion I mean that there are people who don't appreciate C++ exception handling.
I'm not certain how I learned about C++ exception handling, but I understood the implications within minutes.
This was around 1996, and I was using the Borland C++ compiler for OS/2.
I never had a problem deciding when to use exceptions.
I usually wrap fallible do-undo actions into C++ classes.
Such do-undo actions include:
creating/destroying a system handle (for files, memory maps, WIN32 GUI handles, sockets, and so on)
setting/unsetting handlers
allocating/deallocating memory
claiming/releasing a mutex
incrementing/decrementing a reference count
showing/hiding a window
Then there are functional wrappers: they wrap system calls (which do not fall into the former category) in C++, e.g. reading from / writing to a file.
If something fails, an exception will be thrown, which contains full information about the error.
Then there is catching/rethrowing exceptions to add more information to a failure.
Overall, C++ exception handling leads to cleaner code.
The amount of code is drastically reduced.
Finally, one can use a constructor to allocate fallible resources and still maintain a corruption-free environment after such a failure.
One can chain such classes into complex classes.
Once a constructor of some member/base object has executed, one can rely on all other constructors of the same object (executed before it) having executed successfully.
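A minimal sketch of one such do-undo wrapper, here for a C FILE* handle (the class is illustrative, not the author's actual code): the constructor acquires and throws with context on failure, and the destructor releases.

```cpp
#include <cassert>
#include <cstdio>
#include <stdexcept>
#include <string>

// Do-undo wrapper for a C stdio file handle: acquire in the
// constructor (throwing with full context on failure), release in
// the destructor, so the handle can never leak past the wrapper.
class File {
    std::FILE* f_;
public:
    File(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
        if (!f_)
            throw std::runtime_error(std::string("cannot open ") + path);
    }
    ~File() { std::fclose(f_); }
    File(const File&) = delete;             // handle is uniquely owned
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};

// Helper for the demo: opening a missing file throws instead of
// handing back a zombie handle.
bool open_fails(const char* path) {
    try { File f(path, "rb"); return false; }
    catch (const std::runtime_error&) { return true; }
}
```

Because the constructor either fully succeeds or throws, a File member inside a larger class participates in the chaining the answer describes: if a later member's constructor throws, the already-constructed File is destructed automatically.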
Exceptions are a very unusual method of flow control compared to the traditional constructs. The normal control-flow constructs (loops, ifs, function calls, etc.) can handle all the normal situations. If you find yourself reaching for an exception for a routine occurrence, then perhaps you need to reconsider how your code is structured.
But there are certain types of errors that cannot be handled easily with the normal constructs. Catastrophic failures (like resource allocation failure) can be detected at a low level but probably can't be handled there, so a simple if statement is inadequate. These types of failures generally need to be handled at a much higher level (e.g., save the file, log the error, quit). Trying to report an error like this through traditional methods (like return values) is tedious and error-prone. Furthermore, it injects overhead into layers of mid-level APIs to handle this bizarre, unusual failure. The overhead distracts clients of these APIs and requires them to worry about issues that are beyond their control. Exceptions provide a way to do non-local handling for big errors that is mostly invisible to all the layers between the detection of the problem and the handler for it.
If a client calls ParseInt with a string, and the string doesn't contain an integer, then the immediate caller probably cares about the error and knows what to do about it. So you'd design ParseInt to return a failure code for something like that.
On the other hand, if ParseInt fails because it couldn't allocate a buffer because memory is horribly fragmented, then the caller isn't going to know what to do about that. It would have to bubble this unusual error up and up to some layer that deals with these fundamental failures. That taxes everyone in between (because they have to accommodate the error passing mechanism in their own APIs). An exception makes it possible to skip over those layers (while still ensuring necessary clean-up happens).
When you're writing low-level code, it can be hard to decide when to use traditional methods and when to throw exceptions. The low-level code has to make the decision (throw or not). But it's the higher level code that truly knows what's expected and what's exceptional.
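A sketch of that split in C++: the expected failure ("not an integer") lives in the return type, while a truly exceptional failure such as std::bad_alloc would simply propagate past this function to whatever layer handles fundamental errors. The function is illustrative (it parses unsigned decimals only) and is not a real library's ParseInt:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Expected failure -> return value; exceptional failure -> exception.
// The caller branches on the optional; an out-of-memory error thrown
// by std::string machinery would bubble past both this function and
// its immediate caller without widening either interface.
std::optional<int> parse_uint(const std::string& s) {
    if (s.empty()) return std::nullopt;
    int value = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return std::nullopt;  // caller handles this
        value = value * 10 + (c - '0');
    }
    return value;
}
```

The immediate caller gets the error it knows what to do with; only the errors it cannot act on travel further up the stack.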
There's several reasons in C++.
First, it's frequently hard to see where exceptions are coming from (since they can be thrown from almost anything) and so the catch block is something of a COME FROM statement. It's worse than a GO TO, since in a GO TO you know where you're coming from (the statement, not some random function call) and where you're going (the label). They're basically a potentially resource-safe version of C's setjmp() and longjmp(), and nobody wants to use those.
Second, C++ doesn't have garbage collection built in, so C++ classes that own resources get rid of them in their destructors. Therefore, in C++ exception handling the system has to run all the destructors in scope. In languages with GC and no real constructors, like Java, throwing exceptions is a lot less burdensome.
Third, the C++ community, including Bjarne Stroustrup and the Standards Committee and various compiler writers, has been assuming that exceptions should be exceptional. In general, it's not worth going against language culture. The implementations are based on the assumption that exceptions will be rare. The better books treat exceptions as exceptional. Good source code uses few exceptions. Good C++ developers treat exceptions as exceptional. To go against that, you'd want a good reason, and all the reasons I see are on the side of keeping them exceptional.
This is a bad example of using exceptions as control flow:
int getTotalIncome(int incomeType) {
    int totalIncome = 0;
    try {
        totalIncome = calculateIncomeAsTypeA();
    } catch (IncorrectIncomeTypeException& e) {
        totalIncome = calculateIncomeAsTypeB();
    }
    return totalIncome;
}
Which is very bad, but you should be writing:
int getTotalIncome(int incomeType) {
    int totalIncome = 0;
    if (incomeType == A) {
        totalIncome = calculateIncomeAsTypeA();
    } else if (incomeType == B) {
        totalIncome = calculateIncomeAsTypeB();
    }
    return totalIncome;
}
This second example obviously needs some refactoring (like using the design pattern strategy), but illustrates well that exceptions are not meant for control flow.
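One hedged sketch of such a refactoring: dispatching through a table of calculators instead of either exceptions or a cascading if/else chain (the calculator bodies are stand-ins invented for the demo):

```cpp
#include <cassert>
#include <functional>
#include <map>

enum class IncomeType { A, B };

// Stand-in calculators; the real ones would do actual work.
int calculateIncomeAsTypeA() { return 100; }
int calculateIncomeAsTypeB() { return 200; }

// Table-driven dispatch: adding a new income type means adding one
// table entry, not another else-if branch or another catch block.
int getTotalIncome(IncomeType t) {
    static const std::map<IncomeType, std::function<int()>> calculators = {
        {IncomeType::A, calculateIncomeAsTypeA},
        {IncomeType::B, calculateIncomeAsTypeB},
    };
    return calculators.at(t)();  // at() throws only on a genuine logic error
}
```

Note that the one exception left here, std::out_of_range from at(), now correctly signals a programming error (an unregistered type) rather than normal control flow.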
Exceptions also have some performance penalties associated with them, but performance concerns should follow the rule: "premature optimization is the root of all evil".
Maintainability: As mentioned by people above, throwing exceptions at a drop of a hat is akin to using gotos.
Interoperability: You can't interface C++ libraries with C/Python modules (at least not easily) if you are using exceptions.
Performance degradation: RTTI is used to find the actual type of the exception, which imposes additional overhead. Thus exceptions are not suitable for handling commonly occurring use cases (e.g. the user entered an int instead of a string).
I would say that exceptions are a mechanism to get you out of the current context (out of the current stack frame in the simplest sense, but it's more than that) in a safe way. It's the closest thing structured programming has to a goto. To use exceptions in the way they were intended, you have to be in a situation where you can't continue what you're doing now and you can't handle it at the point where you are now. So, for example, when a user's password is wrong, you can continue by returning false. But if the UI subsystem reports that it can't even prompt the user, simply returning "login failed" would be wrong. The current level of code simply does not know what to do. So it uses the exception mechanism to delegate responsibility to someone above who may know what to do.
One very practical reason is that when debugging a program I often flip on First Chance Exceptions (Debug -> Exceptions) to debug an application. If there are a lot of exceptions happening it's very difficult to find where something has gone "wrong".
Also, it leads to some anti-patterns like the infamous "catch throw" and obfuscates the real problems. For more information on that see a blog post I made on the subject.
I prefer to use exceptions as little as possible. Exceptions force the developer to handle some condition that may or may not be a real error, and it is often unclear whether the exception in question signals a fatal problem or a problem that must be handled immediately.
The counter argument to that is it just requires lazy people to type more in order to shoot themselves in their feet.
Google's coding policy says to never use exceptions, especially in C++. Your application either is prepared to handle exceptions or it isn't. If it isn't, then an exception will probably propagate up until your application dies.
It's never fun to find out some library you have used throws exceptions and you were not prepared to handle them.
Legitimate case to throw an exception:
You try to open a file, it's not there, a FileNotFoundException is thrown;
Illegitimate case:
You want to do something only if a file doesn't exist, you try to open the file, and then add some code to the catch block.
I use exceptions when I want to break the flow of the application up to a certain point. This point is where the catch(...) for that exception is. For example, it's very common that we have to process a load of projects, and each project should be processed independently of the others. So the loop that processes the projects has a try...catch block, and if some exception is thrown during the processing of a project, everything is rolled back for that project, the error is logged, and the next project is processed. Life goes on.
I think you should use exceptions for things like a file that doesn't exist, an expression that is invalid, and similar situations. You should not use exceptions for range testing, data-type testing, file existence, or anything else for which there is an easy/cheap alternative, because this sort of logic makes the code hard to understand:
RecordIterator<MyObject> ri = createRecordIterator();
try {
MyObject myobject = ri.next();
} catch(NoSuchElement exception) {
// Object doesn't exist, will create it
}
This would be better:
RecordIterator<MyObject> ri = createRecordIterator();
if (ri.hasNext()) {
// It exists!
MyObject myobject = ri.next();
} else {
// Object doesn't exist, will create it
}
COMMENT ADDED TO THE ANSWER:
Maybe my example wasn't very good - ri.next() should not throw an exception in the second example, and if it does, something really exceptional happened and some other action should be taken somewhere else. When example 1 is heavily used, developers will catch a generic exception instead of the specific one and assume the exception is due to the error they're expecting, when it may be due to something else. In the end, this leads to real exceptions being ignored, as exceptions become part of the application flow rather than an exception to it.
The comments on this may add more than my answer itself.
Basically, exceptions are an unstructured and hard to understand form of flow control. This is necessary when dealing with error conditions that are not part of the normal program flow, to avoid having error handling logic clutter up the normal flow control of your code too much.
IMHO exceptions should be used when you want to provide a sane default in case the caller neglects to write error handling code, or if the error might best be handled further up the call stack than the immediate caller. The sane default is to exit the program with a reasonable diagnostic error message. The insane alternative is that the program limps along in an erroneous state and crashes or silently produces bad output at some later, harder to diagnose point. If the "error" is enough a normal part of program flow that the caller could not reasonably forget to check for it, then exceptions should not be used.
I think "use it rarely" is not the right phrase. I would prefer "throw only in exceptional situations".
Many have explained why exceptions should not be used in normal situations. Exceptions have their place in error handling, and purely in error handling.
I will focus on another point:
Another thing is the performance issue. Compilers struggled for a long time to make exceptions fast. I am not sure what the exact state is now, but if you use exceptions for control flow, you will run into another problem: your program will become slow!
The reason is that exceptions are not only very mighty goto statements, they also have to unwind the stack for all the frames they leave. Implicitly, the objects on the stack have to be destructed, and so on. So without being aware of it, a single throw of an exception gets a whole lot of machinery involved. The processor has a mighty lot of work to do.
So you end up elegantly burning your processor cycles without knowing it.
So: use exceptions only in exceptional cases -- meaning: when real errors occur!
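A small sketch of what this answer means by "unwinding the stack": every frame between the throw and the catch has its local objects destroyed, and that is real work done on every throw. Tracer, deep, and countUnwinds are illustrative names invented for this example:

```cpp
#include <stdexcept>

static int destructions = 0;

struct Tracer {
    ~Tracer() { ++destructions; }   // runs during stack unwinding
};

// Each recursive call puts one Tracer on the stack; the throw at the
// bottom forces the runtime to destroy every one of them on the way up.
void deep(int depth) {
    Tracer t;
    if (depth == 0) throw std::runtime_error("boom");
    deep(depth - 1);
}

int countUnwinds() {
    destructions = 0;
    try { deep(10); } catch (const std::runtime_error&) {}
    return destructions;   // one destructor per unwound frame
}
```

Using this as a return statement in a hot loop means paying that unwinding cost on every iteration, which is the performance trap the answer warns about.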
The purpose of exceptions is to make software fault tolerant. However, having to provide a response to every exception thrown by a function leads to suppression. Exceptions are just a formal structure forcing programmers to acknowledge that certain things can go wrong in a routine, and that the client programmer needs to be aware of these conditions and cater for them as necessary.
To be honest, exceptions are a kludge added to programming languages to give developers a formal requirement that shifts the responsibility of handling error cases from the immediate developer to some future developer.
I believe that a good programming language does not support exceptions as we know them in C++ and Java. You should opt for programming languages that can provide alternative flow for all sorts of return values from functions. The programmer should be responsible for anticipating all forms of output of a routine and, if I could have my way, handling them in a separate code file.
I use exceptions if:
- an error occurred that cannot be recovered from locally, AND
- if the error is not recovered from, the program should terminate.
If the error can be recovered from (the user entered "apple" instead of a number) then recover (ask for the input again, change to default value, etc.).
If the error cannot be recovered from locally but the application can continue (the user tried to open a file but the file does not exist) then an error code is appropriate.
If the error cannot be recovered from locally and the application cannot continue without recovering (you are out of memory/disk space/etc.), then an exception is the right way to go.
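The three cases above can be sketched in C++; the function names and the "config" scenario here are made up for illustration, but each one uses the mechanism the answer recommends for its case:

```cpp
#include <charconv>
#include <optional>
#include <stdexcept>
#include <string>

// Case 1: recoverable locally (the user typed "apple" instead of a number):
// recover with a default value, no exception involved.
int parseOrDefault(const std::string& s, int fallback) {
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    return ec == std::errc() ? value : fallback;
}

// Case 2: not recoverable locally, but the application can continue
// (the file does not exist): report through the return value.
std::optional<std::string> loadConfig(bool fileExists) {
    if (!fileExists) return std::nullopt;   // "not found" reported, not thrown
    return "config-data";
}

// Case 3: cannot continue without recovering (out of resources): throw.
void reserveResources(bool exhausted) {
    if (exhausted) throw std::runtime_error("out of resources");
}
```

The dividing line is who can act on the failure: the local code, the caller, or nobody short of shutting down.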
Who said they should be used conservatively? Just never use exceptions for flow control, and that's it.
And who cares about the cost of an exception once it has already been thrown?
My two cents:
I like to use exceptions because they allow me to program as if no errors will happen. So my code remains readable, not scattered with all kinds of error handling. Of course, the error handling (exception handling) is moved to the end (the catch block) or is considered the responsibility of the calling level.
A great example for me is either file handling or database handling. Presume everything is OK, and close your file at the end or when an exception occurs. Or roll back your transaction when an exception occurs.
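In C++ the usual way to get "roll back when an exception occurs" is RAII: a guard object whose destructor runs during stack unwinding, so cleanup happens whether the function leaves normally or via a throw. The Transaction type below is a hypothetical sketch, not a real database API:

```cpp
#include <stdexcept>

// Minimal RAII guard: rolls back in the destructor unless commit() was
// reached. The rollback is recorded through an external flag so the
// effect is observable after the guard is destroyed.
struct Transaction {
    bool* rolledBack;
    bool committed = false;
    explicit Transaction(bool* flag) : rolledBack(flag) {}
    void commit() { committed = true; }
    ~Transaction() { if (!committed) *rolledBack = true; }
};

bool transfer(bool failMidway, bool* rolledBack) {
    try {
        Transaction tx(rolledBack);
        if (failMidway) throw std::runtime_error("db error");
        tx.commit();     // reached only on the happy path
        return true;
    } catch (const std::runtime_error&) {
        return false;    // ~Transaction already performed the rollback
    }
}
```

Note that the happy path reads top to bottom with no cleanup code in the way, which is exactly the readability benefit described above.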
The problem with exceptions is that they quickly get very verbose. They were meant to let your code stay readable and focus on the normal flow of things, but if used consistently, almost every function call needs to be wrapped in a try/catch block, and that starts to defeat the purpose.
For a ParseInt as mentioned before, I like the idea of exceptions. Just return the value; if the parameter is not parseable, throw an exception. It makes your code cleaner on the one hand. At the calling level, you need to do something like
try
{
    b = ParseInt(some_read_string);
}
catch (ParseIntException &e)
{
    // use some default value instead
    b = 0;
}
The code is clean. When I get ParseInt calls like this scattered all over, I make wrapper functions that handle the exception and return a default value. E.g.
int ParseIntWithDefault(const std::string &stringToConvert, int default_value = 0)
{
    int result = default_value;
    try
    {
        result = ParseInt(stringToConvert);
    }
    catch (ParseIntException &e) {}
    return result;
}
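An alternative to a wrapper like the one above is to make the parse itself non-throwing and let each caller pick its own default with value_or. This sketch uses std::from_chars (C++17), which reports failure through an error code instead of an exception; tryParseInt is a name invented here:

```cpp
#include <charconv>
#include <optional>
#include <string>

// Returns the parsed integer, or std::nullopt if the whole string
// is not a valid number. No try/catch needed anywhere.
std::optional<int> tryParseInt(const std::string& s) {
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc() || ptr != s.data() + s.size()) return std::nullopt;
    return value;
}
```

A call site then reads `int b = tryParseInt(some_read_string).value_or(0);`, keeping the "default on failure" decision at the caller without any catch block.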
So to conclude: what I missed throughout the discussion was the fact that exceptions allow me to make my code easier/more readable, because I can ignore the error conditions more. Problems:
the exceptions still need to be handled somewhere. Extra problem: C++ has no enforced way to declare which exceptions a function might throw (the way Java's checked exceptions do); dynamic exception specifications were deprecated in C++11 and removed in C++17. So the calling level is not aware of which exceptions might need to be handled.
sometimes code can get very verbose, if every function needs to be wrapped in a try/catch block. But sometimes this still makes sense.
So that makes it hard to find a good balance sometimes.
I'm sorry but the answer is "they are called exceptions for a reason." That explanation is a "rule of thumb". You can't give a complete set of circumstances under which exceptions should or should not be used because what a fatal exception (English definition) is for one problem domain is normal operating procedure for a different problem domain. Rules of thumb are not designed to be followed blindly. Instead they are designed to guide your investigation of a solution. "They are called exceptions for a reason" tells you that you should determine ahead of time what is a normal error the caller can handle and what is an unusual circumstance the caller cannot handle without special coding (catch blocks).
Just about every rule of programming is really a guideline saying "Don't do this unless you have a really good reason": "Never use goto", "Avoid global variables", "Regular expressions pre-increment your number of problems by one", etc. Exceptions are no exception....