Note: I'm not playing the devil's advocate or anything like that here - I'm just genuinely curious since I'm not in this camp myself.
Most types in the standard library have either mutating functions that can throw exceptions (for instance, if memory allocation fails) or non-mutating functions that can throw exceptions (for instance, out-of-bounds indexed accessors). In addition, many free functions can throw exceptions (for instance, operator new and dynamic_cast<T&>).
How do you practically deal with this in the context of "we don't use exceptions"?
Are you trying to never call a function that can throw? (I can't see how that'd scale, so I'm very interested to hear how you accomplish this if this is the case)
Are you ok with the standard library throwing and you treat "we don't use exceptions" as "we never throw exceptions from our code and we never catch exceptions from other's code"?
Are you disabling exception handling altogether via compiler switches? If so, how do the exception-throwing parts of the standard library work?
EDIT: Can your constructors fail, or do you by convention use two-step construction with a dedicated init function that can return an error code upon failure (which the constructor can't), or do you do something else?
EDIT: A minor clarification, one week after the inception of the question... Much of the content in the comments and answers below focuses on the why aspects of exceptions vs. "something else". My interest is not in that, but in this: when you choose to do "something else", how do you deal with the parts of the standard library that do throw exceptions?
I will answer for myself and my corner of the world. I write C++14 (it will be C++17 once compilers have better support), latency-critical financial apps that process gargantuan amounts of money and can't ever go down. The rule set is:
no exceptions
no rtti
no runtime dispatch
(almost) no inheritance
Memory is pooled and pre-allocated, so there are no malloc calls after initialization. Data structures are either immortal or trivially copyable, so destructors are nearly absent (there are some exceptions, such as scope guards). Basically, we are doing C + type safety + templates + lambdas. Of course, exceptions are disabled via the compiler switch. As for the STL, the good parts of it (i.e. algorithm, numeric, type_traits, iterator, atomic, ...) are all usable. The exception-throwing parts coincide nicely with the runtime-memory-allocating parts and the semi-OO parts, so we get rid of all the cruft in one go: streams, all containers except std::array, and std::string.
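To illustrate the pooling style, a minimal sketch (assumptions: single-threaded use, fixed capacity, and alignment ignored for brevity):

#include <cstddef>
#include <new>

// Bump-pointer arena: one allocation at startup, no malloc afterwards;
// exhaustion is reported with a null pointer, not an exception.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : begin_(static_cast<char*>(::operator new(bytes))),
          cur_(begin_), end_(begin_ + bytes) {}
    ~Arena() { ::operator delete(begin_); }

    void* allocate(std::size_t n) {
        if (static_cast<std::size_t>(end_ - cur_) < n) return nullptr;
        void* p = cur_;
        cur_ += n;
        return p;
    }

private:
    char* begin_;
    char* cur_;
    char* end_;
};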
Why do this?
Because, like OO, exceptions offer illusory cleanliness by hiding or moving the problem elsewhere, and they make the rest of the program harder to diagnose. When you compile without -fno-exceptions (i.e. with exceptions enabled), all your clean and nicely behaved functions have to endure the suspicion of being failable. It is much easier to have extensive sanity checking around the perimeter of your codebase than to make every operation failable.
Because exceptions are basically long-range gotos with an unspecified destination. You won't use longjmp(), but exceptions are arguably much worse.
Because error codes are superior. You can use [[nodiscard]] to force calling code to check.
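A minimal sketch of that (the function and error values are hypothetical; [[nodiscard]] is C++17):

#include <cstdio>
#include <cstdlib>

enum class [[nodiscard]] Error { ok, bad_input };

// Hypothetical parser: the compiler warns if the result is ignored.
Error parse_port(const char* s, int& out) {
    if (s == nullptr || *s == '\0') return Error::bad_input;
    out = std::atoi(s);
    return Error::ok;
}

int main() {
    int port = 0;
    if (parse_port("8080", port) != Error::ok) {
        std::fprintf(stderr, "bad port\n");
        return 1;
    }
    // parse_port("80", port);  // warning: ignoring [[nodiscard]] result
    return 0;
}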
Because exception hierarchies are unnecessary. Most of the time it makes little sense to distinguish what errored, and when it does, it's likely because different errors require different clean-up, in which case it would have been much better to signal that explicitly.
Because we have complex invariants to maintain. This means that there is code, however deep down in the bowels, that needs transactional guarantees. There are two ways of achieving this: either you make your imperative procedures as pure as possible (i.e. make sure you never fail), or you have immutable data structures (i.e. make failure recovery possible). If you have immutable data structures, then of course you can have exceptions, but you won't be using them, because you will be using sum types instead. Functional data structures are slow, though, so the other alternative is to have pure functions and do it in an exception-free language such as C, no-except C++, or Rust. No matter how pretty D looks, as long as it isn't cleansed of GC and exceptions, it's a non-option.
Do you ever test your exceptions like you would an explicit code path? What about exceptions that "can never happen"? Of course you don't, and when you actually hit those exceptions you are screwed.
I have seen some "beautiful" exception-neutral code in C++. That is, it performs optimally with no edge cases regardless of whether the code it calls uses exceptions or not. It is really hard to write and, I suspect, tricky to modify if you want to maintain all your exception guarantees. However, I have not seen any "beautiful" code that either throws or catches exceptions. All code that I have seen that interacts with exceptions directly has been universally ugly. The amount of effort that goes into writing exception-neutral code completely dwarfs the amount of effort saved from the crappy code that throws or catches exceptions. "Beautiful" is in quotes because it is not actual beauty: it is usually fossilized, because editing it carries the extra burden of maintaining exception-neutrality. If you don't have unit tests that deliberately and comprehensively misuse exceptions to trigger those edge cases, even "beautiful" exception-neutral code decays into manure.
In our case, we disable exceptions via the compiler (e.g. -fno-exceptions for GCC).
In the case of GCC, libstdc++ uses a macro called _GLIBCXX_THROW_OR_ABORT, which is defined as
#ifndef _GLIBCXX_THROW_OR_ABORT
# if __cpp_exceptions
# define _GLIBCXX_THROW_OR_ABORT(_EXC) (throw (_EXC))
# else
# define _GLIBCXX_THROW_OR_ABORT(_EXC) (__builtin_abort())
# endif
#endif
(you can find it in libstdc++-v3/include/bits/c++config on latest gcc versions).
Then you just have to deal with the fact that thrown exceptions abort the program. You can still catch the signal and print the stack (there is a good answer on SO that explains this), but you had better prevent this kind of thing from happening in the first place (at least in releases).
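For reference, a sketch of that signal trick (not the SO answer itself; it relies on the glibc-specific <execinfo.h>, and symbol names typically require linking with -rdynamic):

#include <csignal>
#include <cstdlib>
#include <execinfo.h>   // glibc-specific
#include <unistd.h>

extern "C" void on_abort(int) {
    void* frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);  // async-signal-safe
    _exit(EXIT_FAILURE);
}

int main() {
    std::signal(SIGABRT, on_abort);
    std::abort();  // with -fno-exceptions, a would-be throw ends up here
}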
If you want an example of avoiding the throwing path in the first place: instead of having something like
try {
Foo foo = mymap.at("foo");
// ...
} catch (const std::exception&) {}
you can do
auto it = mymap.find("foo");
if (it != mymap.end()) {
Foo foo = it->second;
// ...
}
I also want to point out that, when asking about not using exceptions, there's a more general question about the standard library: are you using the standard library at all when you're in one of the "we don't use exceptions" camps?
The standard library is heavy. In some "we don't use exceptions" camps, like many GameDev companies for example, better-suited alternatives to the STL are used, mostly based on EASTL or TTL. These libraries don't use exceptions anyway, because eighth-generation consoles didn't handle them too well (or even at all). For cutting-edge AAA production code, exceptions are too heavy anyway, so it's a win-win scenario in such cases.
In other words, for many programmers, turning exceptions off goes hand in hand with not using the STL at all.
Note: I use exceptions... but I have been forced not to.
Are you trying to never call a function that can throw? (I can't see how that'd scale, so I'm very interested to hear how you accomplish this if this is the case)
This would probably be infeasible, at least on a large scale. Many functions can end up throwing; avoiding them entirely would cripple your code base.
Are you ok with the standard library throwing and you treat "we don't use exceptions" as "we never throw exceptions from our code and we never catch exceptions from other's code"?
You pretty much have to be ok with that... If the library code is going to throw an exception and your code is not going to handle it, termination is the default behaviour.
Are you disabling exception handling altogether via compiler switches? If so, how do the exception-throwing parts of the standard library work?
This is possible (back in the day it was sometimes popular for certain project types); compilers do or may support this, but you will need to consult their documentation for what the results would be (and which language features are supported under those conditions).
In general, when an exception would be thrown, the program would need to abort or otherwise exit. Some coding standards still require this; the JSF coding standard comes to mind (IIRC).
General strategy for those who "don't use exceptions"
Most functions have a set of preconditions that can be checked before the call is made. Check for those. If they are not met, then don't make the call; fall back to whatever the error handling is in that code. For those functions where you can't check that the preconditions are met... there's not much you can do; the program will likely abort.
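For example, a sketch of this check-before-call style (the names are hypothetical):

#include <cstddef>
#include <vector>

// Check the documented precondition instead of letting at() throw.
double priceAt(const std::vector<double>& prices, std::size_t i) {
    if (i >= prices.size()) {
        // precondition not met: fall back to local error handling
        return 0.0;
    }
    return prices[i];
}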
You could look to avoid libraries that throw exceptions - you asked this in the context of the standard library, so this doesn't quite fit the bill, but it remains an option.
Other possible strategies: I know this sounds trite, but pick a language that doesn't use them. C could do nicely...
...crux of my question (your interaction with the standard library, if any), I'm quite interested in hearing about your constructors. Can they fail, or do you by convention use a 2-step construction with a dedicated init function that can return an error code upon failure (which the constructor can't)? Or what's your strategy there?
If constructors are used, there are generally two approaches to indicating failure:
Set an internal error code or enum to indicate the failure and what the failure is. This can be interrogated after the object's construction and appropriate action taken.
Don't use a constructor (or at least only construct what cannot fail in the constructor - if anything) and then use an init() method of some sort to do (or complete) the construction. The member method can then return an error if there is some failure.
The init() technique is generally favored, as it can be chained and scales better than the internal "error" code.
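A sketch of the init() technique (the class and error codes are hypothetical):

#include <cstdio>

class Connection {
public:
    Connection() = default;               // cannot fail
    int init(const char* host) {          // returns 0 on success
        if (host == nullptr || *host == '\0')
            return -1;                    // report instead of throwing
        // ... acquire real resources here ...
        ready_ = true;
        return 0;
    }
    bool ready() const { return ready_; }
private:
    bool ready_ = false;
};

int main() {
    Connection c;
    if (c.init("db.example.com") != 0)
        std::fprintf(stderr, "init failed\n");
}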
Again, these are techniques that come from environments where exceptions do not exist (such as C). Using a language such as C++ without exceptions limits its usability and the usefulness of the breadth of the standard library.
Without trying to fully answer the questions you have asked, I will just give Google as an example of a code base that does not use exceptions as its mechanism for dealing with errors.
In the Google C++ code base, every function that may fail returns a status object, which has methods like ok() to report the outcome of the call.
They have configured GCC to fail the compilation if the developer ignores the returned status object.
Also, from the little open-source code they provide (such as the LevelDB library), it seems they are not using the STL that much anyway, so exception handling becomes rare. As Titus Winters says in his CppCon lectures, they "Respect the standard, but don't idolize it".
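A minimal sketch in that spirit (not Google's actual API; their real type is nowadays published as absl::Status):

#include <string>

class Status {
public:
    static Status Ok() { return Status(std::string()); }
    static Status Failure(const std::string& msg) { return Status(msg); }
    bool ok() const { return message_.empty(); }
    const std::string& message() const { return message_; }
private:
    explicit Status(const std::string& msg) : message_(msg) {}
    std::string message_;
};

// Hypothetical usage: callers branch on ok() instead of catching.
// Status s = db.Get(key, &value);
// if (!s.ok()) { /* log s.message() and bail out */ }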
I think this is an attitude question. You need to be in the camp of "I don't care if something fails".
This usually results in code for which one needs a debugger (at the customer site) to find out why something suddenly stopped working.
Also, people who do software "engineering" in this way tend not to use very complex code. E.g. one would be unable to write code that relies on the fact that it is only executed if all n resources it depends on have been successfully allocated (using RAII for these resources); see the sketch at the end of this answer.
Thus, such coding results in one of:
an unmanageable amount of code for error handling
an unmanageable amount of code to avoid executing code, which relies on successful allocation of some resources
no error handling, and thus a considerably higher amount of support and developer time
Note that I'm talking about modern code, loading customer-provided DLLs on demand and using child processes. There are many interfaces on which something can fail. I'm not talking about some replacement for grep/more/ls/find.
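To sketch the RAII point above (the types are hypothetical): the constructor body runs only if every member resource was acquired, and a failure in any acquisition automatically unwinds the ones already built.

#include <cstddef>
#include <fstream>
#include <memory>
#include <stdexcept>

struct Session {
    std::ifstream config;
    std::unique_ptr<char[]> buffer;
    Session(const char* path, std::size_t n)
        : config(path),
          buffer(new char[n]) {   // throws std::bad_alloc on failure
        if (!config)
            throw std::runtime_error("cannot open config");
        // reaching this point means all resources are live
    }
};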
Are there any official C++ recommendations concerning the amount of information that should be disclosed in a method name? I am asking because I can find plenty of references on the Internet, but none that really explains this.
I'm working on a C++ class with a method called calculateIBANAndBICAndSaveRecordChainIfChanged, which pretty well explains what the method does. A shorter name would be easier to remember and would need no intellisense or copy & paste to type. It would be less descriptive, true, but functionality is supposed to be documented.
calculateIBANAndBICAndSaveRecordChainIfChanged is considered to be a bad function name; it breaks the rule of one-function-does-one-thing.
Reduce complexity
The single most important reason to create a routine is to reduce a program's complexity. Create a routine to hide information so that you won't need to think about it. Sure, you'll need to think about it when you write the routine. But after it's written, you should be able to forget the details and use the routine without any knowledge of its internal workings. Other reasons to create routines—minimizing code size, improving maintainability, and improving correctness—are also good reasons, but without the abstractive power of routines, complex programs would be impossible to manage intellectually.
You could simply break this function into the functions below (a sketch of the resulting call site follows the list):
CalculateIBAN
CalculateBIC
SaveRecordChain
IsRecordChainChanged
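A sketch of what the call site might look like after the split (the types and helpers are hypothetical):

struct RecordChain;  // hypothetical

void calculateIBAN(RecordChain&);
void calculateBIC(RecordChain&);
bool isRecordChainChanged(const RecordChain&);
void saveRecordChain(RecordChain&);

// The long name dissolves into a short, readable sequence of steps.
void updateBankingDetails(RecordChain& chain) {
    calculateIBAN(chain);
    calculateBIC(chain);
    if (isRecordChainChanged(chain))
        saveRecordChain(chain);
}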
To name a procedure, use a strong verb followed by an object
A procedure with functional cohesion usually performs an operation on an object. The name should reflect what the procedure does, and an operation on an object implies a verb-plus-object name. PrintDocument(), CalcMonthlyRevenues(), CheckOrderInfo(), and RepaginateDocument() are samples of good procedure names.
Describe everything the routine does
In the routine's name, describe all the outputs and side effects. If a routine computes report totals and opens an output file, ComputeReportTotals() is not an adequate name for the routine. ComputeReportTotalsAndOpenOutputFile() is an adequate name but is too long and silly. If you have routines with side effects, you'll have many long, silly names. The cure is not to use less-descriptive routine names; the cure is to program so that you cause things to happen directly rather than with side effects.
Avoid meaningless, vague, or wishy-washy verbs
Some verbs are elastic, stretched to cover just about any meaning. Routine names like HandleCalculation(), PerformServices(), OutputUser(), ProcessInput(), and DealWithOutput() don't tell you what the routines do. At the most, these names tell you that the routines have something to do with calculations, services, users, input, and output. The exception would be when the verb "handle" was used in the specific technical sense of handling an event.
Most of the above points are taken from Code Complete II. Other good books are Clean Code and The Clean Coder by Robert C. Martin.
To answer the direct question, I don't think function names need to be memorable. It's nice if they are, but like you say this stuff is supposed to be documented. I can look it up.
calculateIBANAndBICAndSaveRecordChainIfChanged is too long for my taste. Aside from the inconvenience of having to c/p or auto-complete to even use them, my fear with long function names is that I don't read them properly either, so names with similar "shapes" start to look confusingly similar to one another.
So I would advise looking for a shorter name. There must be some reason why these operations (calculating two things, and conditionally saving a record chain) have been grouped together. That reason isn't described in the question, it lies somewhere in the specification or the history of your project. You should identify that reason and look to it for a more succinct function name.
When naming a function, you can also consider for what reasons[*] the function might change in the future. Why are there two things (IBAN and BIC) that are calculated at the same time? What is the relationship between them? Can you identify the reason for doing both at once and then saving?
For example: they are the "acronyms" for this object, it's common to want to recalculate the acronyms all at once, and if you recalculate then naturally the changes need saving. Then call the function refreshAcronyms. Maybe there will be a third acronym in future.
For another example: what callers really want is to save the object if changed, and it's an additional chore that, to preserve the integrity of the stored data, I must always recalculate the IBAN and the BIC before saving. In that case, all the rest is a necessary precursor to saving, so I can call the function saveRecordChain. Users of the public interface just need to know that the save function does what needs to be done. There might be a serializeToFile() function in the private interface that saves if changed without doing the extra stuff.
[*] I say "reasons" plural, but Robert C Martin defines the "single responsibility principle" to be that there is only one possible reason to change a well-designed function.
Ideally one method should do only one thing, and your method name should reflect what it does (that one thing); only then does your program become readable.
It's a matter of personal preference, although I would think that calculateIBANAndBICAndSaveRecordChainIfChanged is too long and therefore difficult to read and code with (unless you're using a smart editor that can auto-complete).
Two further points:
The function needs to be broken down into smaller parts, as other posters have suggested.
There's no law against commenting your headers to give a more detailed description of the function there, so you don't have to build every aspect of its functionality into the name.
You read and write too many methods over the course of your career to remember their names. Most programmers would need to look up the name of a function from their language's standard library, let alone the names of functions that they or their team developed! The most memorable function name would be of no use to someone maintaining your code and seeing the call for the first time. Moreover, chances are good that in six months you wouldn't remember it either!
That is why I recommend going for descriptive names first, and not worrying about the ease of memorization: after all, IDEs with intellisense are not going away any time soon (and they were introduced for a good reason - to address our memory limitations).
For personal use that might be enough and useful, but in any case, after completing the app you have to refactor every function name to say exactly what it does. And if you're working in a group or in a company, make sure the function name reflects its functionality.
And for your example, I might name it something like saveRecordWithRespectToIBANandBIC().
We have a convention to validate all parameters of constructors and public functions/methods. For mandatory parameters of reference type, we mainly check for non-null and that's the chief validation in constructors, where we set up mandatory dependencies of the type.
The number one reason why we do this is to catch that error early and not get a null reference exception a few hours down the line without knowing where or when the faulty parameter was introduced. As we start transitioning to more and more TDD, some team members feel the validation is redundant.
Uncle Bob, who is a vocal advocate of TDD, strongly advises against doing parameter validation. His main argument seems to be "I have a suite of unit tests that makes sure everything works".
But for the life of me, I just cannot see in what way unit tests can prevent our developers from calling these methods with bad parameters in production code.
Please, unit testers out there, if you could explain this to me in a rational way with concrete examples, I'd be more than happy to cease this parameter validation!
My answer is "it can't." Basically it sounds like I disagree with Uncle Bob on this (amongst other things).
It's all too easy to imagine a situation where you've unit tested your library code for non-null arguments, and you've unit tested your calling code for a path which happens to provide a null argument to the library without you being aware of it, but which also happens not to cause any problems for that particular path. You can have 100% coverage and actually a pretty good set of tests, and still not notice the problem.
Is everything fine? No, of course it isn't - because you're violating the library contract (don't give me a null value) without being aware of it. Can you be comfortable that the only situations in which you're providing a null argument are ones where it won't matter? I don't think so - especially if you weren't even aware that the argument was null.
In my view, public APIs should validate their arguments regardless of whether the calling code and the API itself is unit tested. Problems in calling code should be exposed, and exposed as early as possible.
That's a question I've been asking myself for ages, and still haven't got a satisfying answer to.
But I believe that when it comes to argument validation, you need to distinguish between two cases:
Are you validating the argument to catch logical programming errors?
if (foo == null) throw new ArgumentNullException("foo");
is quite likely an example of that.
Are you validating the argument because it is some external input (supplied by the user, or read from a configuration file, or from a database), which could be invalid and must be rejected?
if (customerDateOfBirth == new DateTime(1900, 1, 1)) throw …;
might be an example of this type of argument check.
(If you're exposing an API consumed by someone outside your team, point 2 roughly applies as well.)
I suspect that methodologies such as unit testing, design by contract, and to some extent "fail early" focus mostly on the first type of argument validation. That is, they attempt to detect logical programming errors, not invalid input.
If that is the case, then I dare say it doesn't actually matter which method of error detection you follow; each has its own advantages and disadvantages.† In the extreme case (for instance, when you have absolute trust in your abilities to write bug-free code), you could even drop these checks completely.
However, whatever method you choose for detecting logical errors in your code, you still need to validate user input etc., thus the need to distinguish between the two kinds of argument checks.
†) An amateur's incomplete attempt at comparing the relative advantages and disadvantages of Design by Contract, unit testing, and "fail early":
(Though you didn't ask for it... I'll just mention a few key differences.)
Fail early (e.g. explicit argument validation at start of method):
basic checks such as guards against null are easy to write
might mix up guards against logical errors and validation of external input with the same syntax
doesn't allow you to test the interaction of methods
does not encourage you to define (and thus think about) your methods' contracts rigorously
Unit testing:
allows you to test code in isolation, without running the actual application, so detecting bugs can be quicker
if a logical error occurs, you won't have to trace the stack to find the cause, because each unit test stands for a specific "use case" of your code.
allows you to test more than just single methods, e.g. even the interaction between several objects (think stubs & mocks)
writing easy tests (such as guards against null) is more work than with the "fail early" approach (if you strictly adhere to the Arrange-Act-Assert pattern)
Design by Contract:
forces you to explicitly state the contract of your classes (though this is possible with unit tests, too — just in a different way)
allows you to easily state class invariants (internal conditions that must always hold true)
not as well supported by many programming languages / frameworks as the other approaches
It all depends on the type of application you are developing.
I have spent most of my time writing applications that do not expose public APIs; in this case, the application must be deterministic, in the sense that all parameters must, and will, be non-null. In a nutshell, you should perform input validation at your system boundaries, so as not to let invalid inputs sneak into your application, where they might end up causing null references and such. In this kind of application, you have full control: you check your application's input right where you acquire it.
If you are writing public APIs, then not checking for null references is not recommended. Just have a look at all the MSDN class methods that can throw exceptions; all of that happens inside the API as precondition checks. You can read the C# Framework Design Guidelines for more info.
In my opinion, whether or not the application exposes an API, having preconditions for your methods is always a good thing (those contracts are documentation for your peers who will work on your code in the future).
I agree with Uncle Bob on almost everything, but not this one. I vote for the "fail fast and fail hard" policy.
This has nothing to do with TDD.
For public APIs, yes, we should do argument checks, as fast as possible.
All constructor argument checks seem completely unnecessary to me, because the code is NOT consumed by anyone outside the team. Why did we have null checks? Because we had no trust in the code calling these methods.
So what are public APIs? All public methods? If so, there is no such thing as an internal API then, I guess. So why use the word public at all? Why not just say all public methods should do null/boundary checks?
I think the root cause of the problem is a lack of trust in our own code and team members, and apparently we are solving the problem in the wrong way.
When writing exception-safe code, it is necessary to consider the exception safety guarantee (none, basic, strong, or no-throw) of all the functions called. Since the compiler offers no help here, I was thinking that a function naming convention might be helpful. Is there any kind of established notational standard indicating the level of exception safety guarantee offered by functions? I was thinking along the lines of something Hungarian-like:
void setFooB(Foo const& s); // B, offers basic guarantee
int computeSomethingS(); // S, offers strong guarantee
int getDataNT() throw(); // NT, offers no-throw
void allBetsAreOffN(); // N, offers no guarantee
Edit: I agree with comments that this kind of naming convention is ugly, so allow me to elaborate on my reasons for suggesting it.
Say I refactor some code, and in that process, change the level of exception safety offered by a function. If the guarantee has changed from, say, strong to basic (justified perhaps by improvement in speed), then every function that calls the refactored function must be reconsidered for their exception safety. If the change in guarantee triggered a change in the function name as well, it would allow the compiler to help me out a little bit in at least flagging all uses of the changed function. This was my rationale for suggesting the naming convention above, problematic as it is. This is very similar to const, where a change in the const-ness of a function has cascading effects on other calling functions, but in that situation the compiler gives very effective assistance.
So I guess my question is: what kind of work habits have people developed to ensure that code actually fulfills its intended exception guarantees, especially during code maintenance and refactoring?
I usually find myself documenting that in comments (doxygen), except for the no-throw guarantee, which I often tag with the throw() exception specification, if and only if I am sure that the function is guaranteed not to throw and exception safety is important.
That is, I usually worry more about exceptions in parts of the code where an unhandled exception would cause problems, and deal with that locally: ensure that your code is exception-safe by other means, such as RAII, or by performing the work off to the side and then merging the results with a no-throw operation (i.e. a no-throw swap, which is about the only function that I actively mark as throw()).
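For instance, the work-aside-then-commit pattern looks roughly like this (a sketch; the Widget class is hypothetical):

#include <vector>

class Widget {
public:
    void setData(const std::vector<int>& d) {
        std::vector<int> tmp(d);   // may throw; *this is untouched
        data_.swap(tmp);           // no-throw commit
    }
    void swap(Widget& other) throw() { data_.swap(other.data_); }
private:
    std::vector<int> data_;
};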
Other people might have other experiences, but I find that to be sufficient for my daily work.
I don't think you need to do anything special.
The only ones I really document are no-throw and that is because the syntax of the language allows it.
There should be no code in your project that provides no guarantee. So that only leaves strong/basic to document. For these, I don't think you need to explicitly call it out, as it is not really about the methods themselves but about the class as a whole (for these two guarantees). The guarantees they provide really depend on usage.
I would like to think I provide the strong guarantee on everything (but I don't); sometimes it is too expensive, sometimes it's just not worth the effort (if things are going to throw, they will be destroyed anyway).
I understand your willingness to do well, but I am unsure about such a naming convention.
I am, in general, wary of naming conventions that are not enforced by the language: they are prone to become the greatest liars.
If you truly need such things, my suggestion is to get your hands on a compiler (Clang for example) and add a new set of attributes. Do note that you'll need to edit your Standard Library provided headers, and all 3rd party headers you rely on, to annotate them so that you can get those guarantees from the ground up.
Then you can have the compiler check the annotations (won't be trivial either...), and then the annotations become useful, because they cannot lie.
I am thinking about adding
@par Exception Safety
Strong guarantee
to my javadocs where appropriate.
I am working on a code base with a bunch of developers whose backgrounds aren't primarily Computer Science or Software Engineering (mostly Computer Engineering).
I am looking for a good article about when exceptions should be caught and when one should try to recover from them. I found an article a while ago that I thought explained things well, but Google isn't helping me find it again.
We are developing in C++. Links to articles are an acceptable form of answer, as are summaries with pointers. I'm trying to teach here, so tutorial format would be good. As would something that was written to be accessible to non-software engineers. Thanks.
Herb Sutter has an excellent article that may be useful to you. It does not answer your specific question (when/how to catch) but does give a general overview and guidelines for handling exceptional conditions.
I've copied his summary here verbatim
Distinguish between errors and nonerrors. A failure is an error if and only if it violates a function's ability to meet its callees' preconditions, to establish its own postconditions, or to reestablish an invariant it shares responsibility for maintaining. Everything else is not an error.

Ensure that errors always leave your program in a valid state; this is the basic guarantee. Beware of invariant-destroying errors (including, but not limited to, leaks), which are just plain bugs.

Prefer to additionally guarantee that the final state is either the original state (if there was an error, the operation was rolled back) or the intended target state (if there was no error, the operation was committed); this is the strong guarantee.

Prefer to additionally guarantee that the operation can never fail. Although this is not possible for most functions, it is required for functions such as destructors and deallocation functions.

Finally, prefer to use exceptions instead of error codes to report errors. Use error codes only when exceptions cannot be used (when you don't control all possible calling code and can't guarantee it will be written in C++ and compiled using the same compiler and compatible compile options), and for conditions that are not errors.
Read the chapter "Exception Handling" from the book Thinking in C++, Volume 2, by Bruce Eckel.
Maybe this MSDN section will help you...
The most simplistic advice:
If you don't know whether or not to catch an exception, don't catch it and let it flow; someone will catch it at some point.
The point about exceptions is that they are exceptional (think std::bad_alloc). Apart from some weird uses for a "quick exit" from deeply nested code blocks (which I don't like much), exceptions should be used only when you run into something that you have no idea how to deal with.
Let's pick examples:
file = open('littlefile.txt', open.mode.Read)
It seems obvious, to me, that this may fail, and under a number of conditions. While reporting the cause of failure is important (for accurate diagnostics), I find that throwing an exception here is NOT good practice.
In C++ I would write such a function as:
boost::variant<FileHandle,Error> open(std::string const& name, mode_t mode);
The function may either return a file handle (great) or an error (oops). But since the error is expected, it's better to deal with it now. This also has the great advantage of being explicit: looking at the signature means you know what to expect (I'm not talking about exception specifications; that's a broken feature).
In general I tend to think of these functions as find functions. When you search for something, it is expected that the search may fail, there is nothing exceptional here.
Think about the general case of an associative container:
template <typename Key, typename Value>
boost::optional<Value const&> Associative::GetItem(Key const& key) const;
Once again, thanks to Boost, I make it clear that my method may (or may not) return the expected value. There is no need for an ElementNotFound exception to be thrown.
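A usage sketch of the same idea (a free function rather than the member above, assuming Boost is available):

#include <boost/optional.hpp>
#include <map>
#include <string>

// Lookup that may fail, expressed in the return type.
boost::optional<int> getItem(const std::map<std::string, int>& m,
                             const std::string& key) {
    std::map<std::string, int>::const_iterator it = m.find(key);
    if (it == m.end())
        return boost::none;
    return it->second;
}

int main() {
    std::map<std::string, int> m;
    m["foo"] = 1;
    if (boost::optional<int> v = getItem(m, "bar")) {
        // found: use *v
    } else {
        // not found: an expected outcome, handled locally
    }
}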
For yet another example: user input validation is expected to fail. In general, inputs are expected to be hostile / ill formed / wrong. No need for exceptions here.
On the other hand, suppose my software deals with a database and cannot possibly run without it. If the database abstraction layer loses the connection to the database and cannot establish a new one, then it makes sense to raise an exception.
I reserve exceptions for technical issues (lost connection, out of memory, etc...).