Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm working on a small project for numerical integration. I've been reading topics like this one and I couldn't decide if I should throw exceptions when the user supplies bad integration limits, requests impossible precision and things like that.
Also, what if a procedure fails to converge for the provided conditions? Should I throw an exception?
Right now what I have are error codes represented in integer variables, which I pass around between many routines, but I'd like to reduce the number of variables the user must declare and whose value they must consult.
I was thinking about a scenario where exceptions are thrown by the library routines in the circumstances I mentioned above and caught by the user. Would that be acceptable? If not, what would be a good approach?
Thanks.
While the question is too broad, I will try to give some general guidance, since this is one of the topics I often talk about.
In general, exceptions are useful when something happened which really shouldn't have happened.
I can give a couple of examples. A programmatic error would be one: when a function's call contract is broken, I usually advise throwing an exception rather than returning an error.
An unrecoverable error should trigger an exception; however, judging between a recoverable and a non-recoverable error is not always possible at the call site. For example, an attempt to open a non-existing file is usually a recoverable error, which warrants a failure code. But sometimes the file simply must be there, and there is nothing the calling code can do when it is not, so the error becomes unrecoverable. In the latter case, the calling code might want the file-opening function to throw an exception rather than return a code.
This introduces the whole topic of error-reporting policies: functions are told whether they should throw exceptions or return error codes.
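One common way to express such a policy is the dual-API pattern that the standard library itself uses (e.g. in std::filesystem): the same operation is exposed once with an error-code out-parameter and once as a throwing overload built on top of it. A minimal sketch, with invented names (`parse_positive` is hypothetical, used only for illustration):

```cpp
#include <system_error>

// Policy A: report failure through an std::error_code out-parameter.
int parse_positive(int raw, std::error_code& ec) noexcept
{
    if (raw <= 0) {
        ec = std::make_error_code(std::errc::invalid_argument);
        return 0;
    }
    ec.clear();
    return raw;
}

// Policy B: throw on failure, implemented on top of Policy A,
// so there is a single implementation but two reporting policies.
int parse_positive(int raw)
{
    std::error_code ec;
    int result = parse_positive(raw, ec);
    if (ec)
        throw std::system_error(ec, "parse_positive");
    return result;
}
```

The caller then picks the policy that matches how recoverable the error is at that call site: check the code, or let the exception propagate.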
Before C++11, exceptions were avoided in projects where performance mattered (compiled with -fno-exceptions). Now it appears that exceptions which are not actually thrown do not impact performance (see this and this), so there is no reason not to use them.
A paranoid, but old-fashioned, approach would be to divide your program into two parts: the UI and the numerical library. The UI could be written in any language and use exceptions. The numerical library would be C or C++ and use no exceptions. For instance (on Windows, but the platform doesn't matter), you could have a UI in C# with exceptions that calls an "unsafe" C++ .dll where exceptions are not used.
The alternative to exceptions is the classic return -1;: the caller has to check the return value of every call (even with optional, the caller still has to check for errors). When a series of nested function calls is executed and an error arises in the deepest function, you have to propagate the error all the way up, checking for errors in every call you make.
With exceptions, you use a try {} block that handles errors raised at any call depth inside it. Code to handle errors appears only once and does not pollute your numerical library (or whatever you are creating).
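The call-depth point can be sketched as follows (all function names invented for illustration): only the outermost caller contains any error-handling code, while the intermediate layers stay clean.

```cpp
#include <stdexcept>

double innermost(double x)
{
    if (x < 0)
        throw std::domain_error("negative input");
    return x * 2;
}

double middle(double x) { return innermost(x) + 1; } // no error checks needed
double outer(double x)  { return middle(x) * 3;    } // no error checks needed
```

A single try block around `outer()` catches the failure raised three calls down; with return codes, `middle` and `outer` would each need their own checks.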
Use exceptions!
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Is it considered good practice to return an error to the caller function like in Go, or should my program throw an error instead when it encounters it?
There are different common practices for error handling in C++, as it is a multi-paradigm language. For example:
Return status/error codes rather than results.
Return results only, throw exceptions on error.
Return value-or-error objects, such as std::expected.
Each of these has pros and cons. The most important thing is to be consistent in your program, and to coordinate with whoever calls your functions* - so that you meet their needs.
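The three conventions can be sketched side by side for one hypothetical operation (function names invented; std::expected is C++23, so a minimal std::variant stand-in is used here to stay portable):

```cpp
#include <cmath>
#include <stdexcept>
#include <variant>

enum class status { ok, bad_input };

// 1) Status code returned, result through an out-parameter.
status sqrt_checked(double x, double& out)
{
    if (x < 0) return status::bad_input;
    out = std::sqrt(x);
    return status::ok;
}

// 2) Result only, exception on error.
double sqrt_or_throw(double x)
{
    if (x < 0) throw std::domain_error("negative");
    return std::sqrt(x);
}

// 3) Value-or-error object (the std::expected style).
std::variant<double, status> sqrt_expected(double x)
{
    if (x < 0) return status::bad_input;
    return std::sqrt(x);
}
```

The signatures alone show the trade-off: (1) makes the error explicit but occupies the return channel, (2) keeps the happy path clean, (3) forces the caller to inspect the result before using it.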
For a detailed presentation of current options and a future potential alternative, see this talk by Brand & Nash at the annual C++ conference CppCon:
CppCon 2018: "What Could Possibly Go Wrong?: A Tale of Expectations and Exceptions"
Depends on your viewpoint. Some people swear by throwing exceptions, others will point out some of the following:
C++ has been designed to support exception-free operation with zero overhead on the non-throwing path, with the consequence that the throwing code paths are more involved. Exceptions are exceptionally slow when actually thrown. So, at the very least, you should avoid throwing exceptions within performance-critical code paths.
Another argument against exceptions is, that correct error handling is as much of a code feature as anything else, and that it helps to have the error code paths explicit.
A third argument against exceptions is that C++ has been designed to allow overloading of almost any operator, allowing even straightforward statements like a = b; to throw. As such, code must be written in a special exception-safe way (construct into local variables and swap() to commit changes) if exceptions are allowed in a program.
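The "construct locally, then swap() to commit" idiom mentioned above can be sketched like this (class and member names invented): all throwing work happens on a temporary, so the object is never left half-updated if an exception escapes.

```cpp
#include <cstddef>
#include <vector>

class Samples {
    std::vector<double> data_;
public:
    void replace_with(const std::vector<double>& src)
    {
        std::vector<double> tmp(src); // copy may throw; data_ untouched so far
        data_.swap(tmp);              // noexcept commit: strong guarantee
    }
    std::size_t size() const { return data_.size(); }
};
```

If the copy throws, the caller sees the old state; the commit step itself cannot fail.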
As a corollary, code written to use exceptions simply does not mix well with code that avoids them. The latter will not be written in an exception-safe style and will consequently blow up in your face when an exception skips parts of its execution.
Sorry, I don't know any good arguments for using exceptions. All the arguments I've seen ("it makes the code cleaner" and such) don't really seem to cut it, imho. Please refer to some exception enthusiasts for arguments for using exceptions.
Bottom line:
There are many projects out there that fully embrace exceptions, and there are other projects that ban them from their code. And because the code of these two camps doesn't mix well, you will need to stick to how the project that you are working on does it.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I'm currently using std::error_code to give feedback to the users of my API when something goes wrong. Would it be semantically acceptable to add an std::error_condition of type warning to notify my users that there was a minor issue but that operations will continue? Or should I only use logging for this?
If I got it correctly, you're asking if returning a warning should be considered abusing std::error_code semantics or not.
Now, the standard introduces error_code as part of the standard diagnostics library
[diagnostics.general] This Clause describes components that C++ programs may use to detect and report error conditions.
and, as far as I know, it poses no semantic requirements on what an "error condition" is; we can just assume that these are to be used to report that something went wrong. It does not seem to impose what the effects of a partial fulfillment of an operation's specification should be; the operation itself should tell you.
The only semantic requirement I see is that error_code (and error_condition) is boolean-convertible; that is, a 'zero' error code should always mean success.
Now, given that you presumably want an operation that completes with a warning to be considered successful, I would not consider it valid to return such a warning via an error code;
that said, you may always let your operation return two error codes (in the way you like, maybe belonging to different categories), documenting that only the first one reports the fulfillment of the operation effects:
auto [err, war] = some_operation();
if (err)
    call_the_police();            // some_operation failed
else if (war)                     // some_operation complains
{
    std::cerr << "hold breath...";
    if (war == some_error_condition)
        thats_unacceptable();
    // else ignore
}
That said, note that there exist real use cases deviating from my reasoning above; indeed, things like HTTP result codes and libraries (like Vulkan) do use non-zero 'result codes' for successful or partially successful conditions ...
moreover, here one of the very authors of the diagnostics library both claims that "the facility uses a convention where zero means success" and at the same time uses error_code to model HTTP errors (200 status code included).
This sheds some doubts either on the actual semantics of error_code::operator bool() (the meaning of which is not explicitly laid out in the standard) or on the effective ability of the standard diagnostic library to model the error code concept in a general way. YMMV.
There are several options for a library to tell the user something went wrong or is not in line with what the function call expected.
exceptions. But there's the exception overhead, try/catch...
boost/std::optional. If there was an error/warning you can return it (via the return value or an in/out or out parameter); otherwise the optional will be empty.
std::pair/std::tuple. That way you can encode more information in the return value (though a custom struct might also do it more explicitly)
You can introduce your own error data structure (don't use std::error_code as it's OS dependent).
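The optional- and pair-based options from the list above can be sketched together (the Warning type and the function names are invented for illustration, with a placeholder computation standing in for a real integrator):

```cpp
#include <optional>
#include <string>
#include <utility>

struct Warning { std::string message; };

// optional: an empty optional means "nothing to report".
std::optional<Warning> check_tolerance(double requested, double achievable)
{
    if (requested < achievable)
        return Warning{"requested tolerance tightened to achievable limit"};
    return std::nullopt;
}

// pair: result and diagnostic travel together in one return value.
std::pair<double, std::optional<Warning>>
integrate_stub(double a, double b, double tol)
{
    auto warn = check_tolerance(tol, 1e-12); // hypothetical machine limit
    double result = (b - a) * 0.5;           // placeholder computation
    return {result, warn};
}
```

With structured bindings, the caller unpacks both at once: `auto [result, warn] = integrate_stub(0.0, 1.0, 1e-15);` and then inspects `warn` only if it cares.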
Killing the application from within a library is not very practical either. Even if it's an unrecoverable error in the library, it doesn't have to have much of an impact in the actual calling application/process/whatever. Let the caller decide what to do.
But all that is not generally applicable. There is no one-size-fits-all solution to error handling. It can be very specific to where/how/when your library is used, so you want to check what fits your purpose and how strong the calling constraints must/should be.
In all cases be clear about what the caller can expect from your error handling and don't make it feel like rocket science. Minimal design is very helpful here imo.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have seen several C++ projects written by senior developers without any exception handling.
For example, I see: className* ptr = new className();
instead of :
try
{
    className* ptr = new className();
    // some code / may throw an exception
}
catch (const std::bad_alloc& ba) // or catch some other exception
{
    // some code
}
Why do people usually leave out this try-catch block even though we know there is a chance of an exception?
And one more thing: should we use this try/catch format whenever we use new?
When exactly should we go for exception handling? (This may be a stupid question, but I still want some ideas, as I am confused by exception handling.)
Thanks in Advance.
In C#, Java, Python and many other languages it's generally necessary to use try-catch (or the language's equivalent) in order to be able to clean up properly when an exception occurs. For example, freeing already allocated resources. All three languages mentioned now support a simplified form: using in C#, with in Python, and try-with-resources in Java, but that's just syntactic sugar.
In contrast, in C++ object destructors are called automatically when an exception passes through, and they deal with the cleanup chores. This is generally called RAII, which (misleadingly) is short for Resource Acquisition Is Initialization. It's based on deterministic, guaranteed calls of destructors, which you don't have in Java and C#.
So in C++ there's only a need for try-catch where you want to report, retry or suppress. Or, translate an exception to some other exception or failure reporting scheme.
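The RAII point can be sketched with a hypothetical process() function: the FILE handle is owned by a unique_ptr with fclose as its deleter, so it is closed during stack unwinding without any try-catch in sight.

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>

void process(const char* path)
{
    // Ownership established at acquisition; fclose runs automatically
    // whenever f goes out of scope, including via an exception.
    std::unique_ptr<std::FILE, int (*)(std::FILE*)>
        f(std::fopen(path, "r"), &std::fclose);
    if (!f)
        throw std::runtime_error("cannot open file");
    // ... work that may throw; the file is still closed on unwinding ...
}
```

Only the caller that can actually do something about the failure needs a try-catch; every intermediate frame cleans up automatically.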
std::bad_alloc exceptions are ideally handled in main() or in a code fragment which calls a larger routine. Often you cannot handle allocation failures inside small "subfunctions" in a helpful way. But, for example, if you write something like:
try
{
    ImageProcessor img(resource);
    img.startLargeProcessingRoutine(); // maybe some deeper code throws bad_alloc
}
catch (const std::bad_alloc& e)
{
    std::cerr << "Not enough memory for processing this resource" << std::endl;
}
you can make good choices about what should happen when a specific operation fails.
IMO it heavily depends on the actual case where you might want to check or omit something.
Or in other words: If the allocation fails, is there even some way to recover in a graceful way?
You don't have to catch the exception if all you do is show a popup and close the program. In fact, Microsoft doesn't even want you to catch exceptions you don't handle gracefully, so that they're caught by Windows' own error-handling routine (which might also provide you, as the dev, with minidumps and the like if you have some partnership; I never had a closer look at this though; more information can be found here).
I think in the end it boils down to this:
If you've got some way to recover, then use the exception block. Use it locally, where it makes sense.
If you don't, try to handle your errors at one location. That's also what exceptions are great at. If you'd handle the error locally anyway, just showing a popup and then killing the program, you wouldn't need exceptions to begin with (which might also mean faster code).
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
(not sure if it's only a C++ thing)
Exception handling is hard to learn in C++ and is certainly not a perfect solution, but in most cases (other than some specific embedded-software contexts) it's certainly the best solution we currently have for error handling.
What about the future?
Are there other known ways to handle errors that are not implemented in most languages, or are only academical research?
Put another way: are there (supposedly) known better (imperfect is OK) ways to handle errors in programming languages?
Well, there's always been return codes, errno and the like. The general problem is that these can be ignored or forgotten about by programmers who are unaware that a particular call can fail. Exceptions are frequently ignored or missed by programmers too. The difference is that if you don't catch an exception, the program dies. If you don't check a return code, the program continues on, operating on invalid data. Java tried to force programmers to catch all of their exceptions by creating checked exceptions, which cause a compilation error if you don't specify exactly when they can be propagated and catch them eventually. This turned out to be insanely annoying, so programmers catch the exceptions with catch(...){/* do nothing*/} (in C++ parlance) as close to their source as possible, and the result is no better than ignoring a return code.
Besides these two error techniques, some of the functional languages support the use of various monadic return types which can encapsulate both errors and return values (e.g. Scala's Either type, Option type, or a monad that lets you return an approximate answer along with failure log). The advantage to these is that the only way to work with the successful return value is to execute code inside the monad, and the monad ensures that the code isn't run if there was a failure. (It's rather complicated to explain for someone who isn't a Haskell or Scala programmer.) I haven't worked with this model so much, but I expect it would be as annoying to some people as checked exceptions are.
Basically, IMO, error checking is a matter of attitude. You have four options:
Realize you have to deal with it, accept that fact cheerfully, and take the effort to write correct error handling code. (Any of them)
Use language features that force you to deal with it, and get annoyed because you don't want to deal with it, particularly when you're sure the error will never happen. (Checked Exceptions, Monads)
Use language features that allow you to ignore it easily, and write unsafe code because you ignored it. (Unchecked Exceptions, Return Codes)
Get the worst of both options 2 and 3 by using language features that force you to deal with it, but deal with every error in a way that explicitly ignores it. (Checked Exceptions, Monads)
Obviously, you should try to be a #1 type programmer.
Assuming you want your code to do different things according to whether an error occurs or not, you have basically three options:
1) Make this explicit everywhere in the code (C-style error return value checking). The main perceived disadvantage is that it's verbose.
2) Use non-local control flow to separate error-handling code from the "usual path" (exceptions). The main perceived disadvantage is keeping track of all the places your code can go next, especially if documented interfaces don't always list them all. Java's experiment with checked exceptions to "deal with" the latter issue weren't entirely successful either.
3) Sit on errors until "later" (IEEE-style sticky error bits and quiet NaNs, C++ error flags on streams), and check them only when convenient for the caller. The main perceived disadvantage is that setting and clearing errors requires careful use by everyone, and also that information available at the site of the error may be lost by the time it's handled.
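Option (3) is exactly how C++ stream error flags behave, and can be sketched as follows (the function name is invented): several reads are attempted, and the sticky failbit is inspected only once, afterwards.

```cpp
#include <sstream>
#include <string>
#include <utility>

// Returns the sum of the leading integers in `text`, plus a flag
// saying whether parsing stopped on a bad token (as opposed to
// simply running out of input).
std::pair<int, bool> sum_ints(const std::string& text)
{
    std::istringstream in(text);
    int sum = 0, v = 0;
    while (in >> v)      // stops at the first bad token; failbit sticks
        sum += v;
    // Checked only at the end: failbit without eofbit means a real
    // parse failure. Note we no longer know *which* token failed,
    // which is the information-loss disadvantage mentioned above.
    return {sum, in.fail() && !in.eof()};
}
```

Deferring the check keeps the loop clean, at the cost of losing the failure location, just as the answer warns.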
Take your pick. (1) looks bloated and complex, and newbies mess it up by not checking for errors properly, but each line of code is easy to reason about. (2) looks small and simple, but each line of code might cause a jump to who-knows-where, so newbies mess it up by not implementing exception guarantees properly, and everyone sometimes catches exceptions in the wrong places or not at all. (3) is great when designed well, but you never know which of several possibilities each line of code is actually doing, so in a UB-rich environment like C++ that's easy to mess up too.
I think the underlying problem is basically hard: handling errors explicitly increases the branches in your code. Handling errors quietly increases the amount of state that you need to reason about, in a particular bit of code.
Exceptions also have the "is it truly exceptional?" problem. You could prevent exceptions from causing confusing control flow, by throwing them only in cases that your entire program can't recover from. But then you can't use them for errors which are recoverable from the POV of your program but not from the POV of the subsystem, so for those cases you fall back to the disadvantages of either (1) or (3).
I can't say that it is better than exceptions, but one alternative is the way that Erlang developers implement fault tolerance, known as "let it fail". To summarize: each task gets spawned off as a separate "process" (Erlang's term for what most people call "threads"). If a process encounters an error, it just dies, and a notification is sent back to the controlling process, which can either ignore it or take some sort of corrective action.
This supposedly leads to less complex and more robust code, as the entire program won't crash or exit due to lack of error handling. (Note that this robustness relies on some other features of the Erlang language and run-time environment.)
Joe Armstrong's thesis, which includes a section on how he envisions fault-tolerant Erlang systems, is available for download: http://www.erlang.org/download/armstrong_thesis_2003.pdf
Common Lisp's condition system is regarded as being a powerful superset beyond what exceptions let you do.
The fundamental problem with exception handling in systems I've seen is that if routine X calls routine Y, which calls routine Z, which throws an exception, there's no clean way for Y to let its caller distinguish among a number of situations:
The call failed for some reason that Y doesn't know about, but X might; from Y's perspective, if X knows why Z failed, X should expect to recover.
The call failed for some reason that Y doesn't know about, but its failure caused Y to leave some data structures in an invalid state.
The call failed for some reason that Y does know about; from its perspective, if the caller can handle the fact that the call won't return the expected result, X should recover.
The call failed because the CPU is catching fire.
This difficulty stems, I think, from the fact that exception types are centered around the question of what went wrong--a question which in many cases is largely orthogonal to the question of what to do about it. What I would like to see would be for exceptions to include a virtual "isSatisfied" method, and for an attempt to swallow an exception whose isSatisfied method returns false to throw a wrapped exception whose isSatisfied method would chain to the nested one. Some types of exceptions, like trying to add a duplicate key to a non-corrupted dictionary, would provide a parameterless AcknowledgeException() method to set isSatisfied. Other exceptions implying data corruption or other problems would require that either an AcknowledgeCorruption() method be passed the corrupted data structure, or that the corrupt data structure be destroyed. Once a corrupt data structure is destroyed in the process of stack unwinding, the universe would be happy again.
I'm not sure what the best architecture would be, but providing a means by which exceptions can communicate the extent to which the system state is corrupt or intact would go a long way toward alleviating the problems with existing architectures.
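A minimal sketch of the proposal above, with every name invented here (nothing like this exists in any standard library): an exception type that records whether some handler has taken responsibility for it, so that outer frames could distinguish an acknowledged failure from an unhandled one.

```cpp
#include <stdexcept>

// Hypothetical only: sketches the idea of exceptions that carry
// "has anyone dealt with this?" state alongside "what went wrong".
class RecoverableError : public std::runtime_error {
    bool satisfied_ = false;
public:
    using std::runtime_error::runtime_error;
    void Acknowledge()       { satisfied_ = true; }  // handler takes responsibility
    bool isSatisfied() const { return satisfied_; }  // safe to swallow?
};
```

An intermediate catch block would call Acknowledge() only when it has actually restored the relevant invariants; a policy layer could then rethrow anything still unsatisfied.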
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I used to work for a company where some of the lead architect/developers had mandated on various projects that assertions were not to be used, and they would routinely be removed from code and replaced with exceptions.
I feel they are extremely important in writing correct code. Can anyone suggest how such a mandate could be justified? If so, what's wrong with assertions?
We use a modified version of assert, as per JaredPar's comment, that acts like a contract. This version is compiled into the release code so there is a small size overhead, but disabled unless a diagnostics switch is set, such that performance overhead is minimized. Our assert handler in this instance can be set to disabled, silent mode (e.g. log to file), or noisy mode (e.g. display on screen with abort / ignore, where abort throws an exception).
We used automated regression testing as part of our pre-release testing, and asserts are hugely important here as they allow us to find potential internal errors that cannot be picked up at a GUI level, and may not be initially fatal at a user level. With automation, we can run the tests both with and without diagnostics, with little overhead other than the execution time, so we can also determine if the asserts are having any other side effects.
One thing to be careful of with asserts is side effects. For example, you might see something like assert(MyDatabasesIsOk()), where the check inadvertently corrects errors in the database. This is a bug, as asserts should never change the state of the running application.
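The side-effect trap can be sketched like this (function names invented): under NDEBUG the entire assert expression is compiled out, so any state change buried inside it silently disappears in release builds.

```cpp
#include <cassert>

int repairs = 0;

bool database_ok()     { return true; }
bool repair_database() { ++repairs; return true; } // has a side effect!

void checked_operation()
{
    assert(repair_database()); // BUG: the repair vanishes under NDEBUG
    assert(database_ok());     // OK: a pure check, safe to compile out
}
```

The fix is to perform the side-effecting call unconditionally and assert only on its result: `bool ok = repair_database(); assert(ok);`.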
The only really negative thing I can say about assertions is they don't run in retail code. In our team we tend to avoid assertions because of this. Instead we use contracts, which are assertions that run in both retail and debug.
The only time we use assertions now is if one of the following are true.
The assertion code has a noticeable performance impact
The particular condition is not fatal
Occasionally there is a piece of code that may or may not be dead. We will add an assertion that essentially says "how did you get here?". The assertion not firing does not mean the code is indeed dead, but if QA emails me and says "what does this assertion mean?", we now have a repro to get to a particular piece of code (which is immediately documented, of course).
Assertions and exceptions are used for two different things.
Assertions are used for states that should never happen. For example, a singleton pointer should never be null, and this error should be picked up during development using an assert. Handling it with an exception is a lot more work for nothing.
On the other hand, exceptions are used for rare states that could happen in the normal running of an application, for example using fopen and having it return a null pointer. It could happen, but most times it will return a valid pointer.
Using assertions is neither wrong nor right; it comes down to personal preference, as at the end of the day it is a tool to make programming easier and can be replaced by other tools.
It depends on the criticality of your system: assertions are a fail-fast strategy, while exceptions can be used when the system can perform some kind of recovery.
For instance, I won't use assertions in a banking application or a telecommunications system: I'd throw an exception that will be caught higher up the call stack. There, resources can be cleaned up, and the next call/transaction can be processed; only one will be lost.
Assertions are an excellent thing, but not to be confused with parameter/return value checking. You use them in situations that you don't believe will occur, not in situations that you expect could occur.
My favourite place to use them is in code blocks that really shouldn't be reached - such as a default case in switch-statement over an enum that has a case for every possible enum value.
It's relatively common to extend the enum with new values but forget to update all the switch-statements involving the enum; you'll want to know about that as soon as possible. Failing hard and fast is the best you can wish for in such circumstances.
Granted, in those places you usually want something that breaks in production builds as well. But the principle of abort()ing under such conditions is highly recommended. A good stack trace in the debugger gives you the information to fix your bug faster than guessing around.
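The switch-over-enum pattern described above can be sketched as follows (the enum and function are invented for illustration): every enumerator has a case, and the code after the switch should be unreachable, so an assert there flags a newly added enumerator that this switch was never taught about.

```cpp
#include <cassert>
#include <string>

enum class Color { red, green, blue };

std::string name(Color c)
{
    switch (c) {
        case Color::red:   return "red";
        case Color::green: return "green";
        case Color::blue:  return "blue";
    }
    // Reached only if a new enumerator was added without updating
    // this switch (or an invalid value was forced into the enum).
    assert(false && "unhandled Color enumerator");
    return "?";
}
```

Most compilers will additionally warn about a switch over an enum with a missing case, so the assert and the warning complement each other.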
Is it true that an assertion exists in the debug build, but not in the release build?
If you want to verify/assert something, don't you want to do that in the release build as well as in the debug build?
The only guess is that because an exception is often non-fatal that it makes for a codebase that does not die in some odd state. The counter-point is that the fatality of an assertions points right to where the problem is, thus easy to debug.
Personally I prefer to take the risk of an assertion as I feel that it leads to more predictable code that is easier to debug.
Assertions can be left on simply by not defining NDEBUG, so that's not really an issue.
The real problem is that assertions call abort(), which instantly stops the program. This can cause problems if there is critical cleanup your program must do before it quits. Exceptions have the advantage that destructors are properly called, even if the exception is never caught.
As a result, in a case where cleanup really matters, exceptions are more appropriate. Otherwise, assertions are just fine.
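The cleanup difference can be sketched like this (names invented): a local object's destructor still runs while the thrown exception unwinds the stack, whereas a call to abort() at the throw site would skip it entirely.

```cpp
#include <stdexcept>

int cleaned = 0;

struct Cleanup {
    ~Cleanup() { ++cleaned; } // stands in for flushing logs, closing files...
};

void failing()
{
    Cleanup c;
    throw std::runtime_error("boom"); // c's destructor runs during unwinding
    // abort() here would terminate immediately, skipping ~Cleanup()
}
```

After catching the exception, the cleanup has observably happened; with abort(), it would not have.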
We use assertions to document assumptions.
We ensure in code review that no application logic is performed in the asserts, so it is quite safe to turn them off just shortly before release.
One reason to veto assert() is that it's possible to write code that works correctly when NDEBUG is defined, but fails when NDEBUG is not defined. Or vice versa.
It's a trap that good programmers shouldn't fall into very often, but sometimes the causes can be very subtle. For example, the code in the assert() might nudge memory assignments or code positions in the executable such that a segmentation fault that would happen, does not (or vice versa).
Depending on the skill level of your team, it can be a good idea to steer them away from risky areas.
Note: letting an exception escape a destructor during stack unwinding calls std::terminate, and since C++11 destructors are noexcept by default, so throwing out of a destructor terminates the program in any case.