Is std::error_code a good way to issue warnings? [closed] - c++

I'm currently using std::error_code to give feedback to the users of my API when something goes wrong. Would it be semantically acceptable to add an std::error_condition of type warning to notify my users that there was a minor issue but that operations will continue? Or should I only use logging for this?

If I got it correctly, you're asking if returning a warning should be considered abusing std::error_code semantics or not.
Now, the standard introduces error_code as part of the standard diagnostics library
[diagnostics.general] This Clause describes components that C++ programs may use to detect and report error conditions.
and, as far as I know, it places no semantic requirements on what an "error condition" is. We can just assume that these are meant to report that something went wrong, but the standard does not seem to specify what the effects of a partially fulfilled operation should be; the operation itself has to tell you.
The only semantic requirement I see is that error_code (and error_condition) is boolean convertible, that is, a 'zero' error code should always mean success.
Now, given that you presumably want an operation completing with a warning to be considered successful, I would not consider it valid to return such a warning via an error code;
that said, you may always let your operation return two error codes (in the way you like, maybe belonging to different categories), documenting that only the first one reports the fulfillment of the operation effects:
auto [err, war] = some_operation();
if (err)
    call_the_police();      // some_operation failed
else if (war)               // some_operation succeeded, but complains
{
    std::cerr << "hold breath...";
    if (war == some_error_condition)
        thats_unacceptable();
    // else ignore
}
That said, note that there are real use cases deviating from my reasoning above; indeed, things like HTTP result codes and libraries such as Vulkan do use non-zero 'result codes' for successful or partially successful conditions ...
moreover, here one of the authors of the diagnostics library both claims that "the facility uses a convention where zero means success." and at the same time uses error_code to model HTTP errors (200 status code included).
This casts some doubt either on the actual semantics of error_code::operator bool() (the meaning of which is not explicitly laid out in the standard) or on the ability of the standard diagnostics library to model the error code concept in a general way. YMMV.
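If you do decide to ship warnings through the diagnostics machinery anyway, a minimal sketch of a dedicated warning category might look like this (the enum, category and all names here are my own invention, not anything from the question):

#include <iostream>
#include <string>
#include <system_error>

// Hypothetical warning enum for an API; zero must keep meaning "no warning".
enum class api_warning { none = 0, deprecated_input = 1, lossy_conversion = 2 };

namespace std {
    template <> struct is_error_code_enum<api_warning> : true_type {};
}

namespace {
    struct api_warning_category_t : std::error_category {
        const char* name() const noexcept override { return "api_warning"; }
        std::string message(int v) const override {
            switch (static_cast<api_warning>(v)) {
                case api_warning::none:             return "no warning";
                case api_warning::deprecated_input: return "input format is deprecated";
                case api_warning::lossy_conversion: return "conversion lost precision";
            }
            return "unknown warning";
        }
    };
    const api_warning_category_t api_warning_category{};
}

std::error_code make_error_code(api_warning w) {
    return {static_cast<int>(w), api_warning_category};
}

int main() {
    std::error_code war = api_warning::lossy_conversion;
    if (war)   // non-zero, so operator bool() reports "there is something to say"
        std::cerr << war.category().name() << ": " << war.message() << '\n';
}

Note that this deliberately keeps the "zero means nothing to report" convention, so the boolean test stays meaningful; whether a non-zero value means failure or merely a warning is documented by the operation, as argued above.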

There are several options for a library to tell the user something went wrong or is not in line with what the function call expected.
Exceptions. But there's the exception overhead, try/catch...
boost/std::optional. If there was an error/warning you can return it (as the return value, or via an in/out or out parameter); otherwise the optional will be empty.
std::pair/std::tuple. That way you can encode more information in the return value (though a custom struct might also do it more explicitly)
You can introduce your own error data structure (rather than std::error_code, whose system-category values are OS-dependent); a small sketch follows this list.
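For instance, a minimal sketch of such a hand-rolled result type, combining a custom struct with std::optional for the optional warning (all names here are made up for illustration):

#include <iostream>
#include <optional>
#include <string>

// Hypothetical result type: the value is always usable, and a warning,
// if any, rides along without aborting the operation.
struct ratio_result {
    double value = 0.0;
    std::optional<std::string> warning;   // empty means "nothing to report"
};

ratio_result divide(double num, double den) {
    if (den == 0.0)
        return {0.0, "division by zero, result clamped to 0"};
    return {num / den, std::nullopt};
}

int main() {
    auto r = divide(1.0, 0.0);
    if (r.warning)
        std::cerr << "warning: " << *r.warning << '\n';
    std::cout << r.value << '\n';
}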
Killing the application from within a library is not very practical either. Even if it's an unrecoverable error in the library, it doesn't have to have much of an impact on the actual calling application/process/whatever. Let the caller decide what to do.
But all that is not generally applicable. There is no one-size-fits-all solution to error handling. It can be very specific to where/how/when your library is used, so you want to check what fits your purpose and how strict the calling constraints must/should be.
In all cases be clear about what the caller can expect from your error handling and don't make it feel like rocket science. Minimal design is very helpful here imo.

Related

Is it good practice to return an error to the caller instead of throwing the error right away in C++ [closed]

Is it considered good practice to return an error to the caller function like in Go, or should my program throw an error instead when it encounters it?
There are different common practices w.r.t. error handling in C++ - as it is a multi-paradigmatic language. For example:
Return status/error codes rather than results.
Return results only, throw exceptions on error.
Return value-or-error objects, such as std::expected.
Each of these has pros and cons. The most important thing is to be consistent in your program, and to coordinate with whoever calls your functions, so that you meet their needs.
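A minimal sketch of the third option, assuming a C++23 standard library with std::expected (the function and its error type below are made up for illustration):

#include <expected>   // C++23
#include <iostream>
#include <string>

// Hypothetical parser: returns the value on success, an error string on failure.
std::expected<int, std::string> parse_port(const std::string& s) {
    try {
        int port = std::stoi(s);
        if (port < 1 || port > 65535)
            return std::unexpected("port out of range: " + s);
        return port;
    } catch (const std::exception&) {
        return std::unexpected("not a number: " + s);
    }
}

int main() {
    auto port = parse_port("8080");
    if (port)
        std::cout << "listening on " << *port << '\n';
    else
        std::cout << "error: " << port.error() << '\n';
}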
For a detailed presentation of current options and a future potential alternative, see this talk by Brand & Nash at the annual C++ conference CppCon:
CppCon 2018: "What Could Possibly Go Wrong?: A Tale of Expectations and Exceptions"
Depends on your viewpoint. Some people swear by throwing exceptions, others will point out some of the following:
C++ has been designed to support exception-free operation with zero overhead, with the consequence that the throwing code paths are more involved. Exceptions are exceptionally slow when thrown. So, at the very least, you should avoid throwing exceptions within the performance critical code paths.
Another argument against exceptions is, that correct error handling is as much of a code feature as anything else, and that it helps to have the error code paths explicit.
A third argument against exceptions is that C++ has been designed to allow overloading of almost any operator, allowing straightforward statements such as a = b; to throw. As such, code written to use exceptions must be written in a special exception-safe way (construct in local variables and swap() to commit changes) if exceptions are allowed in a program.
As a corollary, code written to use exceptions simply does not mix well with code that avoids exceptions. The latter will not be written in an exception-safe style, and will consequently blow up in your face when an exception skips parts of its execution.
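For reference, a minimal sketch of the "construct locally, then swap() to commit" idiom mentioned above (the class is invented for illustration):

#include <string>
#include <vector>

// All throwing work happens on a local temporary, so if an exception
// escapes, *this is left untouched (the strong exception guarantee).
class config {
    std::vector<std::string> lines_;
public:
    void replace_lines(const std::vector<std::string>& new_lines) {
        std::vector<std::string> tmp(new_lines);  // may throw (allocation/copies)
        lines_.swap(tmp);                         // never throws: the commit point
    }
};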
Sorry, I don't know any good arguments for using exceptions. All the arguments I've seen ("it makes the code cleaner" and such) don't really seem to cut it, imho. Please refer to some exception enthusiasts for arguments for using exceptions.
Bottom line:
There are many projects out there that fully embrace exceptions, and there are other projects that ban them from their code. And because code from these two camps doesn't mix well, you will need to stick to however the project that you are working on does it.

Does double exclamation (!!) in C++ cost more CPU time? [closed]

I know it's a trick to do boolean conversion. My question is primarily about the resource cost of writing it this way. Will the compiler just ignore the "!!" and do the implicit boolean conversion directly?
If you have any doubts you can check the generated assembly; note that at the assembly level there is no such thing as a boolean type anyway. So yes, it's probably all optimised out.
As a rule of thumb, code that mixes types, and therefore necessitates type conversions, will run slower; although that is outweighed by another rule of thumb, which is: write clear code.
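For example, a pair of functions you could feed to a disassembler or Compiler Explorer to compare; on mainstream compilers with optimisation enabled both typically lower to the same code, though that is something to verify rather than assume:

// Two ways to collapse an int to a bool.
bool via_double_bang(int x) { return !!x; }
bool via_cast(int x)        { return static_cast<bool>(x); }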
It depends.
If you limit attention just to basic types that are convertible to bool and can be an operand of the ! operator, then it depends on the compiler. Depending on the target system, the compiler may emit a sequence of instructions that gives the required effect, but not in the way you envisage. Also, a given compiler may treat things differently with different optimisation settings (e.g. compiling for debugging versus release).
The only way to be sure is to examine the code emitted by the compiler. In practice, it is unlikely to make much difference. As others have commented, you would be better off worrying about getting your code clear and working correctly than about the merits of premature optimisation techniques. If you have a real need (e.g. the operation is in a hotspot identified by a profiler) then you will have data to understand what the need is, and can identify realistic options to do something about it. Practically, I doubt there are many real-world cases where it would make any difference.
In C++, with user-defined types, all bets are off. There are many possibilities, such as classes that have an operator!() returning a class type, or a class that has an operator!() but not an operator bool(). The list goes on, and there are many permutations.
There are cases where the compiler would be incorrect in doing such a transformation: !!x would be expected to be equivalent to x.operator!().operator!(), but there is no actual requirement (coding guidelines aside) for that sequence to give the same net effect as x.operator bool(). Practically, I wouldn't expect many compilers to even attempt to identify an opportunity in such cases; the analysis would be non-trivial and would probably not give many practical benefits (optimising single instructions is rarely where the gains are to be made in compiler optimisation).
Again, it is better for the programmer to focus on getting code clear and correct, rather than worrying about how the compiler optimises single expressions like this. For example, if calling an operator bool() is intended, then it is better to provide that operator AND write an expression that uses it (e.g. bool(x)) rather than hoping the compiler will convert a hack like !!x into a call of x.operator bool().
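A contrived illustration of that last point (the type is invented purely for demonstration): a class whose operator! does not agree with its operator bool, so folding !!x into bool(x) would change behaviour:

#include <iostream>

struct Weird {
    explicit operator bool() const { return true; }
    bool operator!() const { return true; }   // deliberately inconsistent with operator bool
};

int main() {
    Weird x;
    std::cout << static_cast<bool>(x) << ' ' << !!x << '\n';  // prints "1 0"
}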

Undecided between using exceptions or reporting error messages [closed]

I'm working on a small project for numerical integration. I've been reading topics like this one and I couldn't decide if I should throw exceptions when the user supplies bad integration limits, requests impossible precision and things like that.
Also, what if a procedure fails to converge for the provided conditions? Should I throw an exception?
Right now what I have are error codes represented in integer variables, which I pass around between many routines, but I'd like to reduce the number of variables the user must declare and whose value they must consult.
I was thinking about a scenario where exceptions are thrown by the library routines in the circumstances I mentioned above and caught by the user. Would that be acceptable? If not, what would be a good approach?
Thanks.
While the question is too broad, I will try to give some general guidance, since this is one of the topics I often talk about.
In general, exceptions are useful when something happened which really shouldn't have happened.
I can give a couple of examples. A programmatic error would be one: when a function call contract is broken, I usually advise throwing an exception rather than returning an error.
An unrecoverable error should trigger an exception; however, judging between a recoverable and a non-recoverable error is not always possible at the call site. For example, an attempt to open a non-existing file is usually a recoverable error, which warrants a failure code. But sometimes the file simply must be there, and there is nothing the calling code can do when it is not, so the error becomes unrecoverable. In the latter case, calling code might want the file-opening function to throw an exception rather than return a code.
This introduces the whole topic of exception policies: functions are told whether they need to throw exceptions or return errors.
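A minimal sketch of such a policy, loosely modelled on the dual-interface pattern std::filesystem uses (the function and its names are hypothetical): one overload reports through an error_code out-parameter, the other throws.

#include <system_error>

// Reporting overload: never throws, fills in the error code instead.
double solve(double x, std::error_code& ec) noexcept {
    if (x < 0.0) {
        ec = std::make_error_code(std::errc::invalid_argument);
        return 0.0;
    }
    ec.clear();
    return x * 0.5;   // stand-in for real numerical work
}

// Throwing overload: a thin wrapper for callers who prefer exceptions.
double solve(double x) {
    std::error_code ec;
    double r = solve(x, ec);
    if (ec)
        throw std::system_error(ec, "solve failed");
    return r;
}

Callers who want codes call the first overload; callers who prefer exceptions call the second.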
Before C++11, exceptions were avoided in projects where performance mattered (-fno-exceptions). Now it appears that exceptions which are not thrown do not impact performance (see this and this), so there is little reason not to use them.
A paranoid, but old, approach would be to divide your program into two parts, UI and numerical library. The UI could be written in any language and use exceptions. The numerical library would be C or C++ and use no exceptions. For instance (on Windows, but it doesn't matter), you could have a UI in C# with exceptions that calls an "unsafe" C++ .dll where exceptions are not used.
The alternative to exceptions is the classic return -1;: the caller has to check the return value of every call (even with optional, the caller still has to check for errors). When a series of nested function calls is executed and an error arises in the deepest function, you have to propagate the error all the way up: you have to check for errors in every call you make.
With exceptions, you use a try{} block that handles errors from any call depth inside it. Code to handle errors appears only once and does not pollute your numerical library (or whatever you are creating).
Use exceptions!
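As a rough sketch of that style in the numerical-integration setting (the routine below is a toy, not a real integrator): the error is thrown deep inside the library and handled once at the call site.

#include <cmath>
#include <iostream>
#include <stdexcept>

// Hypothetical integration routine: throws when the caller supplies
// bad limits, so intermediate layers need no error-propagation code.
double integrate_sqrt(double lo, double hi) {
    if (lo > hi || lo < 0.0)
        throw std::domain_error("invalid integration limits");
    // Trivial midpoint rule, just to have something to run.
    const int n = 1000;
    double h = (hi - lo) / n, sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::sqrt(lo + (i + 0.5) * h) * h;
    return sum;
}

int main() {
    try {
        std::cout << integrate_sqrt(0.0, 1.0) << '\n';   // ~0.6667
        std::cout << integrate_sqrt(1.0, -1.0) << '\n';  // throws
    } catch (const std::domain_error& e) {
        std::cerr << "error: " << e.what() << '\n';
    }
}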

Is there documentation for asio that explains the possible error codes and why they would result? [duplicate]

This question already has an answer here:
Which Boost error codes/conditions are returned by which Boost.Asio calls?
For example, for asio::async_connect() the documentation doesn't list the possible error codes that could result.
It does provide a bunch of error_codes and a brief explanation, but it doesn't tell me which errors the async_connect function could send to the handler. Basically, I want to know if an error is something that can be recovered from or not.
I'd rather not have to go through all the errors for every handler to work out whether it can be recovered from or not. My reasoning is that sometimes, depending on the situation, error_x might be recoverable; other times it's not.
What Joachim said holds some truth (although these underlying calls are also not documented within Asio documentation).
There is only a very small set of generic errors like operation_aborted that always apply.
You could look at mapping error_code to error_condition, which is more high-level and should reduce the decision domain, if only across different platforms.
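As a sketch of that kind of check in a completion handler, assuming standalone Asio's std::error_code (Boost.Asio would use boost::system::error_code and its errc equivalents instead):

#include <iostream>
#include <system_error>

// Comparing against std::errc (an error_condition) is portable,
// unlike comparing raw platform-specific error values.
void on_connect(const std::error_code& ec) {
    if (!ec) return;                               // success, nothing to do
    if (ec == std::errc::connection_refused || ec == std::errc::timed_out)
        std::cerr << "recoverable, retrying: " << ec.message() << '\n';
    else
        std::cerr << "giving up: " << ec.message() << '\n';
}

int main() {
    on_connect(std::make_error_code(std::errc::connection_refused));
}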

Better language feature than exception handling in C++? [closed]

(not sure if it's only a C++ thing)
Exception handling is hard to learn in C++ and is certainly not a perfect solution, but in most cases (other than some specific embedded software contexts) it's the best solution we currently have for error handling.
What about the future?
Are there other known ways to handle errors that are not implemented in most languages, or are only academical research?
Put another way: are there (supposedly) known better (imperfect is ok) ways to handle errors in programming languages?
Well, there have always been return codes, errno and the like. The general problem is that these can be ignored or forgotten about by programmers who are unaware that a particular call can fail. Exceptions are frequently ignored or missed by programmers too. The difference is that if you don't catch an exception the program dies. If you don't check a return code, the program continues on, operating on invalid data. Java tried to force programmers to catch all of their exceptions by creating checked exceptions, which cause a compilation error if you don't specify exactly when they can be propagated and catch them eventually. This turned out to be insanely annoying, so programmers catch the exceptions with catch(...){/* do nothing*/} (in C++ parlance) as close to their source as possible, and the result is no better than ignoring a return code.
Besides these two error techniques, some of the functional languages support the use of various monadic return types which can encapsulate both errors and return values (e.g. Scala's Either type, Option type, or a monad that lets you return an approximate answer along with a failure log). The advantage of these is that the only way to work with the successful return value is to execute code inside the monad, and the monad ensures that the code isn't run if there was a failure. (It's rather complicated to explain for someone who isn't a Haskell or Scala programmer.) I haven't worked with this model so much, but I expect it would be as annoying to some people as checked exceptions are.
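A rough C++ analogue of that monadic style, assuming a C++23 library with std::optional::and_then (the functions are invented for illustration): each step runs only if the previous one produced a value, so the failure path needs no explicit checks.

#include <iostream>
#include <optional>
#include <string>

std::optional<int> parse_id(const std::string& s) {
    if (s.empty() || s.find_first_not_of("0123456789") != std::string::npos)
        return std::nullopt;
    return std::stoi(s);
}

std::optional<std::string> lookup(int id) {
    if (id == 42) return "Deep Thought";
    return std::nullopt;
}

int main() {
    auto user = parse_id("42").and_then(lookup);   // runs lookup only if parsing succeeded
    std::cout << user.value_or("<not found>") << '\n';
}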
Basically, IMO, error checking is a matter of attitude. You have three options:
Realize you have to deal with it, accept that fact cheerfully, and take the effort to write correct error handling code. (Any of them)
Use language features that force you to deal with it, and get annoyed because you don't want to deal with it, particularly when you're sure the error will never happen. (Checked Exceptions, Monads)
Use language features that allow you to ignore it easily, and write unsafe code because you ignored it. (Unchecked Exceptions, Return Codes)
Get the worst of both options 2 and 3 by using language features that force you to deal with it, but deal with every error in a way that explicitly ignores it. (Checked Exceptions, Monads)
Obviously, you should try to be a #1 type programmer.
Assuming you want your code to do different things according to whether an error occurs or not, you have basically three options:
1) Make this explicit everywhere in the code (C-style error return value checking). The main perceived disadvantage is that it's verbose.
2) Use non-local control flow to separate error-handling code from the "usual path" (exceptions). The main perceived disadvantage is keeping track of all the places your code can go next, especially if documented interfaces don't always list them all. Java's experiment with checked exceptions to "deal with" the latter issue wasn't entirely successful either.
3) Sit on errors until "later" (IEEE-style sticky error bits and quiet NaNs, C++ error flags on streams), and check them only when convenient for the caller. The main perceived disadvantage is that setting and clearing errors requires careful use by everyone, and also that information available at the site of the error may be lost by the time it's handled.
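As a tiny example of option 3 using the standard stream error flags:

#include <iostream>
#include <sstream>

// std::istream accumulates a sticky failbit; the caller checks it once,
// when convenient, instead of after every single read.
int main() {
    std::istringstream in("10 twenty 30");
    int a = 0, b = 0, c = 0;
    in >> a >> b >> c;              // the second read fails and the error "sticks"
    if (!in)
        std::cerr << "parse failed somewhere in the sequence\n";
    std::cout << a << ' ' << b << ' ' << c << '\n';   // prints "10 0 0"
}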
Take your pick. (1) looks bloated and complex, and newbies mess it up by not checking for errors properly, but each line of code is easy to reason about. (2) looks small and simple, but each line of code might cause a jump to who-knows-where, so newbies mess it up by not implementing exception guarantees properly, and everyone sometimes catches exceptions in the wrong places or not at all. (3) is great when designed well, but you never know which of several possibilities each line of code is actually doing, so in a UB-rich environment like C++ that's easy to mess up too.
I think the underlying problem is basically hard: handling errors explicitly increases the branches in your code. Handling errors quietly increases the amount of state that you need to reason about, in a particular bit of code.
Exceptions also have the "is it truly exceptional?" problem. You could prevent exceptions from causing confusing control flow, by throwing them only in cases that your entire program can't recover from. But then you can't use them for errors which are recoverable from the POV of your program but not from the POV of the subsystem, so for those cases you fall back to the disadvantages of either (1) or (3).
I can't say that it is better than exceptions, but one alternative is the way that Erlang developers implement fault tolerance, known as "Let it fail". To summarize: each task gets spawned off as a separate "process" (Erlang's term for what most people call "threads"). If a process encounters an error, it just dies and a notification is sent back to the controlling process, which can either ignore it or take some sort of corrective action.
This supposedly leads to less complex and more robust code, as the entire program won't crash or exit due to lack of error handling. (Note that this robustness relies on some other features of the Erlang language and run-time environment.)
Joe Armstrong's thesis, which includes a section on how he envisions fault-tolerant Erlang systems, is available for download: http://www.erlang.org/download/armstrong_thesis_2003.pdf
Common Lisp's condition system is regarded as being a powerful superset beyond what exceptions let you do.
The fundamental problem with exception handling in systems I've seen is that if routine X calls routine Y, which calls routine Z, which throws an exception, there's no clean way for Y to let its caller distinguish among a number of situations:
The call failed for some reason that Y doesn't know about, but X might; from Y's perspective, if X knows why Z failed, X should expect to recover.
The call failed for some reason that Y doesn't know about, but its failure caused Y to leave some data structures in an invalid state.
The call failed for some reason that Y does know about; from its perspective, if the caller can handle the fact that the call won't return the expected result, X should recover.
The call failed because the CPU is catching fire.
This difficulty stems, I think, from the fact that exception types are centered around the question of what went wrong, a question which in many cases is largely orthogonal to the question of what to do about it. What I would like to see would be for exceptions to include a virtual "isSatisfied" method, and for an attempt to swallow an exception whose isSatisfied method returns false to instead throw a wrapped exception whose isSatisfied method would chain to the nested one. Some types of exceptions, like trying to add a duplicate key to a non-corrupted dictionary, would provide a parameterless AcknowledgeException() method to set isSatisfied. Other exceptions implying data corruption or other problems would require that either an AcknowledgeCorruption() method be passed the corrupted data structure, or that the corrupt data structure be destroyed. Once a corrupt data structure is destroyed in the process of stack unwinding, the universe would be happy again.
I'm not sure what the best architecture would be, but providing a means by which exceptions can communicate the extent to which the system state is corrupt or intact would go a long way toward alleviating the problems with existing architectures.
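A very rough sketch of what that might look like in C++ (every name here is hypothetical; this is the idea being described, not an existing facility):

#include <exception>

// An exception that carries a flag saying whether the handler has actually
// dealt with the underlying condition, not merely caught the object.
class recoverable_error : public std::exception {
    bool satisfied_ = false;
public:
    const char* what() const noexcept override { return "recoverable error"; }
    bool is_satisfied() const noexcept { return satisfied_; }
    void acknowledge() noexcept { satisfied_ = true; }   // "I handled it"
};

// A catch site that cannot really fix the problem would rethrow (or wrap)
// the exception instead of calling acknowledge(), so an outer handler can
// still see that the condition is unresolved.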