Question: is it OK to rely on compiler optimizations while coding?
Let's say I have two functions, calculateF and calculateG, which both depend on another value returned by getValue. Sometimes I need both values; at other times I only need one of them.
// some function
double getValue(double value)
{
    double val(0.0);
    // do some math with value
    return val;
}

// calculateF depends on getValue
double calculateF(double value)
{
    double f(0.0);
    auto val = getValue(value);
    // calculate f which depends on val (and value)
    return f;
}

// calculateG depends on getValue
double calculateG(double value)
{
    double g(0.0);
    auto val = getValue(value);
    // calculate g which depends on val (and value)
    return g;
}
Now, I could write this more elegantly:
std::pair<double,double> calculateFG(double value)
{
    auto val = getValue(value);
    double f(0.0), g(0.0);
    // calculate f and g which depend on val (and value)
    return {f,g};
}
If I want both values:
double value(5.3);
auto [f,g] = calculateFG(value); // since C++17
// do things with f and g
If I want only one value, say f, I just don't use g, and it will be optimized out. So the performance of calculateFG should be the same as that of calculateF when I don't use g. Furthermore, if I need both f and g, I only need to call getValue once instead of twice.
The code is cleaner (one function calculateFG instead of both calculateF and calculateG), and faster when both f and g are required. But is relying on compiler optimization a wise choice?
It is hard to say whether it is wise or not; it depends on one compiler optimization in particular: function inlining.
If calculateFG is inlined, the compiler can optimize out the unused result. Once inlined, g is unused, so all the code that generates g is dead code[1]. (It may not be able to, for example, if the calculation code has side effects.)
If it is not inlined, I don't think the optimization can be applied (both f and g are always calculated).
Now you may wonder if it is possible to always inline specific functions.
Please note that the inline keyword does not force the compiler to inline a function; it is just a hint. With or without the keyword, it is the compiler's call. There are non-standard ways, though - How do I force gcc to inline a function?
[1] Relevant compiler options: -fdce -fdse -ftree-dce -ftree-dse
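For completeness, a minimal sketch of such a non-standard way. FORCE_INLINE is an illustrative macro name of my own; always_inline (GCC/Clang) and __forceinline (MSVC) are real extensions, but whether the request is honored remains compiler-specific.

#if defined(__GNUC__)
#  define FORCE_INLINE inline __attribute__((always_inline))
#elif defined(_MSC_VER)
#  define FORCE_INLINE __forceinline
#else
#  define FORCE_INLINE inline   // fall back to the standard hint
#endif

// Ask (not guarantee) that the definition be inlined at call sites:
FORCE_INLINE double getValue(double value)
{
    double val(0.0);
    // do some math with value
    return val;
}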
Modern C++ compilers are pretty good at optimization choices, given the chance.
That is to say, if you declare a function inline, that does not mean the optimizer will actually inline it 100% of the time. The effect is more subtle: inline exempts the function from the one-definition-rule restriction, so the definition can go into header files. That makes it a lot easier for the optimizer.
Now with your example of auto [f,g], optimizers are very good at tracking the use of simple scalar values and will be able to eliminate write-only operations. Inlining allows the optimizer to eliminate unnecessary writes in called functions too. For you, that means the optimizer can eliminate the writes to g in calculateFG when the calling code never reads g afterwards.
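To make that concrete, a small sketch of the call site (assuming the computation of g has no side effects):

double value(5.3);
auto [f, g] = calculateFG(value); // C++17
(void)g;  // g is never read; with inlining, the work producing it is dead code
// ... use only f ...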
Perhaps it is best to turn the logic inside-out. Instead of computing a value (getValue()), passing it to both calculateF() and calculateG(), and passing the results to another place, you can change the code to pass the functions instead of computed values.
This way, if the client code does not need calculateF's value, it won't call it. The same with calculateG. If getValue is also expensive, you can call it once and bind or capture the value.
These are concepts used extensively in the functional programming paradigm.
You could rewrite your calculateFG() function more or less like this:
// Note: returning a braced list from a function with an auto return type
// does not compile, so the two lambdas are wrapped in a std::pair (C++17
// class template argument deduction). This assumes calculateF/calculateG
// are changed to take the precomputed val directly.
auto getFG(double value)
{
    auto val = getValue(value);
    return std::pair{
        [val] { return calculateF(val); },
        [val] { return calculateG(val); }};
}
It sounds like your goal is to only perform the (potentially expensive) calculations of getValue(), f, and g as few times as possible given the caller's needs -- i.e. you don't want to perform any computations that the caller isn't going to use the results of.
In that case, it might be simplest to just implement a little class that does the necessary on-demand computations and caching, something like this:
#include <stdio.h>
#include <math.h>

class MyCalc
{
public:
    MyCalc(double inputValue)
        : _inputValue(inputValue), _vCalculated(false), _fCalculated(false), _gCalculated(false)
    {
        /* empty */
    }

    double getF() const
    {
        if (_fCalculated == false)
        {
            _f = calculateF();
            _fCalculated = true;
        }
        return _f;
    }

    double getG() const
    {
        if (_gCalculated == false)
        {
            _g = calculateG();
            _gCalculated = true;
        }
        return _g;
    }

private:
    const double _inputValue;

    double getV() const
    {
        if (_vCalculated == false)
        {
            _v = calculateV();
            _vCalculated = true;
        }
        return _v;
    }

    mutable bool _vCalculated;
    mutable double _v;
    mutable bool _fCalculated;
    mutable double _f;
    mutable bool _gCalculated;
    mutable double _g;

    // Expensive math routines below; we only want to call these (at most) one time
    double calculateV() const {printf("calculateV called!\n"); return _inputValue*sin(2.14159);}
    double calculateF() const {printf("calculateF called!\n"); return getV()*cos(2.14159);}
    double calculateG() const {printf("calculateG called!\n"); return getV()*tan(2.14159);}
};

// unit test/demo
int main()
{
    {
        printf("\nTest 1: Calling only getF()\n");
        MyCalc c(1.5555);
        printf("f=%f\n", c.getF());
    }
    {
        printf("\nTest 2: Calling only getG()\n");
        MyCalc c(1.5555);
        printf("g=%f\n", c.getG());
    }
    {
        printf("\nTest 3: Calling both getF and getG()\n");
        MyCalc c(1.5555);
        printf("f=%f g=%f\n", c.getF(), c.getG());
    }
    return 0;
}
I think that it's best to write your code in a way that expresses what you are trying to accomplish.
If your goal is to make sure that certain calculations are only done once, use something like Jeremy's answer.
A good function should do only one thing. I would design it like this:
class Calc {
public:
    Calc(double value) : value{value}, val{getValue(value)} {
    }

    double calculateF() const;
    double calculateG() const;

    // If it is really a common use case to call both together:
    std::pair<double, double> calculateFG() const {
        return {calculateF(), calculateG()};
    }

    static double getValue(double value);

private:
    double value;
    double val;
};
Whether the compiler will optimize depends on the rest of the code. For example, a debug message like log_debug(...) could prevent dead-code removal. The compiler can only get rid of dead code if it can prove, at compile time, that the code has no side effects (even if you force inlining).
Another option is to mark the getValue function with special compiler-specific attributes like pure or const. This allows the compiler to optimize away the second call to getValue. https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-g_t_0040code_007bpure_007d-function-attribute-3348
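A minimal sketch of that attribute (a GCC/Clang extension, not standard C++); the function name calculateFG_sum is purely illustrative. const promises that the result depends only on the arguments and that there are no side effects, so the compiler may fold repeated calls into one.

__attribute__((const)) double getValue(double value);

double calculateFG_sum(double value)
{
    // With the attribute, the compiler may emit a single call to getValue.
    return getValue(value) + getValue(value);
}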
Related
I have a function that returns a double. Any real number is a valid output. I'm using NaNs to signal errors, and I check for errors this way:
double foo();

const auto error1 = std::nan("1");
const auto error2 = std::nan("2");
const auto error3 = std::nan("3");

bool bit_equal(double d1, double d2) {
    return *reinterpret_cast<long long*>(&d1) == *reinterpret_cast<long long*>(&d2);
}

const auto value = foo();
if(std::isnan(value)) {
    if      (bit_equal(value, error1)) /*handle error1*/;
    else if (bit_equal(value, error2)) /*handle error2*/;
    else if (bit_equal(value, error3)) /*handle error3*/;
    else /*handle default error*/;
} else /*use value normally*/;
Alternatively, if the compiler support has caught up, I can write it this way
double foo();

constexpr auto error1 = std::nan("1");
constexpr auto error2 = std::nan("2");
constexpr auto error3 = std::nan("3");

constexpr bool bit_equal(double d1, double d2) {
    return std::bit_cast<long long>(d1) == std::bit_cast<long long>(d2);
}

const auto value = foo();
if(std::isnan(value)) {
    if      (bit_equal(value, error1)) /*handle error1*/;
    else if (bit_equal(value, error2)) /*handle error2*/;
    else if (bit_equal(value, error3)) /*handle error3*/;
    else /*handle default error*/;
} else /*use value normally*/;
Or even
double foo();

constexpr auto error1 = std::bit_cast<long long>(std::nan("1"));
constexpr auto error2 = std::bit_cast<long long>(std::nan("2"));
constexpr auto error3 = std::bit_cast<long long>(std::nan("3"));

const auto value = foo();
if(std::isnan(value)) {
    switch(std::bit_cast<long long>(value)) {
        case error1: /*handle error1*/; break;
        case error2: /*handle error2*/; break;
        case error3: /*handle error3*/; break;
        default: /*handle default error*/;
    }
} else /*use value normally*/;
I have to do this because comparing NaNs with == always returns false.
Is there a standard function to perform this comparison in C++?
Are any of these 3 alternatives better than the others? Although the last option seems the most succinct, it requires me to do return std::bit_cast<double>(error1); inside foo() rather than just return error1;.
Is there a better design where I can avoid using nan as an error value?
Is there a better design where I can avoid using nan as an error value?
Yes.
Throw an exception
Use a struct (or tuple, not really) as the return value (see the sketch below)
Use an out ref parameter
Since there are better alternatives, I don't think it's worth answering questions 1 and 2.
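A minimal sketch of the struct-return option (all names illustrative):

struct FooResult
{
    double value;
    int    error;   // 0 == success, anything else identifies the error
};

FooResult foo();

// caller:
const auto r = foo();
if (r.error) { /*handle r.error*/ }
else         { /*use r.value normally*/ }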
NaN error signaling
Returning NaNs as error indicators is certainly a valid design choice. If you write numeric code, I'm sure you will find many people who get annoyed when you throw exceptions on any invalid input instead of letting the error propagate through NaNs. "When in Rome, speak like the Romans", right? When in math, speak like the math.h functions ;-)
(Of course this depends on your use case and the expectations of your API users)
However, NaN payloads aren't that good. Using them as an error "hint" may work for you, so you can look at the payload in a data dump and find out where it came from. But as you certainly have noticed, there is no predefined inverse to nan(const char*). Also, NaN payloads tend not to propagate well. For example, while most math functions will return a NaN when they received a NaN input, they will give you a new one without the payload.
There is a good article by agner.org talking about this very topic: Floating point exception tracking and NAN propagation
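To make the propagation point concrete, a small sketch (C++20 for std::bit_cast; whether the payload survives the sin call is implementation-specific):

#include <bit>
#include <cmath>
#include <cstdint>

void payload_demo()
{
    double x = std::nan("7");                     // quiet NaN carrying payload "7"
    double y = std::sin(x);                       // still a NaN, but the payload need not survive
    auto xbits = std::bit_cast<std::uint64_t>(x); // inspect the raw bits to see
    auto ybits = std::bit_cast<std::uint64_t>(y); // whether the payload propagated
    (void)xbits; (void)ybits;
}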
My personal recommendation would be:
Keep returning NaN on error because it is fast to check
Keep using payloads as error hints
Use a different mechanism to signal the specific type of error
Alternative mechanisms
Options that come to mind:
Exceptions. Maybe paired up with a non-throwing variant for users that are content with just a NaN
double foo();
double foo(std::nothrow_t) noexcept;
double bar()
{
    try {
        double x = foo();
    } catch(const std::domain_error&) {
        error();
    }

    double y;
    if(std::isnan(y = foo(std::nothrow)))
        error();
}
Optional error code or struct output argument: double foo(Error* error=nullptr). After the call, check for NaN. If NaN, read exact error from error struct. If the user is not interested in the exact error, they don't pass a struct to begin with
struct Error
{
    int errcode;

    operator bool() const noexcept
    { return errcode; }

    /** throw std::domain_error with error message */
    [[noreturn]] void raise() const;

    void check() const
    {
        if(errcode)
            raise();
    }
};
double foo(Error* err=nullptr) noexcept;
double bar()
{
    Error err;
    double x;

    x = foo(); // just continue on NaN

    if(std::isnan(x = foo()))
        return x; // abort without error explanation

    if(std::isnan(x = foo(&err)))
        err.raise(); // raise exception

    return x;
}
std::variant<double, Error> return value. In my opinion the API is not well suited for this; it is too verbose. This will be fixed in C++23 with std::expected. It is also less efficient because the data will likely be returned on the stack.
std::pair<double, Error>. If the Error type is a simple struct without a destructor and with a maximum size of 8 bytes, or a primitive type (so it can be returned in a register), this will be very efficient and it is also easy to check. Building your own custom pair-like type that offers some convenience methods like get_result_or_throw_error() is also possible.
template<class T>
struct Result
{
    T result;
    Error err;

    Result() = default;

    explicit constexpr Result(T result) noexcept
        : result(result),
          err() // set to 0
    {}

    explicit constexpr Result(Error err, T result=NAN) noexcept
        : result(result),
          err(err)
    {}

    operator bool() const noexcept
    { return err; }

    T check() const
    {
        err.check(); // may throw
        return result;
    }

    bool unpack(T& out) const noexcept
    {
        if(err)
            return false;
        out = result;
        return true;
    }
};

Result<double> foo() noexcept;

double bar()
{
    double x = foo().check();  // throw on error
    double y = foo().result;   // ignore error. Continue with NaN
    return x + y;
}

Result<double> baz() noexcept
{
    Result<double> rtrn;
    double x;
    if(! (rtrn = foo()).unpack(x))
        return rtrn; // propagate error
    rtrn.result = x + 1.; // continue operation
    return rtrn;
}
Further discussion
To give a bit more of a personal opinion and also delve into a few more performance concerns:
Exceptions
Well, all the usual aspects of exception handling and when to use them apply. See for example When and how should I use exception handling?
I think at this point the general consensus on exceptions is that they should not be part of the regular control flow and should only be used for very rare, exceptional cases where you most likely want to abort the operation instead of, say, mitigating the error. It is just too easy to forget to catch exceptions at all call sites, so they tend to travel very far up the call chain before being caught.
So their use is very situational. Do you want your users to explicitly deal with any error condition at the site where it appears? Then don't use exceptions, because users of your API will definitely not be bothered to use a try-catch block everywhere. If you want the error to get out of the way as far as possible, use them.
As for the idea of using a second set of functions without exceptions: Well, it doesn't compose well. It's feasible for a small set of functions but do you really want to write every piece of code twice, once with and once without exceptions? Probably not.
Error output parameter
This is probably the most flexible option while remaining very efficient. Passing an additional parameter has a minor cost but it isn't too bad.
The main benefit is that this is the only option besides exceptions that allows you to compose complex error reports with dynamic memory allocation for error messages etc. without incurring extra costs in the no-error case. If you make the Result object complex enough to require a destructor, it will be passed on the stack and you need to re-read the error code and actual result value after every function call and then its destructor will run.
In contrast, the Error object will be rarely touched. Yes, its destructor will run once it goes out of scope. However, I expect that most code bases will have one error object very far up the call chain and then just pass it down and reuse that object as needed.
If you make the Error object complex you might find yourself in a situation where a caller wants the error code but not the error message, e.g. because they expect an error and want to mitigate it instead of reporting it. For this case, it might make sense to add a flag to the object to indicate that the error message should not be filled.
struct Error
{
    int errcode;
    bool use_message;
    std::string message;
};
variant, expected
I think I've made it sufficiently clear above that I don't think std::variant has a suitable API for this task. std::expected may one day be available on every platform you target but right now it isn't and you will definitely draw the ire of your release engineers if you start using C++23 features and they have to build your code for RHEL-8 or something similarly long-lived.
Performance-wise all the points I discuss below for Result apply. In addition, the floating point result will always be returned either on the stack or in a general purpose register. Using the Result or std::pair approach will at least get double results in a floating point register on Mac/Linux/BSD, which is a minor advantage, but not huge. floats will still be packed in a GP register, though.
Result type
From an API design perspective, the nice thing about a Result object is that the caller cannot ignore the possibility of an error. They may or may not remember to check for NaN or catch exceptions but with Result, they always have to unpack the contained value and in doing so, decide on their desired error handling.
From a performance perspective, the main point when writing a Result type is that you don't want to make it more expensive to access the actual return value unless you don't care about runtime and code size. This means making sure the return value can be passed in registers instead of the stack.
On Windows this is very hard to achieve because the Windows calling convention only uses a single register for return objects and I don't think they pack two 32 bit values into one 64 bit register. At this point your only options are a) accept the cost of stack return values b) try to pack error code and result value in one scalar like you did with NaN payloads or other tricks like negative integers c) not use this approach.
On all other major x86-64 platforms, you have two registers to work with. This is far more feasible unless you regularly return 16-byte payloads like std::complex<double>.
However, for this to work, the Result must not have a non-trivial destructor or copy/move constructor. For all intents and purposes, this means you cannot have dynamic error messages in the Error type. There are ways around this, if you absolutely need them: you enforce that every access to the actual result also checks the error and deallocates, either reporting or ignoring it in the process. Use [[nodiscard]] on the return values to ensure the return value is checked at all. This works, for example:
struct Error
{
    std::string* message;

private:
    [[noreturn]] static void raise_and_delete_msg(std::string*);

public:
    /*
     * Note: clang needs always_inline to generate efficient
     * code here. GCC is fine
     */
    [[noreturn, gnu::always_inline]] void raise() const
    { raise_and_delete_msg(message); }

    void discard() const noexcept
    { delete message; }

    operator bool() const noexcept
    { return message != nullptr; }

    void check() const
    {
        if(message)
            raise();
    }
};

template<class T>
class Result
{
    T result;
    Error err;

public:
    constexpr Result()
        : result(),
          err()
    {}

    explicit Result(T result)
        : result(std::move(result)),
          err()
    {}

    /** Takes ownership of message. Will delete */
    explicit Result(std::unique_ptr<std::string>&& message)
        : err(Error{message.release()})
    {}

    Result(std::unique_ptr<std::string>&& message, T invalid)
        : result(std::move(invalid)),
          err(Error{message.release()})
    {}

    T unchecked() noexcept
    {
        err.discard();
        return std::move(result);
    }

    T checked()
    {
        err.check();
        return std::move(result);
    }

    bool unpack(T& out) noexcept
    {
        if(err) {
            err.discard();
            return false;
        }
        out = std::move(result);
        return true;
    }
};

[[nodiscard]] Result<double> foo();

double bar()
{
    return foo().checked() + 1.;
}
However, at this point you quickly exceed the 8 bytes you can reasonably use for sizeof(Error) before falling back to stack return values, so I'm not sure this is worth it. For example, if you want an error code plus a message, you need to dynamically allocate both or do other fancy tricks. Plus, [[nodiscard]] only produces a warning, so you can still easily get memory leaks.
Conclusion
If I have to make suggestions:
Use exceptions if a) they are in line with the coding style and API you normally use plus b) the expectations that both you and your API users have on these functions and c) failure should be rare, costly, and loud
Use Error output arguments if you primarily target Windows or if you want complex error reporting with dynamic messages or similar.
Use Result for simple error codes on Linux/Mac or if you want your API users to always make a conscious decision to check or ignore an error. In that case, you may also accept the additional runtime cost associated with complex Error objects or any such object on Windows.
I have a legacy interface that has a function with a signature that looks like the following:
int provide_values(int &x, int &y)
x and y are considered output parameters in this function. Note: I'm aware of the drawbacks of using output parameters and that there are better design choices for such an interface. I'm not trying to debate the merits of this interface.
Within the implementation of this function, it first checks to see if the addresses of the two output parameters are the same, and returns an error code if they are.
if (&x == &y) {
    return -1; // Error: both output parameters are the same variable
}
Is there a way at compile time to prevent callers of this function from providing the same variable for the two output parameters without having such a check within the body of the function? I'm thinking of something similar to the restrict keyword in C, but that only is a signal to the compiler for optimization, and only provides a warning when compiling code that calls such a function with the same pointer.
No, there's not. Keep in mind that the calling code could derive x and y from references returned by some arbitrary black-box functions. But even otherwise, it is provably impossible (essentially by reduction from the halting problem) for the compiler to robustly determine whether they refer to the same object, since which objects they are bound to is determined by the execution of the program.
If all you want is to prevent the user from calling provide_values(xyz, xyz), you can use a macro as in the following example. However, this won't protect the user from calling provide_values(xyz, reference_to_xyz), so the whole thing is probably pointless anyway.
#include <cstring>
void provide_values(int&, int&) {}
#define PROV_VAL(x, y) if (strcmp((#x),(#y))) { provide_values(x, y); } else { throw -1; }
int main()
{
    int x;
    int y;

    PROV_VAL(x,y);
    //PROV_VAL(x,x); // this throws

    int& z = x;
    PROV_VAL(x,z); // this passes though!
}
Possible duplicates are addressed at the bottom.
I was wondering if it is possible to do a compile-time check to see if one function is called before another.
My use case looks something like this:
auto f = foo();
if(!f.isOk())
    return f.getError();
auto v = f.value();
So in this case I would want a compile-time error if the user did not call isOk before calling value.
As far as I know and have searched, this does not seem possible, but I wanted to ask here just to be sure I didn't miss any C++ magic.
FauxDupes:
How to check at compile time that a function may be called at compile time?
This is about knowing whether your function is a constexpr function. I want to know whether one function has been called before the other.
What you want is not possible directly without changing your design substantially.
What you can do is enforce calling always both by wrapping them in a single call:
??? foo(const F& f) {
    return f.isOk() ? f.value() : f.getError();
}
However, this just shifts the problem to choosing the return type. You could return a std::variant or, with some changes to the design, a std::optional, but whatever you do it will be left to the caller to check what actually has been returned.
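For illustration, a sketch of the std::optional variation (F, Value, and the member functions are assumed from the question's design; note that the error details are dropped, which is the design change this implies):

#include <optional>

std::optional<Value> tryGetValue(const F& f)
{
    if (f.isOk())
        return f.value();
    return std::nullopt;  // the caller still has to check before use
}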
Don't assume the most careless user and don't try to protect them from every possible mistake. Instead, assume that they do read the documentation.
Having to check whether a returned value is valid is a quite common pattern: functions that return a pointer can return a null-pointer, functions returning an iterator can return the end iterator. Such cases are well documented and a responsible caller will check if the returned value is valid.
To get further inspiration I refer you to std::optional, a quite modern addition to C++, which also heavily relies on the user to know what they are dealing with.
PS: Just as one counter-example, a user might write code like this, which makes it impossible to make the desired check at compile time with your current design:
int n;
std::cin >> n;
auto f = foo();
if(n > 10 && !f.isOk())
return f.getError();
auto v = f.value();
One strategy for this kind of thing is to leverage __attribute__((warn_unused_result)) (for GCC) or _Check_return_ (msvc).
Then, change foo() to return the error condition:
SomeObj obj;
auto result = foo(obj);
This will nudge the caller into handling the error. Of course there are obvious limitations: foo() cannot be a constructor, for example, and the caller cannot use auto for the typename.
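A sketch of the declaration this implies; foo2 and the portable C++17 [[nodiscard]] spelling are added here purely for illustration:

// GCC/Clang spelling:
__attribute__((warn_unused_result)) int foo(SomeObj& out);

// portable since C++17:
[[nodiscard]] int foo2(SomeObj& out);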
One way to ensure order is to transform the temporal dependency into a physical dependency:
Move the methods F::getError() and F::value() into their own wrapper structures (Error, Value).
Change bool F::isOk() to something like:
std::variant<Error, Value> F::isOk()
Then, you cannot use Error::getError or Value::value() before calling isOk, as expected:
auto f = foo();
auto isOk = f.isOk();

if (auto* e = std::get_if<Error>(&isOk)) // Or std::visit
    return e->getError();

auto& value = std::get<Value>(isOk);
auto v = value.value();
I'm working in a C++ environment where:
a) We are forbidden to use exceptions
b) It is application/data server code that evaluates lots of requests of different kinds
I have a simple class encapsulating the result of a server operation that is also used internally by many functions there.
class OpResult
{
.....
bool succeeded();
bool failed(); ....
... data error/result message ...
};
As I try to keep all functions small and simple, lots of blocks like this arise:
....
OpResult result = some_(mostly check)function(....);
if (result.failed())
    return result;
...
The question is: is it bad practice to make a macro looking like this and use it everywhere?
#define RETURN_IF_FAILED(call)     \
{                                  \
    OpResult result = call;        \
    if (result.failed())           \
        return result;             \
}
I understand that someone can call it nasty, but is there a better way?
What other way of handling results and avoiding so much boilerplate code would you suggest?
It's a trade off. You are trading code size for obfuscation of the logic. I prefer to preserve the logic as visible.
I dislike macros of this type because they break Intellisense (on Windows), and debugging of the program logic. Try putting a breakpoint on all 10 return statements in your function - not the check, just the return. Try stepping through the code that's in the macro.
The worst thing about this is that once you accept it, it's hard to argue against the 30-line monster macros that some programmers LOVE to use for commonly-seen mini-tasks because they 'clarify things'. I've seen code where different exception types were handled this way by four cascading macros, resulting in 4 lines in the source file, with the macros actually expanding to > 100 real lines. Now, are you reducing code bloat? No. It's impossible to tell easily with macros.
Another general argument against macros, even if not obviously applicable here, is the ability to nest them with hard-to-decipher results, or to pass in arguments that expand into weird but compilable code, e.g. the use of ++x in a macro that uses its argument twice. I always know where I stand with plain code, and I can't say that about a macro.
EDIT: One comment I should add is that if you really do repeat this error-check logic over and over, perhaps there are refactoring opportunities in the code. Not a guarantee, but a better way of reducing code bloat if it applies: look for repeated sequences of calls and encapsulate common sequences in their own functions, rather than addressing how each call is handled in isolation.
Actually, I prefer a slightly different solution. The thing is that the result of an inner call is not necessarily a valid result of the outer call. For example, the inner failure may be "file not found", but the outer one "configuration not available". Therefore my suggestion is to recreate the OpResult (potentially packing the "inner" OpResult into it for better debugging). This all goes in the direction of "InnerException" in .NET.
Technically, in my case the macro looks like this:
#define RETURN_IF_FAILED(call, outerresult)   \
{                                             \
    OpResult innerresult = call;              \
    if (innerresult.failed())                 \
    {                                         \
        outerresult.setInner(innerresult);    \
        return outerresult;                   \
    }                                         \
}
This solution, however, requires some memory management etc.
Some purists argue that having no explicit returns hinders the readability of the code. In my opinion, however, having an explicit RETURN as part of the macro name is enough to prevent confusion for any skilled and attentive developer.
My opinion is that such macros don't obfuscate the program logic, but on the contrary make it cleaner. With such a macro, you declare your intent in a clear and concise way, while the other way seems overly verbose and therefore error-prone. Making maintainers mentally parse the same construct OpResult r = call(); if (r.failed()) return r; over and over is a waste of their time.
An alternative approach without early returns is to apply to each code line a pattern like CHECKEDCALL(r, call) with #define CHECKEDCALL(r, call) do { if (r.succeeded()) r = call; } while(false). This is in my eyes much, much worse and definitely error-prone, as people tend to forget to add CHECKEDCALL() when adding more code.
Having a popular need to do checked returns (or everything) with macros seems to me a slight sign of a missing language feature.
As long as the macro definition sits in an implementation file and is undefined as soon as unnecessary, I wouldn't be horrified.
// something.cpp
#define RETURN_IF_FAILED() /* ... */
void f1 () { /* ... */ }
void f2 () { /* ... */ }
#undef RETURN_IF_FAILED
However, I would only use this after having ruled out all non-macro solutions.
After 10 years, I'm going to answer my own question to my satisfaction, if only I had a time machine ...
I encountered a similar situation many times in new projects. Even when exceptions were allowed, I didn't want to always use them for "normal" failures.
I eventually discovered a way to write this kind of statement.
For a generic Result that includes a message, I use this:
class Result
{
public:
    enum class Enum
    {
        Undefined,
        Meaningless,
        Success,
        Fail,
    };

    static constexpr Enum Undefined   = Enum::Undefined;
    static constexpr Enum Meaningless = Enum::Meaningless;
    static constexpr Enum Success     = Enum::Success;
    static constexpr Enum Fail        = Enum::Fail;

    Result() = default;
    Result(Enum result) : result(result) {}
    Result(const LocalisedString& message) : result(Fail), message(message) {}
    Result(Enum result, const LocalisedString& message) : result(result), message(message) {}

    bool isDefined() const { return this->result != Undefined; }
    bool succeeded() const { assert(this->result != Undefined); return this->result == Success; }
    bool isMeaningless() const { assert(this->result != Undefined); return this->result == Enum::Meaningless; }
    bool failed() const { assert(this->result != Undefined); return this->result == Fail; }

    const LocalisedString& getMessage() const { return this->message; }

private:
    Enum result = Undefined;
    LocalisedString message;
};
And then, I have a special helper class in this form, (similar for other return types)
class Failed
{
public:
    Failed(Result&& result) : result(std::move(result)) {}

    explicit operator bool() const { return this->result.failed(); }
    operator Result() { return this->result; }

    const LocalisedString& getMessage() const { return this->result.getMessage(); }

    Result result;
};
When these are combined, I can write code like this:
if (Failed result = trySomething())
    showError(result.getMessage().str());
Isn't it beautiful?
I agree with Steve's POV.
I first thought, at least reduce the macro to
#define RETURN_IF_FAILED(result) if(result.failed()) return result;
but then it occurred to me this already is a one-liner, so there really is little benefit in the macro.
I think, basically, you have to make a trade-off between writability and readability. The macro is definitely easier to write. It is, however, an open question whether it is also easier to read. The latter is quite a subjective judgment to make. Still, using macros objectively does obfuscate code.
Ultimately, the underlying problem is that you must not use exceptions. You haven't said what the reasons for that decision are, but I surely hope they are worth the problems this causes.
This could be done with C++0x lambdas.
template<typename F> inline OpResult if_failed(OpResult a, F f) {
    if (a.failed())
        return a;
    else
        return f();
}

OpResult something() {
    int mah_var = 0;
    OpResult x = do_something();
    return if_failed(x, [&]() -> OpResult {
        std::cout << mah_var;
        return x; // the original "return f;" did not compile; return whatever OpResult the continuation produces
    });
}
If you're clever and desperate, you could make the same kind of trick work with regular objects.
In my opinion, hiding a return statement in a macro is a bad idea. The 'code obfuscation' (I like that term!) reaches the highest possible level. My usual solution to such problems is to aggregate the function execution in one place and control the result in the following manner (assuming you have 5 nullary functions):
std::array<std::function<OpResult ()>, 5> tFunctions = {
    f1, f2, f3, f4, f5
};

auto tFirstFailed = std::find_if(tFunctions.begin(), tFunctions.end(),
    [] (std::function<OpResult ()>& pFunc) -> bool {
        return pFunc().failed();
    });

if (tFirstFailed != tFunctions.end()) {
    // tFirstFailed is the first function which failed...
}
Is there any information in result which is actually useful if the call fails?
If not, then
static const OpResult error_result = something;
if ( call().failed() ) return error_result;
would suffice.
I have a setup that looks like this.
class Checker
{
    // member data
    Results m_results; // see below

public:
    bool Check();

private:
    bool Check1();
    bool Check2();
    // .. so on
};
Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores. (see below)
bool Checker::Check()
{
    // initialisations etc.
    Check1();
    Check2();
    // ... so on
}
A typical Check function would look like this:
bool Checker::Check1()
{
    double result;
    // lots of code
    m_results.SetCheck1Result(result);
}
And the results class looks something like this:
class Results
{
    double m_check1Result;
    double m_check2Result;
    // ...

public:
    void SetCheck1Result(double d);

    double GetOverallResult()
    { return max(m_check1Result, m_check2Result, ...); }
};
Note: all code is oversimplified.
The Checker and Results classes were initially written to perform all checks and return an overall double result. There is now a new requirement: I only need to know whether any of the results exceeds 1. If one does, subsequent checks need not be carried out (it's an optimisation). To achieve this, I could either:
Modify every CheckN function to check its result and return early; the parent Check function would keep checking m_results. OR
In Results::SetCheckNResult(), throw an exception if the value exceeds 1, and catch it at the end of Checker::Check().
The first is tedious, error-prone, and sub-optimal because every CheckN function further branches out into sub-checks, etc.
The second is non-intrusive and quick. One disadvantage I can think of is that the Checker code may not be exception-safe (although no other exceptions are thrown anywhere else). Is there anything else obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding?
Is there a better 3rd option?
I don't think this is a good idea. Exceptions should be limited to, well, exceptional situations. Yours is a question of normal control flow.
It seems you could very well move all the redundant code dealing with the result out of the checks and into the calling function. The resulting code would be cleaner and probably much easier to understand than non-exceptional exceptions.
Change your CheckX() functions to return the double they produce and leave dealing with the result to the caller. The caller can more easily do this in a way that doesn't involve redundancy.
If you want to be really fancy, put those functions into an array of function pointers and iterate over that. Then the code for dealing with the results would all be in a loop. Something like:
bool Checker::Check()
{
    for( std::size_t idx=0; idx<sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
        double result = check_tbl[idx]();
        if( result > 1 )
            return false; // or whichever way your logic is (an enum might be better)
    }
    return true;
}
Edit: I had overlooked that you need to call one of the N SetCheckXResult() functions, too, which would be impossible to incorporate into my sample code. So either you can shoehorn this into an array, too (change them to SetCheckResult(std::size_t idx, double result)), or you would have to have two function pointers in each table entry:
struct check_tbl_entry {
    check_fnc_t checker;
    set_result_fnc_t setter;
};

check_tbl_entry check_tbl[] = { { &Checker::Check1, &Checker::SetCheck1Result }
                              , { &Checker::Check2, &Checker::SetCheck2Result }
                              // ...
                              };

bool Checker::Check()
{
    for( std::size_t idx=0; idx<sizeof(check_tbl)/sizeof(check_tbl[0]); ++idx ) {
        double result = check_tbl[idx].checker();
        check_tbl[idx].setter(result);
        if( result > 1 )
            return false; // or whichever way your logic is (an enum might be better)
    }
    return true;
}
(And, no, I'm not going to attempt to write down the correct syntax for a member function pointer's type. I've always had to look this up and still never got it right the first time... But I know it's doable; see the sketch below.)
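For the record, a hedged sketch of that member-function-pointer syntax, using the names from the code above (accessibility of the members from within Checker::Check() is assumed):

typedef double (Checker::*check_fnc_t)();             // pointer to member function returning double
typedef void   (Checker::*set_result_fnc_t)(double);  // pointer to member function storing the result

// invocation inside Checker::Check():
double result = (this->*(check_tbl[idx].checker))();
(this->*(check_tbl[idx].setter))(result);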
Exceptions are meant for cases that shouldn't happen during normal operation. They're hardly non-intrusive; their very nature involves unwinding the call stack, calling destructors all over the place, yanking control to a whole other section of code, etc. That stuff can be expensive, depending on how much of it you end up doing.
Even if it were free, though, using exceptions as a normal flow control mechanism is a bad idea for one other, very big reason: exceptions aren't meant to be used that way, so people don't use them that way, so they'll be looking at your code and scratching their heads trying to figure out why you're throwing what looks to them like an error. Head-scratching usually means you're doing something more "clever" than you should be.