So in doctest (my testing framework) the user can disable all tests by defining the DOCTEST_CONFIG_DISABLE identifier, which makes the following code and macros:
TEST_CASE("name") {
    int a = 5;
    int b = 6;
    CHECK(a == b);
}
turn into the following after the preprocessor:
template<typename T>
void some_anon_func_123() {
    int a = 5;
    int b = 6;
}
That means the self-registering test case is turned into an uninstantiated template function, and the CHECK() macro (which functions as an if statement checking the condition) into a no-op, like this:
#define CHECK(x) ((void)0) // if disabled
However, if the user has factored such testing code into a separate function like this:
static int g() {
    std::cout << "called!" << std::endl;
    return 42;
}

static void f() {
    int a = 5;
    CHECK(a == g());
}

TEST_CASE("name") {
    f();
}
then there will be warnings for unused functions and unused variables. doctest prides itself on producing 0 warnings even at the most aggressive levels, so this is unacceptable.
I tried using the ((void) ...) trick by passing it the macro argument like this:
#define CHECK(x) ((void)(x))
and that indeed silenced the warnings (at least for a and g()), but there is still code being generated for that statement: if I invoke the f() function from my main(), I will see the called! string printed to the console. This is undesirable, since I want compilation to be as fast as possible when test cases and asserts are disabled in the build (via the DOCTEST_CONFIG_DISABLE identifier). If a user has 100 000 asserts and builds with them disabled, he wouldn't want all that unnecessary codegen and compile-time overhead for macros that are supposed to be disabled (the CHECK() one).
__attribute__((unused)) has to be used at the point of declaration of a variable - I cannot stick it in the CHECK() macro (or can I? I don't know...).
Not sure if _Pragma() could help - and even if it could - it is known to have issues with GCC:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55578
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69543
Is there a solution to my problem - like perhaps passing the expression to some template or whatever...? (C++98 solution needed)
I explained my problem in excruciating detail only because I often get accused of the XY problem...
EDIT:
A C++11 solution is OK too - some C++11 features have started to conditionally creep into the library anyway...
So, you want to "lie" to the compiler that you're using a function which you're not actually calling. How do you use a piece of code without executing it?
It seems that the only thing that works on all popular compilers is a C++11-only solution - a lambda which is never called:
#define CHECK(x) [&](){ ((void)(x)); }
If you absolutely need a C++98 solution, sizeof will also work on many compilers, MSVC being a notable exception:
#define CHECK(x) sizeof(x)
MSVC will still warn for uncalled functions in the expression x.
I guess for maximum coverage you could employ a combination of the two.
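For illustration, here is a minimal sketch of such a combination (my own assumption, not from the original answer; C++11 is detected via the __cplusplus macro, and the lambda is wrapped in a statement and cast to void so the unused closure itself triggers no warning):

#if defined(__cplusplus) && __cplusplus >= 201103L
    // C++11 path: a never-called lambda odr-uses everything in x, so nothing
    // is reported as unused, yet the expression x is never executed.
    #define CHECK(x) do { (void)[&](){ ((void)(x)); }; } while(false)
#else
    // C++98 fallback: sizeof is an unevaluated context, so x is not executed;
    // as noted above, MSVC may still warn about uncalled functions inside x.
    #define CHECK(x) ((void)sizeof(x))
#endif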
// Example assert function
inline void assertImpl(bool mExpr, const std::string& mMsg) {
    if(!mExpr) { printMsg(mMsg); abortExecution(); }
}
// Wrapper macro
#ifdef NDEBUG
#define MY_ASSERT(...) do{ ; }while(false)
#else
#define MY_ASSERT(...) assertImpl(__VA_ARGS__)
#endif
Consider the case where mExpr or mMsg are non-pure expressions: is there a way to force the compiler to optimize them out anyway?
bool globalState{false};
bool pureExpression() { return false; }
bool impureExpression() { globalState = !globalState; return false; }
// ...
// This will very likely be optimized out with (NDEBUG == 0)
MY_ASSERT(pureExpression());
// Will this be optimized out with (NDEBUG == 0)
MY_ASSERT(impureExpression());
What do compilers usually do in a situation where an impure expression is "discarded"?
Is there a way to make 100% sure that pure expressions get optimized out?
Is there a way to make 100% sure that impure expressions get optimized out or never get optimized out?
After macro expansion, your call to impureExpression() no longer exists: it's not part of the macro expansion result. If the call to your function isn't there, the function won't be called, on all conforming implementations, at any optimisation level, as long as NDEBUG is defined.
Note: you talk about NDEBUG == 0, but if that is what you want the condition to be, your #ifdef NDEBUG condition is incorrect. #ifdef tests whether the macro is defined, and pays no attention to what the definition is.
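To make the distinction concrete (an illustration, not part of the original answer):

#define NDEBUG 0
#ifdef NDEBUG
// reached: NDEBUG is defined; its value (0) is irrelevant to #ifdef
#endif
#if NDEBUG
// not reached: #if evaluates the expansion, which is 0 here
#endif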
The optimizer is not involved here. In the macro that is enabled with NDEBUG, the arguments are discarded regardless.
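Concretely, with NDEBUG defined, the preprocessor removes the argument before the compiler ever sees it:

// Source:
MY_ASSERT(impureExpression());
// After preprocessing (NDEBUG defined):
do{ ; }while(false);
// impureExpression() appears nowhere in the expansion, so it is never
// called, regardless of optimization level.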
When refactoring some code, I often encounter this:
bool highLevelFunc()
{
    // ...
    bool result = LesserLevelFunc();
    if (!result) return false;
    // ... Keep having fun if we didn't return
}
Is there any way to make this a little more sexy and less verbose? Without any overhead or pitfall, of course.
I can think of a macro
#define FORWARD_IF_FALSE(r) if (!r) return r;
bool highLevelFunc()
{
    // ...
    FORWARD_IF_FALSE(LesserLevelFunc());
    // ...
}
Anything better, i.e. without a preprocessor macro?
To me, "readable" code is sexy. I find the original code more readable than your proposal, since the original uses standard C++ syntax and the latter uses a macro which I'd have to go and look up.
If you want to be more explicit, you could say if (result == false) (or better yet, if (false == result) to prevent a possible assignment-as-comparison bug) but understanding the ! operator is a fairly reasonable expectation in my opinion.
That said, there is no reason to assign the return value to a temporary variable; you could just as easily say:
if (!LesserLevelFunc()) return false;
This is quite readable to me.
EDIT: You could also consider using exceptions instead of return values to communicate failure. If LesserLevelFunc() threw an exception, you would not need to write any special code in highLevelFunc() to check for success. The exception would propagate up through the caller to the nearest matching catch block.
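For illustration, a hypothetical sketch of that variant (somethingWentWrong is a placeholder predicate I've introduced, not part of the original code):

#include <stdexcept>

bool somethingWentWrong();   // hypothetical failure check

void LesserLevelFunc()       // reports failure by throwing, not by returning bool
{
    if (somethingWentWrong())
        throw std::runtime_error("LesserLevelFunc failed");
}

void highLevelFunc()
{
    LesserLevelFunc();       // on failure, the exception propagates to the caller
    // ... keep having fun if nothing threw
}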
Because you might be continuing if LesserLevelFunc returns true, I suggest keeping it pretty close to how it is now:
if (!LesserLevelFunc())
    return false;
First of all, by introducing the macro you are making the code unsafe. Moreover, your macro is invalid as written.
The expression after the negation operator should be enclosed in parentheses:
#define FORWARD_IF_FALSE(r) if (!( r ) ) return r;
Secondly, the macro evaluates r twice. Two calls to a function are not always equivalent to one call of the same function: the function can have side effects, or internal flags that are switched on/off on each call.
So I would keep the code as is, without introducing the macro, because the macro is not equivalent to the semantics of the original code.
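A concrete illustration of the double evaluation (my own example, using the parenthesized fix above):

FORWARD_IF_FALSE(LesserLevelFunc());
// expands to:
if (!( LesserLevelFunc() )) return LesserLevelFunc();
// on failure, LesserLevelFunc() runs twice, repeating any side effects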
Say I have the following function:
Thingy& getThingy(int id)
{
    for ( int i = 0; i < something(); ++i )
    {
        // normal execution guarantees that the Thingy we're looking for exists
        if ( thingyArray[i].id == id )
            return thingyArray[i];
    }

    // If we got this far, then something went horribly wrong and we can't recover.
    // This function terminates the program.
    fatalError("The sky is falling!");

    // Execution will never reach this point.
}
Compilers will typically complain about this, saying that "not all control paths return a value". Which is technically true, but the control paths that don't return a value abort the program before the function ends, and are therefore semantically correct. Is there a way to tell the compiler (VS2010 in my case, but I'm curious about others as well) that a certain control path is to be ignored for the purposes of this check, without suppressing the warning completely or returning a nonsensical dummy value at the end of the function?
You can annotate the function fatalError (its declaration) to let the compiler know it will never return.
In C++11, this would be something like:
[[noreturn]] void fatalError(std::string const&);
Pre C++11, you have compiler specific attributes, such as GCC's:
void fatalError(std::string const&) __attribute__((noreturn));
or Visual Studio's:
__declspec(noreturn) void fatalError(std::string const&);
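For portability across these compilers, the three forms can be wrapped in a macro; a minimal sketch (MY_NORETURN is an illustrative name of my own, not from the original answer):

#if defined(_MSC_VER)
    #define MY_NORETURN __declspec(noreturn)
#elif defined(__GNUC__)
    #define MY_NORETURN __attribute__((noreturn))
#else
    #define MY_NORETURN [[noreturn]]   // assumes a C++11 compiler in the fallback
#endif

MY_NORETURN void fatalError(std::string const&);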
Why don't you throw an exception? That would solve the problem and it would force the calling method to deal with the exception.
If you did manage to haggle the warning out some way or other, you are still left with having to do something with the function that calls getThingy(). What happens when getThingy() fails? How will the caller know? What you have here is an exception (conceptually) and your design should reflect that.
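For illustration, a hedged sketch of the exception-based shape of the question's function (std::runtime_error from <stdexcept> as a stand-in for a dedicated exception type):

Thingy& getThingy(int id)
{
    for ( int i = 0; i < something(); ++i )
    {
        if ( thingyArray[i].id == id )
            return thingyArray[i];
    }
    // Throwing ends every non-returning control path, so the
    // "not all control paths return a value" warning disappears too.
    throw std::runtime_error("The sky is falling!");
}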
You can use a run-time assertion in lieu of your fatalError routine. This would just look like:
#include <cassert>

Thingy& getThingy(int id)
{
    for ( int i = 0; i < something(); ++i )
    {
        if ( thingyArray[i].id == id )
            return thingyArray[i];
    }

    // Clean up and error condition reporting go here.
    assert(false);
}
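A common refinement (an aside of mine, not from the original answer) is to put the message text into the assertion itself, so it appears in the diagnostic when the assert fires:

assert(!"getThingy: the sky is falling!");   // the string literal shows up in the assert output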
I'm working in a C++ environment and:
a) We are forbidden to use exceptions
b) It is application/data server code that evaluates lots of requests of different kinds
I have a simple class encapsulating the result of a server operation that is also used internally by a lot of functions there.
class OpResult
{
    .....
    bool succeeded();
    bool failed();
    ....
    ... data error/result message ...
};
As I try to have all functions small and simple, a lot of blocks like this arise:
....
OpResult result = some_(mostly check)function(....);
if (result.failed())
    return result;
...
The question is, is it bad practice to make a macro like this and use it everywhere?
#define RETURN_IF_FAILED(call) \
{ \
    OpResult result = call; \
    if (result.failed()) \
        return result; \
}
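For illustration, usage then looks like this (checkConnection, loadConfig, and processRequest are hypothetical helpers returning OpResult):

OpResult handleRequest()
{
    RETURN_IF_FAILED(checkConnection());
    RETURN_IF_FAILED(loadConfig());
    return processRequest();   // the final step's OpResult is returned directly
}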
I understand that someone could call it nasty, but is there a better way?
What other way of handling results, avoiding a lot of code bloat, would you suggest?
It's a trade-off. You are trading code size for obfuscation of the logic. I prefer to keep the logic visible.
I dislike macros of this type because they break IntelliSense (on Windows) and debugging of the program logic. Try putting a breakpoint on all 10 return statements in your function - not the check, just the return. Try stepping through the code that's in the macro.
The worst thing about this is that once you accept it, it's hard to argue against the 30-line monster macros that some programmers love to use for commonly seen mini-tasks because they 'clarify things'. I've seen code where different exception types were handled this way by four cascading macros, resulting in 4 lines in the source file that actually expanded to more than 100 real lines. Now, are you reducing code bloat? No. It's impossible to tell easily with macros.
Another general argument against macros, even if not obviously applicable here, is the ability to nest them with hard-to-decipher results, or to pass in arguments that produce weird but compilable code, e.g. the use of ++x in a macro that uses the argument twice. I always know where I stand with plain code, and I can't say that about a macro.
EDIT: One comment I should add is that if you really do repeat this error check logic over and over, perhaps there are refactoring opportunities in the code. Not a guarantee but a better way of code bloat reduction if it does apply. Look for repeated sequences of calls and encapsulate common sequences in their own function, rather than addressing how each call is handled in isolation.
Actually, I prefer a slightly different solution. The thing is that the result of the inner call is not necessarily a valid result of the outer call. For example, the inner failure may be "file not found", but the outer one "configuration not available". Therefore my suggestion is to recreate the OpResult (potentially packing the "inner" OpResult into it for better debugging). This all goes in the direction of "InnerException" in .NET.
Technically, in my case the macro looks like this:
#define RETURN_IF_FAILED(call, outerresult) \
{ \
    OpResult innerresult = call; \
    if (innerresult.failed()) \
    { \
        outerresult.setInner(innerresult); \
        return outerresult; \
    } \
}
This solution, however, requires some memory management etc.
Some purists argue that having no explicit returns hinders the readability of the code. In my opinion, however, having an explicit RETURN as part of the macro name is enough to prevent confusion for any skilled and attentive developer.
My opinion is that such macros don't obfuscate the program logic but, on the contrary, make it cleaner. With such a macro, you declare your intent in a clear and concise way, while the other way seems overly verbose and therefore error-prone. Making the maintainers mentally parse the same construct OpResult r = call(); if (r.failed()) return r; over and over is a waste of their time.
An alternative approach without early returns is applying to each code line a pattern like CHECKEDCALL(r, call) with #define CHECKEDCALL(r, call) do { if (r.succeeded()) r = call; } while(false). This is, in my eyes, much worse and definitely error-prone, as people tend to forget to add CHECKEDCALL() when adding more code.
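A hypothetical usage of that pattern (step1/step2/step3 and succeededResult are placeholders of mine; r is assumed to start in a succeeded state):

OpResult r = succeededResult();
CHECKEDCALL(r, step1());
CHECKEDCALL(r, step2());   // silently skipped if step1() failed
CHECKEDCALL(r, step3());   // easy to forget the wrapper on a new line
return r;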
Having such a popular need to do checked returns (or anything else) with macros seems to me a slight sign of a missing language feature.
As long as the macro definition sits in an implementation file and is undefined as soon as unnecessary, I wouldn't be horrified.
// something.cpp
#define RETURN_IF_FAILED() /* ... */
void f1 () { /* ... */ }
void f2 () { /* ... */ }
#undef RETURN_IF_FAILED
However, I would only use this after having ruled out all non-macro solutions.
After 10 years, I'm going to answer my own question to my satisfaction, if only I had a time machine ...
I encountered a similar situation many times in new projects. Even when exceptions were allowed, I didn't want to always use them for "normal fails".
I eventually discovered a way to write this kind of statement.
For a generic Result that includes a message, I use this:
class Result
{
public:
    enum class Enum
    {
        Undefined,
        Meaningless,
        Success,
        Fail,
    };

    static constexpr Enum Undefined = Enum::Undefined;
    static constexpr Enum Meaningless = Enum::Meaningless;
    static constexpr Enum Success = Enum::Success;
    static constexpr Enum Fail = Enum::Fail;

    Result() = default;
    Result(Enum result) : result(result) {}
    Result(const LocalisedString& message) : result(Fail), message(message) {}
    Result(Enum result, const LocalisedString& message) : result(result), message(message) {}

    bool isDefined() const { return this->result != Undefined; }
    bool succeeded() const { assert(this->result != Undefined); return this->result == Success; }
    bool isMeaningless() const { assert(this->result != Undefined); return this->result == Enum::Meaningless; }
    bool failed() const { assert(this->result != Undefined); return this->result == Fail; }

    const LocalisedString& getMessage() const { return this->message; }

private:
    Enum result = Undefined;
    LocalisedString message;
};
And then I have a special helper class of this form (similar helpers exist for other return types):
class Failed
{
public:
    Failed(Result&& result) : result(std::move(result)) {}
    explicit operator bool() const { return this->result.failed(); }
    operator Result() { return this->result; }
    const LocalisedString& getMessage() const { return this->result.getMessage(); }

    Result result;
};
When these are combined, I can write code like this:
if (Failed result = trySomething())
    showError(result.getMessage().str());
Isn't it beautiful?
I agree with Steve's POV.
My first thought was to at least reduce the macro to
#define RETURN_IF_FAILED(result) if(result.failed()) return result;
but then it occurred to me this already is a one-liner, so there really is little benefit in the macro.
I think, basically, you have to make a trade-off between writability and readability. The macro is definitely easier to write. It is, however, an open question whether it is also easier to read. The latter is quite a subjective judgment to make. Still, using macros objectively does obfuscate code.
Ultimately, the underlying problem is that you must not use exceptions. You haven't said what the reasons for that decision are, but I surely hope they are worth the problems this causes.
Could be done with C++0x lambdas.
template<typename F> inline OpResult if_failed(OpResult a, F f) {
    if (a.failed())
        return a;
    else
        return f();
}
OpResult something() {
    int mah_var = 0;
    OpResult x = do_something();
    return if_failed(x, [&]() -> OpResult {
        std::cout << mah_var;
        return x; // keep having fun here, then return the final result
    });
}
If you're clever and desperate, you could make the same kind of trick work with regular objects.
In my opinion, hiding a return statement in a macro is a bad idea. The 'code obfuscation' (I like that term!) reaches the highest possible level. My usual solution to such problems is to aggregate the function execution in one place and control the result in the following manner (assuming you have 5 nullary functions):
std::array<std::function<OpResult ()>, 5> tFunctions = {
    f1, f2, f3, f4, f5
};

auto tFirstFailed = std::find_if(tFunctions.begin(), tFunctions.end(),
    [] (std::function<OpResult ()>& pFunc) -> bool {
        return pFunc().failed();
    });

if (tFirstFailed != tFunctions.end()) {
    // tFirstFailed is the first function which failed...
}
Is there any information in result which is actually useful if the call fails?
If not, then
static const OpResult error_result = something;
if ( call().failed() ) return error_result;
would suffice.