Code coverage (C++ code execution path)

Let's say I have this code:
int function(bool b)
{
    // execution path 1
    int ret = 0;
    if (b)
    {
        // execution path 2
        ret = 55;
    }
    else
    {
        // execution path 3
        ret = 120;
    }
    return ret;
}
I need some sort of mechanism to make sure that the code has gone down every possible path, i.e. execution paths 1, 2 & 3 in the code above.
I thought about having a global function, vector and a macro.
This macro would simply call that function, passing the source file name and the line of code as parameters, and that function would mark that location as "checked" by inserting the info passed by the macro into the vector - something like the sketch below.
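A rough sketch of that idea (all the names here - CHECK_PATH, markChecked, checkedPaths - are invented purely for illustration):
#include <string>
#include <vector>

struct PathHit { std::string file; int line; };

std::vector<PathHit>& checkedPaths()          // global registry of visited points
{
    static std::vector<PathHit> v;
    return v;
}

inline void markChecked(const char* file, int line)
{
    PathHit hit = { file, line };
    checkedPaths().push_back(hit);
}

#define CHECK_PATH() markChecked(__FILE__, __LINE__)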
The problem is that I will not see anything about paths that did not "check".
Any idea how I can do this? How do I "register" a line of code at compile time, so that at run time I can see that it hasn't been "checked" yet?
I hope I'm clear.

Coverage utilities (such as gcov) are usually supplied with the compiler. However, please note that they will usually give you only C0 coverage. I.e.
C0 - every line is executed at least once. Note that a ? b : c is marked as executed even if only one branch has been used.
C1 - every branch is executed at least once.
C2 - every path is executed at least once.
So even if your tests show 100% C0 coverage you may not catch every path in the code - and you probably don't have time to do that (the number of paths grows exponentially with the number of branches). However, it is good to know whether you have 10% C2 or 70% C2 (or 0.1% C2).
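To make the C0/C1 difference concrete on a single line (just a sketch, not tied to any particular tool's report format):
int abs_value(int x)
{
    // C0 (line coverage): this line counts as covered after abs_value(5) alone.
    // C1 (branch coverage): both abs_value(5) and abs_value(-5) are needed,
    // because the conditional expression has two branches.
    return x < 0 ? -x : x;
}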

Quite often there will be a utility supplied with your compiler to do this sort of code coverage analysis. For example, GCC has the gcov utility.

You need a code coverage program (gcov, Bullseye, DevPartner) and unit testing (UnitTest++, CppUnit, etc.). You write tests that exercise that function.
TEST( UnitTestFunction )
{
    CHECK( function(true) == 55 );
    CHECK( function(false) == 120 );
}
Then unit tests in this case do not just check for integrity (though they still do) but they also test for coverage.

Try SD C++ TestCoverage for a Visual Studio compatible test coverage tool. I believe it will in fact tell you about the test coverage of a?b:c, too.

The problem is that I will not see anything about paths that did not "check".
If this means that you are not only looking for the set of code points that are actually executed but also for the set of code points that have been "marked" somehow as expected to be executed, so that you can finally report the difference, I might have a rather dangerous solution. It works for me on MSVC 2010 and 2013.
The approach is to make use of the initialization of static variables before program start. Since all code points are inside functions, the "static anchor point" has to be put there somehow, which means the C++ rule of deferred initialization of function-local statics has to be worked around.
This seems to be possible by adding an indirection through a template class (X) with a static member variable (progloc_) to enforce initialization per template parameter, where the template parameter in turn is a wrapper struct that transports the needed information (__FILE__ " at line " __LINE__).
Putting this together, the most important code to achieve this could look like the following:
template <class T> class X {
public:
    static T progloc_;
};

template <class T> T X<T>::progloc_;

#define TRACE_CODE_POINT \
    struct ProgLocation { \
    public: \
        std::string loc_; \
        ProgLocation() : loc_(std::string(__FILE__ " at line " S__LINE__)) \
        { \
            TestFw::CodePoints::Test::imHere(loc_); \
        } \
    }; \
    TestFw::CodePoints::X<ProgLocation> dummy; \
    TestFw::CodePoints::Test::iGotCalled(dummy.progloc_.loc_);
The S__LINE__ trick used in the ProgLocation ctor comes from here on SO.
#define S(x) #x
#define S_(x) S(x)
#define S__LINE__ S_(__LINE__)
To track, the following is used:
class Test
{
private:
    typedef std::set<std::string> TFuncs;
    static TFuncs registeredFunctions;
    static TFuncs calledFunctions;

public:
    static int imHere(const std::string fileAndLine)
    {
        assert(registeredFunctions.find(fileAndLine) == registeredFunctions.end());
        registeredFunctions.insert(fileAndLine);
        return 0;
    }

    static void iGotCalled(const std::string fileAndLine)
    {
        if (calledFunctions.find(fileAndLine) == calledFunctions.end())
            calledFunctions.insert(fileAndLine);
    }

    static void report()
    {
        for (TFuncs::const_iterator rfIt = registeredFunctions.begin(); rfIt != registeredFunctions.end(); ++rfIt)
            if (calledFunctions.find(*rfIt) == calledFunctions.end())
                std::cout << (*rfIt) << " didn't get called" << std::endl;
    }
};
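A usage sketch for the function from the question could then look like this (assuming the macro and the Test class live in the TestFw::CodePoints namespace, that the two static sets are defined once in some .cpp file, and ignoring static initialization order concerns):
int function(bool b)
{
    TRACE_CODE_POINT   // execution path 1
    int ret = 0;
    if (b)
    {
        TRACE_CODE_POINT   // execution path 2
        ret = 55;
    }
    else
    {
        TRACE_CODE_POINT   // execution path 3
        ret = 120;
    }
    return ret;
}

int main()
{
    function(true);                       // execution path 3 is never taken
    TestFw::CodePoints::Test::report();   // reports the line of path 3 as "didn't get called"
    return 0;
}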
Maybe there are problems connected with this approach that I don't see yet and that make it impracticable for your case, and as others have pointed out, using existing code analysis and coverage tools is for most situations the better solution.
EDIT:
Just found out that the provided solution has been discussed before in another context:
non-deferred-static-member-initialization-for-templates-in-gcc

You can use the __FILE__ and __LINE__ preprocessor macros:
#define TRACE(msg) MyTraceNotify(msg,__FILE__,__LINE__)
Just insert the TRACE(msg) macro in your code at the places you want to track, with your custom message, and write your MyTraceNotify function.
void MyTraceNotify(const char *msg, const char *filename, ULONG line)
{
/* Put your code here... */
}
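A possible body for the notify function (just a sketch; in the spirit of the question, you could also store the file/line pairs in a container and later diff them against the set of expected trace points):
#include <cstdio>

void MyTraceNotify(const char *msg, const char *filename, ULONG line)   // ULONG as in the declaration above
{
    std::fprintf(stderr, "%s(%lu): %s\n", filename, (unsigned long)line, msg);
}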

silence warnings about unused variables/functions at the point of their conditionally compiled usage

So in doctest (my testing framework) the user can disable all tests by defining the DOCTEST_CONFIG_DISABLE identifier which makes the following code and macros:
TEST_CASE("name") {
    int a = 5;
    int b = 6;
    CHECK(a == b);
}
turn into the following after the preprocessor:
template<typename T>
void some_anon_func_123() {
    int a = 5;
    int b = 6;
}
That means the self-registering test case is turned into an uninstantiated template function and the CHECK() macro (which acts as an if statement checking the condition) into a no-op - like this:
#define CHECK(x) ((void)0) // if disabled
However, if the user has factored such testing code into a separate function like this:
static int g() {
    std::cout << "called!" << std::endl;
    return 42;
}

static void f() {
    int a = 5;
    CHECK(a == g());
}

TEST_CASE("name") {
    f();
}
then there will be warnings for unused functions and unused variables. doctest prides itself on producing 0 warnings even at the most aggressive warning levels, so this is unacceptable.
I tried using the ((void) ...) trick by passing it the macro argument like this:
#define CHECK(x) ((void)(x))
and that indeed silenced the warnings (at least for a and g()), but there is still code being generated for that statement - if I invoke the f() function from my main() I will see the called! string printed in the console. This is undesirable since I want compilation to be as fast as possible when test cases and asserts are disabled from the build (by using the DOCTEST_CONFIG_DISABLE identifier). If a user has 100 000 asserts and builds with them disabled, he wouldn't want all that unnecessary codegen and compile-time overhead for macros that are supposed to be disabled (the CHECK() one).
__attribute__((unused)) has to be used at the point of declaration of a variable - I cannot stick it in the CHECK() macro (or can I? I don't know...).
Not sure if _Pragma() could help - and even if it could - it is known to have issues with GCC:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55578
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69543
Is there a solution to my problem - like perhaps passing the expression to some template or whatever...? (C++98 solution needed)
I explained my problem in excruciating detail only because I often get accused of the XY problem...
EDIT:
A C++11 solution is OK too - some C++11 features have started to conditionally creep into the library anyway...
So, you want to "lie" to the compiler that you're using a function which you're not actually calling. How do you use a piece of code without executing it?
It seems that the only thing that works on all popular compilers is a C++11-only solution - a lambda which is never called:
#define CHECK(x) [&](){ ((void)(x)); }
If you absolutely need a c++98 solution, a sizeof will also work on many compilers, MSVC being a notable exception:
#define CHECK(x) sizeof(x)
MSVC will still warn about uncalled functions referenced in the expression x.
I guess for maximum coverage you could employ a combination of the two.
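A combined sketch of the disabled-CHECK() definition might look like this (untested on every compiler; note that MSVC reports an old __cplusplus value unless /Zc:__cplusplus is given, so a real implementation would probably also check _MSC_VER):
#if defined(__cplusplus) && __cplusplus >= 201103L
// C++11 and later: a lambda that is never called, but still "uses" x.
#define CHECK(x) [&]() { ((void)(x)); }
#else
// C++98 fallback: sizeof keeps x unevaluated; MSVC may still warn here.
#define CHECK(x) ((void)sizeof(x))
#endif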

Way to toggle debugging code on and off

I was programming a Manchester decoding algorithm for Arduino, and I often had to print debug output while trying to get things working, but printing to serial and the string constants add a lot of overhead, so I can't just leave it all in the final binary.
I usually just go through the code removing all the debug-related lines.
I'm looking for a way to easily turn them on and off.
The only way I know is this
#if VERBOSE==1
Serial.println();
Serial.print(s);
Serial.print(" ");
Serial.print(t);
Serial.print(" preamble");
#endif
...
#if VERBOSE==1
Serial.println(" SYNC!\n");
#endif
and on top of the file I can just have
#define VERBOSE 0 // 1 to debug
I don't like how much clutter it adds to single liners. I was very tempted to do something very nasty like this. But yeah, evil.
Change every debug output to
verbose("debug message");
then use
#define verbose(x) Serial.print(x) //debug on
or
#define verbose(x) //debug off
Is there a C++ feature that allows me to just do this instead of using the preprocessor?
At the risk of sounding silly: Yes, there is a C++ feature for this, it looks like this:
if (DEBUG)
{
// Your debugging stuff here…
}
If DEBUG is a compile-time constant (I think using a macro is reasonable but not required in this case), the compiler will almost certainly generate no code (not even a branch) for the debugging stuff if DEBUG is false at compile time.
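For instance (a sketch; DEBUG would normally come from the build system, decodeBit is a made-up function, and on Arduino the printf would be a Serial.print call):
#include <cstdio>

#ifndef DEBUG
#define DEBUG 0   // pass -DDEBUG=1 (or edit this) to enable the block below
#endif

void decodeBit(int sample)
{
    if (DEBUG)
    {
        // Unlike #if'd-out code, this block is always parsed and type-checked,
        // but the optimizer removes it entirely when DEBUG is 0.
        std::printf("sample: %d\n", sample);
    }
    // ... actual decoding work ...
}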
In my code, I like having several debugging levels. Then I can write things like this:
if (DEBUG_LEVEL >= DEBUG_LEVEL_FINE)
{
// Your debugging stuff here…
}
Again, the compiler will optimize away the entire construct if the condition is false at compile-time.
You can even get more fancy by allowing a two-fold debugging level. A maximum level enabled at compile-time and the actual level used at run-time.
if (MAX_DEBUG >= DEBUG_LEVEL_FINE && Config.getDebugLevel() >= DEBUG_LEVEL_FINE)
{
// Your debugging stuff here…
}
You can #define MAX_DEBUG to the highest level you want to be able to select at run-time. In an all-performance build, you can #define MAX_DEBUG 0 which will make the condition always false and not generate any code at all. (Of course, you cannot select debugging at run-time in this case.)
However, if squeezing out the last instruction is not the most important issue and all your debugging code does is some logging, then the usual pattern looks like this:
class Logger
{
public:
    enum class LoggingLevel { ERROR, WARNING, INFO, … };
    void logError(const std::string&) const;
    void logWarning(const std::string&) const;
    void logInfo(const std::string&) const;
    // …
private:
    LoggingLevel level_;
};
The various functions then compare the current logging level to the level indicated by the function name and if it is less, immediately return. Except in tight loops, this will probably be the most convenient solution.
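A sketch of one of those member functions, referring to the Logger class above and assuming the enumerators are declared in order of increasing verbosity:
#include <iostream>
#include <string>

void Logger::logInfo(const std::string& msg) const
{
    if (level_ < LoggingLevel::INFO)
        return;                            // run-time check: cheap, but not free
    std::clog << "[info] " << msg << '\n';
}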
And finally, we can combine both worlds by providing inline wrappers for the Logger class.
class Logger
{
public:
    enum class LoggingLevel { ERROR, WARNING, INFO, … };

    void
    logError(const char *const msg) const
    {
        if (COMPILE_TIME_LOGGING_LEVEL >= LoggingLevel::ERROR)
            this->log_(LoggingLevel::ERROR, msg);
    }

    void
    logError(const std::string& msg) const
    {
        if (COMPILE_TIME_LOGGING_LEVEL >= LoggingLevel::ERROR)
            this->log_(LoggingLevel::ERROR, msg.c_str());
    }

    // …

private:
    LoggingLevel level_;

    void
    log_(LoggingLevel, const char *) const;
};
As long as evaluating the function arguments for your Logger::logError etc calls does not have visible side-effects, chances are good that the compiler will eliminate the call if the conditional in the inline function is false. This is why I have added the overloads that take a raw C-string to optimize the frequent case where the function is called with a string literal. Look at the assembly to be sure.
Personally I wouldn't have a lot of #ifdef DEBUG scattered around my code:
#ifdef DEBUG
printf("something");
#endif
// some code ...
#ifdef DEBUG
printf("something else");
#endif
rather, I would wrap it in a function:
void DebugPrint(const char *debugText) // ToDo: make it variadic [1]
{
#ifdef DEBUG
    printf(debugText);
#endif
}

DebugPrint("something");
// some code ...
DebugPrint("something else");
If you don't define DEBUG then the macro preprocessor (not the compiler) won't expand that code.
The slight downside of my approach is that, although it makes your code cleaner, it imposes an extra function call, even if DEBUG is not defined. It is possible that a smart linker will realize that the called function is empty and will remove the function calls, but I wouldn't bank on it.
References:
“Variadic function” in: Wikipedia, The Free Encyclopedia.
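For completeness, a variadic version of the wrapper could look like this (a sketch using C-style varargs, as described in the referenced article):
#include <cstdarg>
#include <cstdio>

void DebugPrint(const char *format, ...)
{
#ifdef DEBUG
    va_list args;
    va_start(args, format);
    std::vprintf(format, args);
    va_end(args);
#else
    (void)format;   // keep release builds free of unused-parameter warnings
#endif
}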
I would also suggest using inline functions which become empty if a flag is set. Why empty when the flag is set? Because you usually want debugging always on unless you compile a release build.
Because NDEBUG is already used for this, you could use it too, to avoid having multiple different flags. Defining a debug level is also very useful.
One more thing: be careful using functions which are altered by macros! You could easily violate the One Definition Rule by translating some parts of your code with debugging enabled and others with it disabled.
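A sketch of such an inline function, gated on NDEBUG as suggested (every translation unit must see the same NDEBUG setting, or you run into exactly the ODR problem mentioned above):
#include <cstdio>

inline void debugPrint(const char *msg)
{
#ifndef NDEBUG
    std::printf("%s\n", msg);
#else
    (void)msg;   // becomes an empty function in release builds
#endif
}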
You might follow the convention of assert(3) and wrap debugging code with
#ifndef NDEBUG
DebugPrint("something");
#endif
See here (on StackOverflow, which would be a better place to ask) for a practical example.
In a more C++ like style, you could consider
#ifdef NDEBUG
#define debugout(Out) do{} while(0)
#else
extern bool dodebug;
#define debugout(Out) do {if (dodebug) { \
    std::cout << __FILE__ << ":" << __LINE__ \
              << " " << Out << std::endl; \
}} while(0)
#endif
then use debugout("here x=" << x) in your program. YMMV. (you'll set your dodebug flag either thru a gdb command or thru some program argument, perhaps parsed using getopt_long(3), at least on Linux).
PS. Remember that do{...}while(0) is an old trick for making a macro behave like a robust single statement (suitable in every position where a plain statement is, e.g. as the then or else part of an if, etc.).
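The classic illustration is a brace-less if/else around such a macro (x here is just an illustrative variable):
if (x > 0)
    debugout("positive, x=" << x);     // expands to exactly one statement
else
    debugout("non-positive, x=" << x);
// Without the do { ... } while(0), the macro body's closing '}' followed by
// the ';' would terminate the if early and leave the else without a matching if.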
You could also use templates together with the constexpr if feature of C++17. You don't have to worry about the preprocessor at all, but your declaration and definition have to be in the same place when using templates.
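A sketch of that C++17 approach (the template parameter stands in for the VERBOSE flag):
#include <iostream>

template <bool Verbose>
void debugPrint(const char* msg)
{
    if constexpr (Verbose)
        std::cout << msg << '\n';
    // When Verbose is false, the branch is discarded at compile time
    // (it still has to be syntactically valid, but generates no code).
}

// usage: debugPrint<true>("SYNC!");   or   debugPrint<VERBOSE>("SYNC!");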

C++ macro to log every line of code

During one of my recent discussions with my manager, he mentioned that one of his former clients used a C++ macro to log info about every line of code. All they had to do was enable an environment variable before starting the run. (Of course, the environment variable was enabled in the test bed alone.)
The log mentioned the variables used and their corresponding values too.
For example, for the line:
a = a + b;
The log would say something like:
"a = a + b; (a = 5 + 3)"
Personally, I was not sure if this was possible, but he was very sure of this having existed, though he did not remember the specifics of the code.
So, here is the (obvious) question: Is this possible? Can you provide the code for this one?
I don't know if every line/variable can be expanded like that, but function calls can be logged. I have logged all function calls using the -finstrument-functions option of gcc. It will call:
void __cyg_profile_func_enter (void *this_fn, void *call_site);
and
void __cyg_profile_func_exit (void *this_fn, void *call_site);
for function enter and exit.
The docs explain how to use it. I don't know if other compilers offer something similar.
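A minimal pair of hooks could look like this (a sketch; the no_instrument_function attribute stops the hooks from instrumenting themselves, and the printed addresses can be resolved with a tool such as addr2line):
#include <cstdio>

extern "C"
{
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site)
    {
        std::fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site)
    {
        std::fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
    }
}
// Build the code to be traced with: g++ -finstrument-functions ...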
You may check how BOOST_CHECKA from Boost.Test is implemented. Internally it uses expression templates.
For test:
#define BOOST_TEST_MAIN
#include <boost/test/included/unit_test.hpp>
#include <boost/test/test_tools.hpp>

BOOST_AUTO_TEST_CASE(test1)
{
    int a=0;
    int b=1;
    int c=2;
    BOOST_CHECKA( a+b == c );
}
Output is:
Running 1 test case...
main.cpp(11): error: in "test1": check a+b == c failed [0+1!=2]
*** 1 failure detected in test suite "Master Test Suite"
Note values in square brackets: [0+1!=2]
It has some limitations.
For test:
BOOST_CHECKA( (a+b) == c );
output is:
check (a+b) == c failed [1!=2]

Calling a function immediately before main

Is it possible to register a function to be run immediately before main is entered? I know that all global objects are created before entering main, so I could put the code in the constructor of a global object, but that does not guarantee any particular order. What I would like to do is put some registration code into the constructor, but alas, I don't know what to put there :) I guess this is highly system-specific?
If you're using gcc, you can use the constructor attribute on a function to have it called before main (see the documentation for more details).
constructor
destructor
The constructor attribute causes the function to be called automatically before execution enters main (). Similarly, the destructor attribute causes the function to be called automatically after main () has completed or exit () has been called. Functions with these attributes are useful for initializing data that will be used implicitly during the execution of the program.
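For example (GCC/Clang specific; a minimal sketch):
#include <cstdio>

__attribute__((constructor))
static void runs_before_main()
{
    std::puts("called before main()");
}

int main()
{
    std::puts("inside main()");
}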
Not sure this is exactly what you want... But it should do the job.
int main() {
    static int foo = registerSomething();
}
It's better to explicitly call such registration functions, either in main or on first access (but first-access initialization could pose issues if you're multithreaded).
I am guessing here but:
You want to register something in a different compilation unit
You ran into a problem with the registration because the global variables in which you're saving registrations were not yet constructed.
C++ guarantees that a function-local static is initialized before it is first used, so you can work around the ordering problem in the way shown below.
typedef std::map<std::string, std::string> RegistrationCache;

RegistrationCache& get_string_map()
{
    static RegistrationCache cache;
    return cache;
}

class Registration
{
public:
    Registration(std::string name, std::string value)
    {
        get_string_map()[name] = value;
    }
};
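A registration from some other compilation unit is then just a namespace-scope object (a sketch; registerGreeting and the key/value strings are made up):
// in some_other_file.cpp
static Registration registerGreeting("greeting", "hello");

// Because get_string_map() constructs the map on first use, this works even
// though the order of initialization across translation units is unspecified.
// Afterwards: get_string_map()["greeting"] == "hello"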
Goal
Let's say you want the following:
STATIC_EXECUTE {
    printf("This probably prints first\n");
}

STATIC_EXECUTE {
    printf("But order isn't guaranteed in the language spec, IIRC\n");
}

int main(int argc, char **argv) {
    printf("This definitely prints last. Buh Bye.\n");
}
Implementation
C++ version - static variable + constructor:
// This is some crazy magic that produces __StaticExecute__247
// Vanilla interpolation of __StaticExecute__##__LINE__ would produce __StaticExecute____LINE__
// I still can't figure out why it works, but it has to do with macro resolution ordering
// If you already have Boost, you can omit this part
#define BOOST_PP_CAT(a, b) BOOST_PP_CAT_I(a, b)
#define BOOST_PP_CAT_I(a, b) BOOST_PP_CAT_II(~, a ## b)
#define BOOST_PP_CAT_II(p, res) res
// This avoids repeating the BOOST_PP_CAT 5X
#define STATIC_EXECUTE \
    STATIC_EXECUTE_I(BOOST_PP_CAT(__StaticExecute__, __LINE__))

// This is the meat, a static instance of a class whose constructor runs your code
#define STATIC_EXECUTE_I(uniq_name) \
    static struct uniq_name { \
        uniq_name(); \
    } BOOST_PP_CAT(uniq_name, __var); \
    uniq_name::uniq_name() // followed by { ... }
C version - static variable + function
// ...
// The meat: a static variable initialized from a function call
#define STATIC_EXECUTE_I(uniq_name) \
    static void uniq_name (); \
    static int BOOST_PP_CAT(uniq_name, __var) = \
        (uniq_name(), 0); \
    static void uniq_name() // followed by { ... }
Notes
IMHO, the C++ version is slightly more elegant. In theory, it consumes slightly less space. Otherwise, potato, po-tat-oh.
Caveat: I haven't tested the "C" version on a proper C-only compiler. Fingers crossed; post a note if it doesn't work.
Caveat: Compiler portability in general is a tricky thing. I wouldn't be shocked if there's a bug on some other compiler.
The BOOST_PP_CAT code is stolen from boost/preprocessor/cat.hpp. I simplified the implementation, and in the process may have compromised portability. If it doesn't work, try the original (more verbose) implementation, and post a comment below. Or, if you are already using Boost, you can just use their version.
If you are trying to understand the Boost magic, note that (at least for me, and in this scenario), the following also seems to work:
#define BOOST_PP_CAT(a, b) BOOST_PP_CAT_I(a, b)
#define BOOST_PP_CAT_I(a, b) a ## b

C++ code purity

I'm working in a C++ environment and:
a) We are forbidden to use exceptions
b) It is application/data server code that evaluates lots of requests of different kinds
I have a simple class encapsulating the result of a server operation that is also used internally for a lot of functions there.
class OpResult
{
    .....
    bool succeeded();
    bool failed();
    ....
    ... data error/result message ...
};
As I try to keep all functions small and simple, a lot of blocks like this are arising:
....
OpResult result = some_(mostly check)function(....);
if (result.failed())
    return result;
...
The question is, is it bad practice to make a macro like this and use it everywhere?
#define RETURN_IF_FAILED(call) \
{ \
    OpResult result = call; \
    if (result.failed()) \
        return result; \
}
I understand that someone could call it nasty, but is there a better way?
What other way of handling results and avoiding a lot of boilerplate code would you suggest?
It's a trade off. You are trading code size for obfuscation of the logic. I prefer to preserve the logic as visible.
I dislike macros of this type because they break Intellisense (on Windows), and debugging of the program logic. Try putting a breakpoint on all 10 return statements in your function - not the check, just the return. Try stepping through the code that's in the macro.
The worst thing about this is that once you accept this it's hard to argue against the 30-line monster macros that some programmers LOVE to use for commonly-seen mini-tasks because they 'clarify things'. I've seen code where different exception types were handled this way by four cascading macros, resulting in 4 lines in the source file, with the macros actually expanding to > 100 real lines. Now, are you reducing code bloat? No. It's impossible to tell easily with macros.
Another general argument against macros, even if not obviously applicable here, is the ability to nest them with hard-to-decipher results, or to pass in arguments that produce weird but compilable expansions, e.g. the use of ++x in a macro that uses its argument twice. I always know where I stand with plain code, and I can't say that about a macro.
EDIT: One comment I should add is that if you really do repeat this error check logic over and over, perhaps there are refactoring opportunities in the code. Not a guarantee but a better way of code bloat reduction if it does apply. Look for repeated sequences of calls and encapsulate common sequences in their own function, rather than addressing how each call is handled in isolation.
Actually, I prefer a slightly different solution. The thing is that the result of the inner call is not necessarily a valid result of the outer call. For example, the inner failure may be "file not found", but the outer one "configuration not available". Therefore my suggestion is to recreate the OpResult (potentially packing the "inner" OpResult into it for better debugging). This all goes in the direction of "InnerException" in .NET.
Technically, in my case the macro looks like this:
#define RETURN_IF_FAILED(call, outerresult) \
{ \
    OpResult innerresult = call; \
    if (innerresult.failed()) \
    { \
        outerresult.setInner(innerresult); \
        return outerresult; \
    } \
}
This solution, however, requires some memory management etc.
Some purists argue that having no explicit returns hinders the readability of the code. In my opinion, however, having explicit RETURN as part of the macro name is enough to prevent confusion for any skilled and attentive developer.
My opinion is that such macros don't obfuscate the program logic, but on the contrary make it cleaner. With such a macro, you declare your intent in a clear and concise way, while the other way seems overly verbose and therefore error-prone. Making maintainers mentally parse the same construct OpResult r = call(); if (r.failed()) return r; over and over is a waste of their time.
An alternative approach without early returns is applying to each code line a pattern like CHECKEDCALL(r, call) with #define CHECKEDCALL(r, call) do { if (r.succeeded()) r = call; } while(false). This is in my eyes much, much worse and definitely error-prone, as people tend to forget to add CHECKEDCALL() when adding more code.
Having a widespread need to do checked returns (or everything) with macros seems to me like a slight sign of a missing language feature.
As long as the macro definition sits in an implementation file and is undefined as soon as it is no longer needed, I wouldn't be horrified.
// something.cpp
#define RETURN_IF_FAILED() /* ... */
void f1 () { /* ... */ }
void f2 () { /* ... */ }
#undef RETURN_IF_FAILED
However, I would only use this after having ruled out all non-macro solutions.
After 10 years, I'm going to answer my own question to my satisfaction, if only I had a time machine ...
I encountered a similar situation many times in new projects. Even when exceptions are allowed, I don't want to always use them for "normal" failures.
I eventually discovered a way to write this kind of statement.
For generic Result that includes message, I use this:
class Result
{
public:
    enum class Enum
    {
        Undefined,
        Meaningless,
        Success,
        Fail,
    };

    static constexpr Enum Undefined = Enum::Undefined;
    static constexpr Enum Meaningless = Enum::Meaningless;
    static constexpr Enum Success = Enum::Success;
    static constexpr Enum Fail = Enum::Fail;

    Result() = default;
    Result(Enum result) : result(result) {}
    Result(const LocalisedString& message) : result(Fail), message(message) {}
    Result(Enum result, const LocalisedString& message) : result(result), message(message) {}

    bool isDefined() const { return this->result != Undefined; }
    bool succeeded() const { assert(this->result != Undefined); return this->result == Success; }
    bool isMeaningless() const { assert(this->result != Undefined); return this->result == Enum::Meaningless; }
    bool failed() const { assert(this->result != Undefined); return this->result == Fail; }

    const LocalisedString& getMessage() const { return this->message; }

private:
    Enum result = Undefined;
    LocalisedString message;
};
And then, I have a special helper class in this form, (similar for other return types)
class Failed
{
public:
    Failed(Result&& result) : result(std::move(result)) {}

    explicit operator bool() const { return this->result.failed(); }
    operator Result() { return this->result; }

    const LocalisedString& getMessage() const { return this->result.getMessage(); }

    Result result;
};
When these are combined, I can write code like this:
if (Failed result = trySomething())
    showError(result.getMessage().str());
Isn't it beautiful?
I agree with Steve's POV.
I first thought, at least reduce the macro to
#define RETURN_IF_FAILED(result) if(result.failed()) return result;
but then it occurred to me this already is a one-liner, so there really is little benefit in the macro.
I think, basically, you have to make a trade-off between writability and readability. The macro is definitely easier to write. It is, however, an open question whether it is also easier to read. The latter is quite a subjective judgment to make. Still, using macros objectively does obfuscate code.
Ultimately, the underlying problem is that you must not use exceptions. You haven't said what the reasons for that decision are, but I surely hope they are worth the problems this causes.
Could be done with C++0x lambdas.
template<typename F> inline OpResult if_failed(OpResult a, F f) {
    if (a.failed())
        return a;
    else
        return f();
}

OpResult something() {
    int mah_var = 0;
    OpResult x = do_something();
    return if_failed(x, [&]() -> OpResult {
        std::cout << mah_var;
        return OpResult();   // whatever the "success" result should be
    });
}
If you're clever and desperate, you could make the same kind of trick work with regular objects.
In my opinion, hiding a return statement in a macro is a bad idea. The 'code obfuscation' (I like that term..!) reaches the highest possible level. My usual solution to such problems is to aggregate the function execution in one place and control the result in the following manner (assuming you have 5 nullary functions):
std::array<std::function<OpResult ()>, 5> tFunctions = {
    f1, f2, f3, f4, f5
};

auto tFirstFailed = std::find_if(tFunctions.begin(), tFunctions.end(),
    [] (std::function<OpResult ()>& pFunc) -> bool {
        return pFunc().failed();
    });

if (tFirstFailed != tFunctions.end()) {
    // tFirstFailed is the first function which failed...
}
Is there any information in result which is actually useful if the call fails?
If not, then
static const OpResult error_result = something;
if ( call().failed() ) return error_result;
would suffice.