Does an empty function get optimized out with impure expressions? - c++

// Example assert function
inline void assertImpl(bool mExpr, const std::string& mMsg) {
    if(!mExpr) { printMsg(mMsg); abortExecution(); }
}
// Wrapper macro
#ifdef NDEBUG
#define MY_ASSERT(...) do{ ; }while(false)
#else
#define MY_ASSERT(...) assertImpl(__VA_ARGS__)
#endif
Consider the case where mExpr or mMsg are impure expressions - is there a way to force the compiler to optimize them out anyway?
bool globalState{false};
bool pureExpression() { return false; }
bool impureExpression() { globalState = !globalState; return false; }
// ...
// This will very likely be optimized out with (NDEBUG == 0)
MY_ASSERT(pureExpression());
// Will this be optimized out with (NDEBUG == 0)
MY_ASSERT(impureExpression());
What do compilers usually do in a situation where an impure expression is "discarded"?
Is there a way to make 100% sure that pure expressions get optimized out?
Is there a way to make 100% sure that impure expressions get optimized out or never get optimized out?

After macro expansion, your call to impureExpression() no longer exists: it's not part of the macro expansion result. If the call to your function isn't there, the function won't be called, on all conforming implementations, at any optimisation level, as long as NDEBUG is defined.
Note: you talk about NDEBUG == 0, but if that is what you want the condition to be, your #ifdef NDEBUG condition is incorrect. #ifdef tests whether the macro is defined, and pays no attention to what the definition is.

The optimizer is not involved here. In the macro that is enabled with NDEBUG, the arguments are discarded regardless.

Implicitly cast parameter to bool

Premise:
I am trying to make a Define scope that is not implemented using a macro because of the potential issues with macros. Here is my initial attempt
//version for if not defined
bool Defined()
{
    return false;
}
//version for if defined
bool Defined(bool anything)
{
    return true;
}
And an example use case
if(Defined(_DEBUG))
{
Stuff...
}
which would replace
#ifdef _DEBUG
Stuff...
#endif
or
#define Defined() false
#define Defined(Anything) true
Benefits:
the syntax is cleaner, and it is scoped,
This code is not conditional, so the compiler will be able to easily optimize code sections out.
Issues
There are a few issues with this procedure, the first is the reason for this post.
Question:
You can't pass in anything that is not implicitly convertible to a bool. Is there a way to implicitly convert any object, number, pointer, etc. to a bool? I don't believe there is, but I wanted to make sure before I continued.
You can use a generic template:
template<class T>
bool Defined(T &&) { return true; }

Way to toggle debugging code on and off

I was programming a Manchester decoding algorithm for Arduino, and I often had to print debug output when trying to get things working, but printing to serial and the string constants add a lot of overhead. I can't just leave it in the final binary.
I usually just go through the code removing anything debug related lines.
I'm looking for a way to easily turn it on and off.
The only way I know is this
#if VERBOSE==1
Serial.println();
Serial.print(s);
Serial.print(" ");
Serial.print(t);
Serial.print(" preamble");
#endif
...
#if VERBOSE==1
Serial.println(" SYNC!\n");
#endif
and on top of the file I can just have
#define VERBOSE 0 // 1 to debug
I don't like how much clutter it adds to single liners. I was very tempted to do something very nasty like this. But yeah, evil.
Change every debug output to
verbose("debug message");
then use
#define verbose(x) Serial.print(x) //debug on
or
#define verbose(x) //debug off
Is there a C++ feature that allows me to just do this instead of using the preprocessor?
At the risk of sounding silly: Yes, there is a C++ feature for this, it looks like this:
if (DEBUG)
{
// Your debugging stuff here…
}
If DEBUG is a compile-time constant (I think using a macro is reasonable but not required in this case), the compiler will almost certainly generate no code (not even a branch) for the debugging stuff if DEBUG is false at compile-time.
In my code, I like having several debugging levels. Then I can write things like this:
if (DEBUG_LEVEL >= DEBUG_LEVEL_FINE)
{
// Your debugging stuff here…
}
Again, the compiler will optimize away the entire construct if the condition is false at compile-time.
You can even get more fancy by allowing a two-fold debugging level. A maximum level enabled at compile-time and the actual level used at run-time.
if (MAX_DEBUG >= DEBUG_LEVEL_FINE && Config.getDebugLevel() >= DEBUG_LEVEL_FINE)
{
// Your debugging stuff here…
}
You can #define MAX_DEBUG to the highest level you want to be able to select at run-time. In an all-performance build, you can #define MAX_DEBUG 0 which will make the condition always false and not generate any code at all. (Of course, you cannot select debugging at run-time in this case.)
However, if squeezing out the last instruction is not the most important issue and all your debugging code does is some logging, then the usual pattern looks like this:
class Logger
{
public:
enum class LoggingLevel { ERROR, WARNING, INFO, … };
void logError(const std::string&) const;
void logWarning(const std::string&) const;
void logInfo(const std::string&) const;
// …
private:
LoggingLevel level_;
};
The various functions then compare the current logging level to the level indicated by the function name and if it is less, immediately return. Except in tight loops, this will probably be the most convenient solution.
And finally, we can combine both worlds by providing inline wrappers for the Logger class.
class Logger
{
public:
enum class LoggingLevel { ERROR, WARNING, INFO, … };
void
logError(const char *const msg) const
{
if (COMPILE_TIME_LOGGING_LEVEL >= LoggingLevel::ERROR)
this->log_(LoggingLevel::ERROR, msg);
}
void
logError(const std::string& msg) const
{
if (COMPILE_TIME_LOGGING_LEVEL >= LoggingLevel::ERROR)
this->log_(LoggingLevel::ERROR, msg.c_str());
}
// …
private:
LoggingLevel level_;
void
log_(LoggingLevel, const char *) const;
};
As long as evaluating the function arguments for your Logger::logError etc calls does not have visible side-effects, chances are good that the compiler will eliminate the call if the conditional in the inline function is false. This is why I have added the overloads that take a raw C-string to optimize the frequent case where the function is called with a string literal. Look at the assembly to be sure.
Personally I wouldn't have a lot of #ifdef DEBUG scattered around my code:
#ifdef DEBUG
printf("something");
#endif
// some code ...
#ifdef DEBUG
printf("something else");
#endif
rather, I would wrap it in a function:
void DebugPrint(const char *debugText) // ToDo: make it variadic [1]
{
#ifdef DEBUG
printf(debugText);
#endif
}
DebugPrint("something");
// some code ...
DebugPrint("something else");
If you don't define DEBUG then the macro preprocessor (not the compiler) won't expand that code.
The slight downside of my approach is that, although it makes your code cleaner, it imposes an extra function call even if DEBUG is not defined. It is possible that a smart linker will realize that the called function is empty and will remove the function calls, but I wouldn't bank on it.
References:
“Variadic function” in: Wikipedia, The Free Encyclopedia.
I would also suggest using inline functions which become empty if a flag is set. Why empty when it is set? Because you usually want debugging on at all times unless you compile a release build.
Because NDEBUG is already used, you could reuse it to avoid introducing multiple different flags. A debug level definition is also very useful.
One more thing to say: be careful using functions which are altered by macros! You could easily violate the One Definition Rule by translating some parts of your code with debugging enabled and other parts with it disabled.
You might follow the convention of assert(3) and wrap debugging code with
#ifndef NDEBUG
DebugPrint("something");
#endif
See here (on StackOverflow, which would be a better place to ask) for a practical example.
In a more C++ like style, you could consider
#ifdef NDEBUG
#define debugout(Out) do{} while(0)
#else
extern bool dodebug;
#define debugout(Out) do {if (dodebug) { \
std::cout << __FILE__ << ":" << __LINE__ \
<< " " << Out << std::endl; \
}} while(0)
#endif
then use debugout("here x=" << x) in your program. YMMV. (you'll set your dodebug flag either thru a gdb command or thru some program argument, perhaps parsed using getopt_long(3), at least on Linux).
PS. Remember that the do{...}while(0) is an old trick to make a robust statement-like macro (suitable in every position where a plain statement is, e.g. as the then or else part of an if, etc...).
You could also use templates utilizing the constexpr if feature in C++17. You don't have to worry about the preprocessor at all, but your declaration and definition have to be in the same place when using templates.

Writing debug build only assertion function ignoring side-effects

Today I discovered that some of my assertion functions still exist and are being called in release builds. Here's an example of my assertion function.
bool const isDebugMode()
{
return false; // Will be controlled by preprocessor flag.
}
void assertWithReason(bool const condition, std::string const reason = "")
{
if (isDebugMode() and not condition)
{
abort();
}
}
I think some side effect in the condition expression is preventing the assertion call from being eliminated.
For example,
assertWithReason(glGetError() == GL_NO_ERROR);
I expected this assertion call to be eliminated, but it is not, because the argument is evaluated before the debug-build check.
I am not sure how C++ handles this case, but as C++ is a very strict language, it doesn't seem to be eliminated unless I add some special flag. In any case, I intentionally wrote the assertions to be removed in release builds.
Is it possible to write a function which is surely removed in release build in C++?
Of course I can use preprocessor macro, but I want to avoid using preprocessor macro as much as possible.
I am using Clang, and compiler specific extension (such as GCC attribute) is also fine.
I quite like using macros for this purpose. Yes, I know, Macros are evil, but just like knives (used wrong) are evil, they come in handy if you use them right.
#define MY_ASSERT(x) do {\
    if (is_debug() && !(x)) assertFailed(__FILE__, __LINE__, __FUNCTION__, #x);\
} while(0)
Note the parentheses around x and the missing semicolon after while(0): the caller supplies the semicolon, which keeps the macro safe inside if/else. Now you can also show where it failed (my_drawfunc.cpp:34: my_do_draw(): assertion failed: glGetError() == GL_NO_ERROR, or something like that).
In C++11 you can use lambda expressions. It is likely that constant propagation will ensure is_debug is never evaluated at run time, and even if it is, the lambda is not called.
enum { is_debug = 1 };
template <class F>
void assert(F f) {
    if (is_debug && !f()) abort();
}
{
int x = 6;
assert([&]() { return x == 6; });
}

Emulating GCC's __builtin_unreachable?

I get a whole lot of warnings about switches that only partially cover the range of an enumeration switched over. Therefore, I would like to have a "default" for all those switches and put __builtin_unreachable (a GCC builtin) in that case, so that the compiler knows that case is not reachable.
However, I came to know that GCC4.3 does not support that builtin yet. Is there any good way to emulate that functionality? I thought about dereferencing a null pointer instead, but that may have other undesirable effects/warnings and such. Do you have any better idea?
The upcoming 2023 revision of the C standard (C23, ISO/IEC 9899:2023) is going to have a new macro unreachable
#include <stddef.h>
void unreachable(void);
with the effect of gcc's __builtin_unreachable.
On older C standards, you may be able to call an inline function declared _Noreturn to mark anything after that call as unreachable. The compiler is allowed to throw out any code after such a function. If the function itself is static (and does return), the compiler will usually also inline the function. Here is an example:
static _Noreturn void unreachable() {
return; /* intentional */
}
/* ... */
foo();
bar(); /* should better not return */
unreachable();
baz(); /* compiler will know this is not reachable */
Notice that you invoke undefined behavior if a function marked _Noreturn indeed returns. Be sure that said function will never be called.
Hmm, something like (since __builtin_unreachable() appeared in 4.5):
#define GCC_VERSION (__GNUC__ * 10000 \
+ __GNUC_MINOR__ * 100 \
+ __GNUC_PATCHLEVEL__)
#if GCC_VERSION >= 40500
#define my_unreachable() __builtin_unreachable()
#else
#define my_unreachable() do { printf("Oh noes!!!111\n"); abort(); } while(0)
#endif
Would abort (leaving a core dump) or throw (allowing for alternate data capture) accommodate your needs?
Do you really want to have switch statements that don't cover the full enumeration? I nearly always try to list all the possible cases (as no-ops) with no default case, so that gcc will warn me when new enumerators are added and may need handling, rather than letting them silently fall into the default at compile time.
keep it simple:
assert(false);
or, better yet:
#define UNREACHABLE (!"Unreachable code executed!")
assert(UNREACHABLE);
template<unsigned int LINE> class Unreachable_At_Line {};
#define __builtin_unreachable() throw Unreachable_At_Line<__LINE__>()
Edit:
Since you want unreachable code to be omitted by the compiler, below is the simplest way.
#define __builtin_unreachable() { struct X {X& operator=(const X&); } x; x=x; }
The compiler optimizes away the x = x; statement, especially when it's unreachable. Here is the usage:
int foo (int i)
{
switch(i)
{
case 0: return 0;
case 1: return 1;
default: return -1;
}
__builtin_unreachable(); // never executed; so compiler optimizes away
}
If you put __builtin_unreachable() at the beginning of foo(), then the compiler generates a linker error for the unimplemented operator=. I ran these tests in gcc 3.4.6 (64-bit).

Does "default" switch case disturb jump table optimization?

In my code I'm used to write fall-back default cases containing asserts like the following, to guard me against forgetting to update the switch in case semantics change
switch(mode) {
case ModeA: ... ;
case ModeB: ... ;
case .. /* many of them ... */
default: {
assert(0 && "Unknown mode!");
return ADummyValue();
}
};
Now I wonder whether the artificial fall-back default case will interfere with jump table generation? Imagine "ModeA" and "ModeB" etc. are consecutive, so the compiler could optimize the switch into a table. Since the "default" case contains an actual "return" statement (the assert will disappear in release mode, and the compiler would otherwise moan about a missing return statement), it seems unlikely the compiler optimizes the default branch away.
What's the best way to handle this? A friend recommended replacing "ADummyValue" with a null pointer dereference, so that the compiler, in the presence of undefined behavior, could omit the warning about a missing return statement. Are there better ways to solve this?
If your compiler is MSVC, you can use __assume intrinsic : http://msdn.microsoft.com/en-us/library/1b3fsfxw(v=VS.80).aspx
At least with the compilers I've looked at, the answer is generally no. Most of them will compile a switch statement like this to code roughly equivalent to:
if (mode < modeA || mode > modeLast) {
assert(0 && "Unknown mode!");
return ADummyValue();
}
switch(mode) {
case modeA: ...;
case modeB: ...;
case modeC: ...;
// ...
case modeLast: ...;
}
if you're using "default" (ha!) <assert.h>, the definition's tied to the NDEBUG macro anyway, so maybe just
case nevermind:
#if !defined(NDEBUG)
default:
assert("can" && !"happen");
#endif
}
I only see one solution in case the optimization actually is disturbed: the infamous "#ifndef NDEBUG" around the default case. Not the nicest trick, but clear in this situation.
BTW: did you already have a look at what your compiler does with and without the default case?
If you have a state that should never be reached, then you should kill the program, because it just reached an unexpected state, even in the release mode (you might just be more diplomatic and actually save users data and do all that other nice stuff before going down).
And please don't obsess over micro optimizations unless you actually have measured (using a profiler) that you need them.
The best way to handle this is not to disable the assert. That way you can also keep an eye on possible bugs. Sometimes it is better for the application to crash with a good message explaining what exactly happened than to continue working.
Use compiler extensions:
// assume.hpp
#pragma once
#if defined _MSC_VER
#define MY_ASSUME(e) (__assume(e), (e) ? void() : void())
#elif defined __GNUC__
#define MY_ASSUME(e) ((e) ? void() : __builtin_unreachable())
#else // defined __GNUC__
#error unknown compiler
#endif // defined __GNUC__
-
// assert.hpp
#include <cassert>
#include "assume.hpp"
#undef MY_ASSERT
#ifdef NDEBUG
#define MY_ASSERT MY_ASSUME
#else // NDEBUG
#define MY_ASSERT assert
#endif // NDEBUG