Xcode development, can I place #pragma unused(x) via some #define rule - c++

While developing in Xcode it is common to switch between Debug and Release mode, using some parts of the code in Debug mode only while leaving them out of Release mode.
I often strip NSLog calls out via a #define rule that lets the preprocessor remove those commands that are not needed in a Release build. I do this because final testing needs proof that everything works as expected and that errors are still handled properly, without interference from some NSLog I possibly forgot. This is important, for example, in audio development, where logging is generally counterproductive but needed while debugging. Wrapping everything in #ifdef DEBUG is kind of cumbersome and makes the code look wild, so my #defines work well to keep the code simple and readable, without worrying about NSLog calls left in releases while still logging on purpose when needed. This practice works really well for me to get a proper test scenario with pure Release code.
But this leads to compiler warnings that some variables are not used at all. That is not much of a problem, but I want to go one step further and try to get rid of those warnings as well. I could turn those warnings off in Xcode, but I am trying to find a way to keep them and only get rid of the ones caused by my NSLog-overriding #defines.
So instead of logging to /dev/null I throw out (nullify) all code that is wrapped in NSLog(...) and use an extra defined rule called ALLWAYSLog() that keeps NSLog in Release builds on purpose and also changes NSLog to fprintf to avoid the app-origin and timestamp prefixes.
Here are my rules:
#ifdef DEBUG
#define NSLog(FORMAT, ...) fprintf(stderr, "%s \n", [[NSString stringWithFormat:FORMAT, ##__VA_ARGS__] UTF8String])
#else
#define NSLog(FORMAT, ...) {;}
#endif
#define ALLWAYSLog(FORMAT, ...) fprintf(stderr, "%s \n", [[[NSString alloc] initWithFormat:FORMAT, ##__VA_ARGS__] UTF8String])
To get rid of those unused-variable warnings we often use
#pragma unused(variablename)
to inform the compiler that we did this on purpose.
Question:
Is it possible to write some #define rule that makes use of #pragma unused(x)? Or how can the mentioned __unused attribute be integrated in this way?

In the #else case, you can put the function call on the right side of the && operator with 0 on the left side. That will ensure that variables are "used" while also ensuring that the function doesn't actually get called and that the parameters are not evaluated.
#ifdef DEBUG
#define NSLog(FORMAT, ...) fprintf(stderr, "%s \n", [[NSString stringWithFormat:FORMAT, ##__VA_ARGS__] UTF8String])
#else
#define NSLog(FORMAT, ...) (0 && fprintf(stderr, "%s \n", [[NSString stringWithFormat:FORMAT, ##__VA_ARGS__] UTF8String]))
#endif

After testing, and still not believing there is no "official way" of doing this, I ended up reading my header files again (usr/include/sys/cdefs.h),
where __unused is declared as __attribute__((__unused__)).
This seems to be the official way of telling the Apple Clang (GCC) compiler that a specific variable is intentionally unused: place a __unused directive at the right place in the code, for example in front of a variable declaration or after a function declaration. See the Stack Overflow discussion ongoing since 2013.
@dbush's answer was, and is, nice because it suppresses the unused-variable warning by making use of the passed arguments while introducing nullifying logic that does no harm: the expression is never executed, but the compiler still sees it (and may note "Expression result unused"). That was pretty close to my goal and is possibly still the simplest solution.
For example:
#define NSLog(...) (0 && fprintf(stderr,"%s",[[NSString alloc] initWithFormat:__VA_ARGS__].UTF8String))
// applied to
int check = 333;
NSLog(@"findMeAgainInPreProcess %d", check);
// preprocesses to
int check = 333;
(0 && fprintf(__stderrp, "%s", [[NSString alloc] initWithFormat:@"findMeAgainInPreProcess %d", check].UTF8String));
While this will not print anything, the compiler still sees the variables being used, so no warning is issued.
But this left me with the question of how unused marking is done properly. So I tried the reciprocal approach, distinguishing Debug and Release to make use of __unused again, in combination with my first approach, like so...
#define ALLWAYSLog(...) fprintf(stderr,"%s \n",[[NSString alloc] initWithFormat:__VA_ARGS__].UTF8String)
#ifdef DEBUG
#define IN_RELEASE__unused
#define NSLog(...) ALLWAYSLog(__VA_ARGS__)
#else
#define IN_RELEASE__unused __unused
//#define NSLog(...) (0&&ALLWAYSLog(__VA_ARGS__)) //#dbush solution
//#define NSLog(...) NSCAssert(__VA_ARGS__,"")
//#define NSLog(...) {;}
#define NSLog(...) /*__VA_ARGS__*/
#endif
In Debug it will not silence the unused-variable warning, because the directive itself is parsed out; in Release it exchanges IN_RELEASE__unused for __unused according to the macro and silences the warning. This is a little extra work but can help to see which parts are unused on purpose.
That means I can write code like below..
IN_RELEASE__unused int check = 333;
NSLog(@"findMeAgainInPreProcess %d", check);
// produces for DEBUG
int check = 333; //no warning, var is used below
fprintf(__stderrp, "%s \n", [[NSString alloc] initWithFormat:@"findMeAgainInPreProcess %d", check].UTF8String);
// produces for RELEASE
__attribute__((__unused__)) int check = 333; //no warning intentionally
; // no print, nothing
This keeps NSLog in place (in the code), marks the unused variables to silence the warning properly, and NSLog gets parsed out completely in Release. And I can still force prints in both modes with the introduced ALLWAYSLog.
Conclusion: dbush's solution is still more straightforward.

Related

Macro which will not compile the function if not defined

Currently I am using this to show debug output when in debug mode:
#ifdef _DEBUG
#define printX(...) Serial.printf( __VA_ARGS__ )
#else
#define printX(...) NULL
#endif
Yet this still includes printX in the resulting code, and the parameters that are applied to it still consume memory, CPU power, and stack size. So my question is:
Is there a way to have a macro which does not include the function and "ignores" all of its calls in the source when in "release mode", basically not compiling anything related to it?
A macro is not a function. It does not consume any memory, CPU power, or stack size. This is because macros operate entirely at compile time and just act as a text-replacing mechanism. When the program is run, there are no macros to be "called".
The macro
#define printX(...) NULL
replaces the printX function call, with all its arguments, with plain NULL. This is a textual replacement that happens before the compiler ever looks at the code, so any nested calls inside printX, e.g.
printX(someExpensiveCall())
will also be completely eliminated.
In my programs I include a line that says:
#define DEBUG_MODE
and I use it anywhere I want to compile with (or without) debug mode:
#ifdef DEBUG_MODE
print here all the info I need for debug and certainly don't want in released binary.
#endif
Before releasing the final binary I comment out the definition line.

Is it possible to have a zero-cost assert() such that code should not have to be modified between debug and release builds?

I've noticed that some code often looks like this:
#ifdef DEBUG
assert(i == 1);
#endif //DEBUG
and that you may have several blocks of these sitting around in your raw code. Having to write out each block is tedious and messy.
Would it be plausible to have a function like this:
auto debug_assert = [](auto expr) {
#ifdef DEBUG
assert(expr);
#endif //DEBUG
};
or something like this:
#ifdef DEBUG
auto debug_assert = [](bool expr) {
assert(expr);
};
#else //DEBUG
void debug_assert(bool expr) {}
#endif //DEBUG
to get a zero-cost assert when the DEBUG flag is not specified? (i.e. it should have the same effect as if it was not put into the code without the lambda running, etc. and be optimized out by the g++/clang compilers).
As mentioned by @KerrekSB, you can already disable asserts by defining NDEBUG before including <cassert>. The best way to ensure that it is defined before including the header file is to pass it as an argument to the compiler (with gcc it's -DNDEBUG).
Note: the assert is removed by replacing it with a no-op expression, and there the argument isn't evaluated at all (which is different from your suggested solution)! This is why it is of utmost importance not to call any functions that have side effects in assert.
For completeness: here is how assert can be implemented:
#include <cstdio>
#include <cstdlib>
#ifndef NDEBUG
#define assert(EXPRESSION) ((EXPRESSION) ? (void)0 : (printf("assertion failed at line %d, file %s: %s\n", __LINE__, __FILE__, #EXPRESSION), exit(-1)))
#else
#define assert(EXPRESSION) (void)0
#endif
Introducing your own assert-style macro is very commonly done. There are quite a lot of reasons you may want to do this:
you want to include more information about the evaluated expression (see Catch's REQUIRE and how they use expression templates to decompose the expression into individual elements and stringify them)
you want to perform an action other than exit()ing the program, like throwing an exception, mailing the developer, logging to a file, or breaking into the debugger
you want to evaluate the expression even in release builds, which is less error-prone than not evaluating it at all (after all, if it doesn't have side effects, it can be eliminated by compiler optimizations, and if it does, you just avoided a heisenbug)
and so on, and so on (if you have an idea, you can post a comment, I'll add it to the answer)

Is there a visual c++ predefined preprocessor macro that lets you know when the compiler is optimizing

I would like to be able to do something like this using visual c++ compiler (vc12):
// If we have compiled with O2
#ifdef _O2_FLAG_
bool debug_mode = false;
// If we are in dirty slow non optimized land
#else
bool debug_mode = true;
#endif
But I cannot find a predefined macro for this purpose.
Context:
The debug_mode flag is used like:
if (!debug_mode && search_timer->seconds_elapsed() > 20) {
return best_result_so_far;
}
The problem is that in a debug instance that I step through, this constantly fails and bombs me out, because strangely it takes me a lot longer to step through the code than the CPU normally needs to run it :-)
If there is some underlying clock that pauses when the debugger does, that would also solve my problem. Currently I am using the difference between two calls to std::chrono::high_resolution_clock::now().
EDIT:
In response to several comments explaining why I don't want to do what I want to do, I should perhaps reword the question as simply: Is there an equivalent of gcc's __optimize__ in cl?
You could use either _DEBUG or NDEBUG to detect the debug configuration. This technically doesn't mean the same thing as the optimization flag, but 99% of the time this should suffice.
Another option would be to add a preprocessor definition to the project yourself.

increase c++ code verbosity with macros

I'd like to have the possibility to increase the verbosity for debug purposes of my program. Of course I can do that using a switch/flag during runtime. But that can be very inefficient, due to all the 'if' statements I should add to my code.
So, I'd like to add a flag to be used during compilation in order to include optional, usually slow debug operations in my code, without affecting the performance/size of my program when not needed. here's an example:
/* code */
#ifdef _DEBUG_
/* do debug operations here */
#endif
so, compiling with -D_DEBUG_ should do the trick. without it, that part won't be included in my program.
Another option (at least for i/o operations) would be to define at least an i/o function, like
#ifdef _DEBUG_
#define LOG(x) std::clog << x << std::endl;
#else
#define LOG(x)
#endif
However, I strongly suspect this probably isn't the cleanest way to do that. So, what would you do instead?
I prefer to use #ifdef with real functions so that the function has an empty body if _DEBUG_ is not defined:
void log(std::string x)
{
#ifdef _DEBUG_
std::cout << x << std::endl;
#endif
}
There are three big reasons for this preference:
When _DEBUG_ is not defined, the function definition is empty and any modern compiler will completely optimize out any call to that function (the definition should be visible inside that translation unit, of course).
The #ifdef guard only has to be applied to a small localized area of code, rather than every time you call log.
You do not need to use lots of macros, avoiding pollution of your code.
You can use macros to change the implementation of the function (like in sftrabbit's solution). That way, no empty functions will be left in your code, and the compiler will optimize the "empty" calls away.
You can also use two distinct files for the debug and release implementation, and let your IDE/build script choose the appropriate one; this involves no #defines at all. Just remember the DRY rule and make the clean code reusable in debug scenario.
I would say that this actually is very dependent on the actual problem you are facing. Some problems will benefit more from the second solution, whilst simple code might be better off with simple defines.
Both snippets that you describe are correct ways of using conditional compilation to enable or disable the debugging through a compile-time switch. However, your assertion that checking the debug flags at runtime "can be very inefficient, due to all the 'if' statements I should add to my code" is mostly incorrect: in most practical cases a runtime check does not influence the speed of your program in a detectable way, so if keeping the runtime flag offers you potential advantages (e.g. turning the debugging on to diagnose a problem in production without recompiling) you should go for a run-time flag instead.
For the additional checks, I would rely on the assert (see the assert.h) which does exactly what you need: check when you compile in debug, no check when compiled for the release.
For the verbosity, a more C++ version of what you propose would use a simple Logger class with a boolean as template parameter. But the macro is fine as well if kept within the Logger class.
For commercial software, having SOME debug output that is available at runtime on customer sites is usually a valuable thing to have. I'm not saying everything has to be compiled into the final binary, but it's not at all unusual that customers do things to your code that you don't expect [or that causes the code to behave in ways that you don't expect]. Being able to tell the customer "Well, if you run myprog -v 2 -l logfile.txt and do you usual thing, then email me logfile.txt" is a very, very useful thing to have.
As long as the "if statement to decide whether we log or not" is not in the deepest, darkest jungle of Peru, eh, I mean in the deepest nesting levels of your tight, performance-critical loop, then it's rarely a problem to leave it in.
So I personally tend to go for the "always there, not always enabled" approach. That's not to say that I don't find myself adding some extra logging in the middle of my tight loops sometimes, only to remove it later on when the bug is fixed.
You can avoid the function-like macro when doing conditional compilation. Just define a regular or template function to do the logging and call it inside the:
#ifdef _DEBUG_
/* ... */
#endif
part of the code.
At least in the *Nix universe, the default define for this kind of thing is NDEBUG (read no-debug). If it is defined, your code should skip the debug code. I.e. you would do something like this:
#ifdef NDEBUG
inline void log(...) {}
#else
inline void log(...) { .... }
#endif
An example piece of code I use in my projects. This way, you can use variable argument list and if DEBUG flag is not set, related code is cleared out:
#ifdef DEBUG
#define PR_DEBUG(fmt, ...) \
    printf("[DBG] %s: " fmt, __func__, ## __VA_ARGS__)
#else
#define PR_DEBUG(fmt, ...)
#endif
Usage:
#define DEBUG
<..>
ret = do_smth();
PR_DEBUG("some kind of code returned %d", ret);
Output:
[DBG] some_func: some kind of code returned 0
Of course, printf() may be replaced by any output function you use. Furthermore, it can easily be modified so that additional information, for example a time stamp, is automatically appended.
For me it depends from application to application.
I've had applications where I wanted to always log (for example, we had an application where in case of errors, clients would take all the logs of the application and send them to us for diagnostics). In such a case, the logging API should probably be based on functions (i.e. not macros) and always defined.
In cases when logging is not always necessary or you need to be able to completely disable it for performance/other reasons, you can define logging macros.
In that case I prefer a single-line macro like this:
#ifdef NDEBUG
#define LOGSTREAM /##/
#else
#define LOGSTREAM std::clog
// or
// #define LOGSTREAM std::ofstream("output.log", std::ios::out|std::ios::app)
#endif
client code:
LOGSTREAM << "Initializing chipmunk feeding module ...\n";
//...
LOGSTREAM << "Shutting down chipmunk feeding module ...\n";
It's just like any other feature.
My assumptions:
No global variables
System designed to interfaces
For whatever you want verbose output, create two implementations, one quiet, one verbose.
At application initialisation, choose the implementation you want.
It could be a logger, or a widget, or a memory manager, for example.
Obviously you don't want to duplicate code, so extract the minimum variation you want. If you know what the strategy pattern is, or the decorator pattern, these are the right direction. Follow the open closed principle.

Empty "release" ASSERT macro crashes program?

Take a look at this code:
#include <cassert>
#ifdef DEBUG
#define ASSERT(expr) assert(expr)
#else
#define ASSERT(expr)
#endif /* DEBUG */
The program will run only if I have DEBUG defined, otherwise it will hang and terminate with no results. I am using MinGW in Eclipse Indigo CDT. Advice is appreciated!
It's hard to tell without looking at the actual code that is causing the issue. My guess: you are evaluating an expression with side effects within an ASSERT(). For instance, ASSERT( ++i < someotherthing ) within a loop. You could confirm by temporarily modifying the macro definition to just expr in release builds. After confirming this is the cause, go through each and every ASSERT call you are issuing to ensure that the expressions are free of side effects.
You are almost certainly abusing assertions. An assertion expression must never have side effects.
When you say assert(initialize_critical_space_technology()); and then this entire line is omitted in the release build, you can imagine for yourself what will happen.
The only safe and sane way to use assertions is on values:
const bool space_init = initialize_critical_space_technology();
assert(space_init);
Some people introduce a VERIFY macro for something that always executes the code:
#define VERIFY(x) (x) // release
#define VERIFY(x) (assert(x)) // debug