Using a macro in C++ - c++

First - sorry for my bad English :-(
Second - I have an interesting task.
Preface.
The program will run on an ATMega162. We use macros because functions work very slowly - even inline ones...
Task.
I have a macro:
#define ProvSetBit(reg, bit) (((reg) & (1<<(bit))) != 0)
and checking bits turns into a very long, unreadable line:
ProvSetBit(SystemStatus[0], COMMAND_ON_DF);
and #define COMMAND_ON_DF 0u
I want to change it to:
ProvSetBit(COMMAND_ON_DF);
where COMMAND_ON_DF:
#define COMMAND_ON_DF (SystemStatus[0], 0u)
or something like that. But it doesn't work. The compiler reports: "Error[Pe054]: too few arguments in macro invocation". What can you advise me?

If a function is actually inlined, it cannot be slower than a macro doing the same thing - they lead to identical assembly. And a function as trivial as the code you posted is pretty much guaranteed to be inlined. You should ditch the macro and instead investigate why the compiler is not inlining it for you - perhaps you're not passing the proper compilation flags?
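For reference, an inlineable function version of the macro might look like this (a sketch; the bool return and the static inline spelling assume a C++ AVR toolchain, and any modern compiler should inline it at normal optimisation levels):
#include <stdint.h>

// Same operation as ProvSetBit, but type-checked and scoped like normal code.
// With optimisation enabled this compiles to the same bit test as the macro.
static inline bool ProvSetBitFn(uint8_t reg, uint8_t bit)
{
    return (reg & (1u << bit)) != 0;
}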
If you really, really want/have to stick with macros for this one, you can use Boost.Preprocessor to achieve this:
#include "boost/preprocessor/facilities/expand.hpp"
#define ProvSetBit(args) BOOST_PP_EXPAND(ProvSetBit_1 args)
#define ProvSetBit_1(reg, bit) (((reg) & (1<<(bit))) != 0)
#define COMMAND_ON_DF (SystemStatus[0], 0u)
int main()
{
ProvSetBit(COMMAND_ON_DF);
}

Related

Warning unused variable and assert [duplicate]

Sometimes a local variable is used for the sole purpose of checking it in an assert(), like so -
int Result = Func();
assert( Result == 1 );
When compiling code in a Release build, assert()s are usually disabled, so this code may produce a warning about Result being set but never read.
A possible workaround is -
int Result = Func();
if ( Result == 1 )
{
assert( 0 );
}
But it requires too much typing, isn't easy on the eyes and causes the condition to be always checked (yes, the compiler may optimize the check away, but still).
I'm looking for an alternative way to express this assert() in a way that wouldn't cause the warning, but still be simple to use and avoid changing the semantics of assert().
(disabling the warning using a #pragma in this region of code isn't an option, and lowering warning levels to make it go away isn't an option either...).
We use a macro to specifically indicate when something is unused:
#define _unused(x) ((void)(x))
Then in your example, you'd have:
int Result = Func();
assert( Result == 1 );
_unused( Result ); // make production build happy
That way (a) the production build succeeds, and (b) it is obvious in the code that the variable is unused by design, not that it's just been forgotten about. This is especially helpful when parameters to a function are not used.
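For example, the same macro works for a parameter that a callback signature forces on you but that you deliberately ignore (a small sketch; the callback name and signature are made up):
#include <cstdio>

#define _unused(x) ((void)(x)) // as defined above

void on_timer(int id, void *context)
{
    _unused(context);                    // required by the signature, unused by design
    std::printf("timer %d fired\n", id); // placeholder handler body
}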
I couldn't give a better answer than this article, which addresses that problem and many more:
Stupid C++ Tricks: Adventures in assert
#ifdef NDEBUG
#define ASSERT(x) do { (void)sizeof(x);} while (0)
#else
#include <assert.h>
#define ASSERT(x) assert(x)
#endif
As of C++17, the variable can be decorated with an attribute.
[[maybe_unused]] int Result = Func();
assert( Result == 1 );
See https://en.cppreference.com/w/cpp/language/attributes/maybe_unused for details.
This is better than the (void)Result trick because you directly decorate the variable declaration, rather than add something as an afterthought.
You could create another macro that allows you to avoid using a temporary variable:
#ifndef NDEBUG
#define Verify(x) assert(x)
#else
#define Verify(x) ((void)(x))
#endif
// asserts that Func()==1 in debug mode, or calls Func() and ignores return
// value in release mode (any braindead compiler can optimize away the comparison
// whose result isn't used, and the cast to void suppresses the warning)
Verify(Func() == 1);
int Result = Func();
assert( Result == 1 );
This situation means that in release mode, you really want:
Func();
But Func is non-void, i.e. it returns a result, i.e. it is a query.
Presumably, besides returning a result, Func modifies something (otherwise, why bother calling it and not using its result?), i.e. it is a command.
By the command-query separation principle (1), Func shouldn't be a command and a query at the same time. In other words, queries shouldn't have side effects, and the "result" of commands should be represented by the available queries on the object's state.
Cloth c;
c.Wash(); // Wash is void
assert(c.IsClean());
Is better than
Cloth c;
bool is_clean = c.Wash(); // Wash returns a bool
assert(is_clean);
The former doesn't give you any warning of your kind, the latter does.
So, in short, my answer is: don't write code like this :)
Update (1): You asked for references about the Command-Query Separation Principle. Wikipedia is rather informative. I read about this design technique in Object Oriented Software Construction, 2nd Edition by Bertrand Meyer.
Update (2): j_random_hacker comments "OTOH, every "command" function f() that previously returned a value must now set some variable last_call_to_f_succeeded or similar". This is only true for functions that don't promise anything in their contract, i.e. functions that might "succeed" or not, or a similar concept. With Design by Contract, a relevant number of functions will have postconditions, so after "Empty()" the object will be "IsEmpty()", and after "Encode()" the message string will be "IsEncoded()", with no need to check. In the same way, and somewhat symmetrically, you don't call a special function "IsXFeasible()" before each and every call to a procedure "X()"; because you usually know by design that you're fulfilling X's preconditions at the point of your call.
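A tiny sketch of that style (the class and member names here are invented for illustration):
class Message
{
public:
    void Encode() { encoded_ = true; /* ... transform the payload ... */ }
    bool IsEncoded() const { return encoded_; }
private:
    bool encoded_ = false;
};

// The command establishes the postcondition, the query checks it:
// Message m; m.Encode(); assert(m.IsEncoded());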
You could use:
Check( Func() == 1 );
And implement your Check( bool ) function as you want. It may either use assert, or throw a particular exception, write in a log file or to the console, have different implementations in debug and release, or a combination of all.
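A minimal sketch of such a Check function, assuming assert-like behaviour in debug builds and logging in release builds (the exact behaviour is your choice):
#include <cassert>
#include <cstdio>

inline void Check(bool condition)
{
#ifndef NDEBUG
    assert(condition);
#else
    if (!condition)
        std::fprintf(stderr, "Check failed\n"); // or throw, or log to a file
#endif
}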
With C++17 we can do:
[[maybe_unused]] int Result = Func();
though it involves a bit of extra typing compared to an assert substitution. See this answer.
Note: Added this because this is the first Google hit for "c++ assert unused variable".
You should move the assert inside the function, before the return statement(s). That way there is no unreferenced local variable at the call site.
Plus it makes more sense to be inside the function anyway, because that creates a self-contained unit that has its OWN pre- and post-conditions.
Chances are that if the function is returning a value, you should be doing some kind of error checking in release mode on this return value anyway. So it shouldn't be an unreferenced variable to begin with.
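A sketch of what that looks like (Func's body is invented just to show the shape):
#include <cassert>

int Func()
{
    int result = 1;      // ... whatever work Func actually does ...
    assert(result == 1); // check the post-condition where it is established
    return result;
}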
Edit: "but in this case the post-condition should be X" (see comments):
I strongly disagree with this point: one should be able to determine the post-condition from the input parameters and, for a member function, the object's state. If a global variable modifies the output of the function, then the function should be restructured.
Most answers suggest using the static_cast<void>(expression) trick in Release builds to suppress the warning, but this is actually suboptimal if your intention is to make checks truly Debug-only. The goals of an assertion macro in question are:
Perform checks in Debug mode
Do nothing in Release mode
Emit no warnings in all cases
The problem is that the void-cast approach fails to reach the second goal. While there is no warning, the expression that you've passed to your assertion macro will still be evaluated. If you, for example, just do a variable check, that is probably not a big deal. But what if you call some function in your assertion check like ASSERT(fetchSomeData() == data); (which is very common in my experience)? The fetchSomeData() function will still be called. It may be fast and simple or it may not be.
What you really need is not only warning suppression but perhaps more importantly - non-evaluation of the debug-only check expression. This can be achieved with a simple trick that I took from a specialized Assert library:
void myAssertion(bool checkSuccessful)
{
if (!checkSuccessful)
{
// debug break, log or what not
}
}
#define DONT_EVALUATE(expression) \
{ \
true ? static_cast<void>(0) : static_cast<void>((expression)); \
}
#ifdef DEBUG
# define ASSERT(expression) myAssertion((expression))
#else
# define ASSERT(expression) DONT_EVALUATE((expression))
#endif // DEBUG
int main()
{
int a = 0;
ASSERT(a == 1);
ASSERT(performAHeavyVerification());
return 0;
}
All the magic is in the DONT_EVALUATE macro. Logically, the evaluation of your expression is never needed inside it. To strengthen that, the C++ standard guarantees that only one of the branches of the conditional operator will be evaluated. Here is the quote:
5.16 Conditional operator [expr.cond]
logical-or-expression ? expression : assignment-expression
Conditional expressions group right-to-left. The first expression is
contextually converted to bool. It is evaluated and if it is true, the
result of the conditional expression is the value of the second
expression, otherwise that of the third expression. Only one of these
expressions is evaluated.
I have tested this approach in GCC 4.9.0, clang 3.8.0, VS2013 Update 4, VS2015 Update 4 with the harshest warning levels. In all cases there are no warnings and the checking expression is never evaluated in Release build (in fact the whole thing is completely optimized away). Bear in mind though that with this approach you will get in trouble really fast if you put expressions that have side effects inside the assertion macro, though this is a very bad practice in the first place.
Also, I would expect that static analyzers may warn about "result of an expression is always constant" (or something like that) with this approach. I've tested for this with clang, VS2013, VS2015 static analysis tools and got no warnings of that kind.
The simplest thing is to only declare/assign those variables if the asserts will exist. The NDEBUG macro is specifically defined if asserts won't be in effect (done that way round just because -DNDEBUG is a convenient way to disable debugging, I think), so this tweaked copy of @Jardel's answer should work (cf. comment by @AdamPeterson on that answer):
#ifndef NDEBUG
int Result =
#endif
Func();
assert(Result == 1);
or, if that doesn't suit your tastes, all sorts of variants are possible, e.g. this:
#ifndef NDEBUG
int Result = Func();
assert(Result == 1);
#else
Func();
#endif
In general with this stuff, be careful that there's never a possibility for different translation units to be built with different NDEBUG macro states -- especially re. asserts or other conditional content in public header files. The danger is that you, or users of your library, might accidentally instantiate a different definition of an inline function from the one used inside the compiled part of the library, quietly violating the one definition rule and making the runtime behaviour undefined.
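A contrived sketch of that hazard (the header and function are invented):
// checked.h
#include <cassert>

inline int checked_increment(int x)
{
    assert(x >= 0); // compiled out in translation units built with -DNDEBUG
    return x + 1;
}

// If one translation unit includes this header with NDEBUG defined and another
// without, the two inline definitions differ, silently violating the ODR.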
This is a bad use of assert, IMHO. Assert is not meant as an error reporting tool, it's meant to assert preconditions. If Result is not used elsewhere, it's not a precondition.
Presumably you already use a macro to control your assert definition, such as "_ASSERT". If so, you can do this:
#ifdef _ASSERT
int Result =
#endif /*_ASSERT */
Func();
assert(Result == 1);
int Result = Func();
assert( Result == 1 );
Result;
This will make the compiler stop complaining about Result not being used.
But you should think about using a version of assert that does something useful at run-time, like log descriptive errors to a file that can be retrieved from the production environment.
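A rough sketch of such a logging assert (the message format and destination are arbitrary choices; a debug build could additionally break or abort):
#include <cstdio>

#define LOG_ASSERT(cond)                                            \
    do {                                                            \
        if (!(cond))                                                \
            std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",  \
                         #cond, __FILE__, __LINE__);                \
    } while (0)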
I'd use the following:
#ifdef _DEBUG
#define ASSERT(FUNC, CHECK) assert(FUNC == CHECK)
#else
#define ASSERT(FUNC, CHECK)
#endif
...
ASSERT(Func(), 1);
This way, for a release build, the compiler doesn't even need to produce any code for the assert.
If this code is inside a function, then act on and return the result:
bool bigPicture() {
    // Check the result
    bool success = (Func() == 1);
    assert(success && "Bad times");
    // Success is given, so...
    actOnIt();
    // and
    return success;
}
// Value is always computed. We also call assert(value) if assertions are
// enabled. Value is discarded either way. You do not get a warning either
// way. This is useful when (a) a function has a side effect (b) the function
// returns true on success, and (c) failure seems unlikely, but we still want
// to check sometimes.
template < class T >
void assertTrue(T const &value)
{
assert(value);
}
template < class T >
void assertFalse(T const &value)
{
assert(!value);
}
I haven't succeeded in using [[maybe_unused]], but you can use GCC's unused variable attribute:
int Result __attribute__((__unused__)) = Func();
gcc Variable-Attributes

Why are C macros not type-safe?

I have encountered this claim multiple times and can't figure out what it is supposed to mean. Since the resulting code is compiled using a regular C compiler it will end up being type checked just as much (or little) as any other code.
So why are macros not type safe? It seems to be one of the major reasons why they should be considered evil.
Consider the typical "max" macro, versus function:
#define MAX(a,b) a > b ? a : b
int max(int a, int b) {return a > b ? a : b;}
Here's what people mean when they say the macro is not type-safe in the way the function is:
If a caller of the function writes
char *foo = max("abc","def");
the compiler will warn.
Whereas, if a caller of the macro writes:
char *foo = MAX("abc", "def");
the preprocessor will replace that with:
char *foo = "abc" > "def" ? "abc" : "def";
which will compile without errors (the comparison is done on the addresses of the two string literals), but almost certainly not give the result you wanted.
Additionally of course the side effects are different, consider the function case:
int x = 1, y = 2;
int a = max(x++,y++);
the max() function will operate on the original values of x and y and the post-increments will take effect after the function returns.
In the macro case:
int x = 1, y = 2;
int b = MAX(x++,y++);
that second line is preprocessed to give:
int b = x++ > y++ ? x++ : y++;
Again, no compiler warnings or errors but will not be the behaviour you expected.
Macros aren't type safe because they don't understand types.
You can't tell a macro to only take integers. The preprocessor recognises a macro usage and it replaces one sequence of tokens (the macro with its arguments) with another set of tokens. This is a powerful facility if used correctly, but it's easy to use incorrectly.
With a function you can define a function void f(int, int) and the compiler will flag if you try to use the return value of f or pass it strings.
With a macro - no chance. The only check that gets made is that it is given the correct number of arguments. Then it replaces the tokens appropriately and passes the result on to the compiler.
#define F(A, B)
will allow you to call F(1, 2), or F("A", 2) or F(1, (2, 3, 4)) or ...
You might get an error from the compiler, or you might not, if something within the macro requires some sort of type safety. But that's not down to the preprocessor.
You can get some very odd results when passing strings to macros that expect numbers, as the chances are you'll end up using string addresses as numbers without a squeak from the compiler.
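A contrived sketch of that kind of accident (the macro here is made up):
#define ADVANCE(x, n) ((x) + (n))

int i = ADVANCE(5, 2);               // intended numeric use: 7
const char *p = ADVANCE("hello", 2); // compiles silently: pointer arithmetic
                                     // on the string's address, p points at "llo"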
Well they're not directly type-safe... I suppose in certain scenarios/usages you could argue they can be indirectly (i.e. resulting code) type-safe. But you could certainly create a macro intended for integers and pass it strings... the pre-processor handling the macros certainly doesn't care. The compiler may choke on it, depending on usage...
Since macros are handled by the preprocessor, and the preprocessor doesn't understand types, it will happily accept variables that are of the wrong type.
This is usually only a concern for function-like macros, and any type errors will often be caught by the compiler even if the preprocessor doesn't, but this isn't guaranteed.
An example
In the Windows API, if you wanted to show a balloon tip on an edit control, you'd use Edit_ShowBalloonTip. Edit_ShowBalloonTip is defined as taking two parameters: the handle to the edit control and a pointer to an EDITBALLOONTIP structure. However, Edit_ShowBalloonTip(hwnd, peditballoontip); is actually a macro that evaluates to
SendMessage(hwnd, EM_SHOWBALLOONTIP, 0, (LPARAM)(peditballoontip));
Since configuring controls is generally done by sending messages to them, Edit_ShowBalloonTip has to do a typecast in its implementation, but since it's a macro rather than an inline function, it can't do any type checking on its peditballoontip parameter.
A digression
Interestingly enough, sometimes C++ inline functions are a bit too type-safe. Consider the standard C MAX macro
#define MAX(a, b) ((a) > (b) ? (a) : (b))
and its C++ inline version
template<typename T>
inline T max(T a, T b) { return a > b ? a : b; }
MAX(1, 2u) will work as expected, but max(1, 2u) will not. (Since 1 and 2u are different types, max can't be instantiated on both of them.)
This isn't really an argument for using macros in most cases (they're still evil), but it's an interesting result of C and C++'s type safety.
There are situations where macros are even less type-safe than functions. E.g.
void printlog(int iter, double obj)
{
    printf("%.3f at iteration %d\n", obj, iter);
}
Calling this with the arguments reversed will cause truncation and erroneous results, but nothing dangerous. By contrast,
#define PRINTLOG(iter, obj) printf("%.3f at iteration %d\n", obj, iter)
causes undefined behavior. To be fair, GCC warns about the latter, but not about the former, but that's because it knows printf -- for other varargs functions, the results are potentially disastrous.
When a macro is expanded, the preprocessor just does text substitution in your source files. This happens before any compilation, so it is not aware of the data types of anything it changes.
Macros aren't type safe, because they were never meant to be type safe.
The compiler does the type checking after macros have been expanded.
Macros and their expansion are meant as a helper to the ("lazy") author (in the sense of writer/reader) of C source code. That's all.

Does this get optimized out?

My assert macro is like this:
#ifdef DEBUG
#define ASSERT(x) ((void)(!(x) && assert_handler(#x, __FILE__, __LINE__) && (exit(-1), 1)))
#else
#define ASSERT(x) ((void)sizeof(x))
#endif
I thought this was more or less bulletproof but I seem to be using it a lot in the context of asserting the return value of functions which are important for their side effects. If in my release build I end up compiling
ASSERT(fgets(buffer,sizeof(buffer)/sizeof(buffer[0]),file));
which would become
((void)sizeof(fgets(buffer,sizeof(buffer)/sizeof(buffer[0]),file)));
Is there a chance this will get completely optimized out? I am fairly certain that it won't (I'm calling a function, fgets), but what exactly is the condition that assures it? Are there any operations with side effects which the optimizer might throw out?
It has nothing to do with optimization. When you evaluate a sizeof expression, the operand never gets evaluated. For example,
char func(void) { exit(1); }
size_t sz = sizeof(func());
// same as
size_t sz = 1;
If you want to retain the side effects without generating compiler warnings, you can cast to void as Neil G noted in his answer.
The usual semantics of assert are that it disappears entirely in release builds, so it might be better to stick to those semantics and do
#else
#define ASSERT(x)
#endif
If you insist on it not being optimized out, why not just do
#else
#define ASSERT(x) ((void)(x))
#endif
?

Is it possible to set pre-processor conditions within a macro?

I'm in a position where this design would greatly improve the clarity and maintenance-needs for my code.
What I'm looking for is something like this:
#define MY_MACRO(arg) #if (arg)>0 cout<<((arg)*5.0)<<endl; #else cout<<((arg)/5.0)<<endl; #endif
The idea here:
The pre-processor substitutes different lines of code depending on the compile-time (constant) value of the macro argument. Of course, I know this syntax doesn't work, as the # is seen as the string-ize operator instead of the standard #if, but I think this demonstrates the pre-processor functionality I am trying to achieve.
I know that I could just put a standard if statement in there, and the compiler/runtime would be left to check the value. But this is unnecessary work for the application when arg will always be passed a constant value, like 10.8 or -12.5 that only needs to be evaluated at compile-time.
Performance needs for this number-crunching application require that all unnecessary runtime conditions be eliminated whenever possible, and many constant values and macros have been used (in place of variables and functions) to make this happen. The ability to continue this trend without having to mix pre-processor code with real if conditions would make this much cleaner - and, of course, code-cleanliness is one of the biggest concerns when using macros, especially at this level.
As far as I know, you cannot have #if (or anything similar) inside your macro.
However, if the condition is known at compile-time, you can safely use a normal if statement. The compiler will optimise it (assuming you have optimisations turned on).
It's called "Dead code elimination".
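For instance, with the check written as a plain if on a value that is constant after inlining, the compiler can fold the condition and drop the untaken branch (a sketch; the function name is made up):
#include <iostream>

inline void print_scaled(double arg)
{
    // When 'arg' is a literal at the call site and the call is inlined,
    // constant propagation removes the dead branch entirely.
    if (arg > 0)
        std::cout << arg * 5.0 << std::endl;
    else
        std::cout << arg / 5.0 << std::endl;
}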
Simple, use real C++:
template <bool B> void foo_impl (int arg) { cout << arg*5.0 << endl; }
template < > void foo_impl<false>(int arg) { cout << arg/5.0 << endl; }
template <int I> void foo ( ) { foo_impl< (I>0) >(I); }
[edit]
Or, in modern C++, use if constexpr.
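A sketch of the if constexpr variant (C++17 or later, same behaviour as the templates above):
#include <iostream>

template <int I>
void foo()
{
    if constexpr (I > 0)
        std::cout << I * 5.0 << std::endl;  // e.g. foo<3>() prints 15
    else
        std::cout << I / 5.0 << std::endl;  // e.g. foo<-2>() prints -0.4
}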

Constant value in conditional expression

In a coding style question about infinite loops, some people mentioned they prefer the for(;;) style because the while(true) style gives warning messages on MSVC about a conditional expression being constant.
This surprised me greatly, since the use of constant values in conditional expressions is a useful way of avoiding #ifdef hell. For instance, you can have in your header:
#ifdef CONFIG_FOO
extern int foo_enabled;
#else
#define foo_enabled 0
#endif
And the code can simply use a conditional and trust the compiler to elide the dead code when CONFIG_FOO isn't defined:
if (foo_enabled) {
...
}
Instead of having to test for CONFIG_FOO every time foo_enabled is used:
#ifdef CONFIG_FOO
if (foo_enabled) {
...
}
#endif
This design pattern is used all the time in the Linux kernel (for instance, include/linux/cpumask.h defines several macros to 1 or 0 when SMP is disabled and to a function call when SMP is enabled).
What is the reason for that MSVC warning? Additionally, is there a better way to avoid #ifdef hell without having to disable that warning? Or is it an overly broad warning which should not be enabled in general?
A warning doesn't automatically mean that code is bad, just suspicious-looking.
Personally I start from a position of enabling all the warnings I can, then turn off any that prove more annoying than useful. The one that fires any time you cast anything to a bool is usually the first to go.
I think the reason for the warning is that you might inadvertently have a more complex expression that evaluates to a constant without realizing it. Suppose you have a declaration like this in a header:
const int x = 0;
then later on, far from the declaration of x, you have a condition like:
if (x != 0) ...
You might not notice that it's a constant expression.
I believe it's to catch things like
if( x=0 )
when you meant
if( x==0 )
A simple way to avoid the warning would be:
#ifdef CONFIG_FOO
extern int foo_enabled;
#else
extern int foo_enabled = 0;
#endif