What is the use of "static_cast<void>" in a macro? - C++

I'm seeing a macro definition like this:
#define ASSERT_VALID_PARAM(param, assertion) { static_cast<void>(param); if (!(assertion)) { throw InvalidParamError(#param, #assertion, __FILE__, __PRETTY_FUNCTION__, __LINE__); } }
I'm not able to figure out the need for static_cast<void>(param) here.
Any idea on why this is needed?

This macro is designed to validate that a real parameter satisfies a given validation rule. The logic of the macro is composed of 2 parts:
1) Validate that param is a real parameter with a valid name. This is done by the static_cast; if an invalid name is used, a compile-time error is generated.
2) Validate the "truthiness" of assertion. This is done with a simple negating if statement.
If param is a valid name and the assertion fails (assertion == false), an InvalidParamError is thrown, using the passed-in parameters as strings (via the stringizing operator #) to initialize the error object.
Since the macro otherwise uses param only as a string, it has to be validated by actual code. Since no real operation is needed, a static_cast to void is used, which discards the result and can potentially be optimized out. Without that check you could pass any token as param, which would make the information reported with the assertion meaningless.
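For example, a hypothetical use (the function resize, its parameter newSize, and the stand-in InvalidParamError class are made up for illustration; only the macro itself comes from the question):
#include <stdexcept>
#include <string>
// Minimal stand-in for the error type; the real InvalidParamError is not shown in the question.
struct InvalidParamError : std::runtime_error
{
    InvalidParamError(const char* param, const char* assertion,
                      const char* file, const char* func, long line)
        : std::runtime_error(std::string(file) + ":" + std::to_string(line) + " in " + func
                             + ": " + param + " violated " + assertion) {}
};
#define ASSERT_VALID_PARAM(param, assertion) { static_cast<void>(param); if (!(assertion)) { throw InvalidParamError(#param, #assertion, __FILE__, __PRETTY_FUNCTION__, __LINE__); } }
void resize(int newSize)
{
    ASSERT_VALID_PARAM(newSize, newSize > 0);    // OK: newSize names a real parameter
    // ASSERT_VALID_PARAM(newSzie, newSize > 0); // typo: static_cast<void>(newSzie) fails to compile
}
int main()
{
    resize(8);                                             // assertion holds, nothing happens
    try { resize(0); } catch (const InvalidParamError&) {} // assertion fails, error is thrown
}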

It is the 'C++ way' of writing
(void)param;
It makes 'use' of the variable and thus suppresses the compiler warning for an unused variable.
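For example (the callback function and its parameters are hypothetical; either spelling silences an unused-parameter warning such as gcc's -Wunused-parameter):
#include <cstdio>
void on_event(int id, void* context)    // hypothetical callback signature
{
    static_cast<void>(context);         // C++ spelling of (void)context; marks it as deliberately unused
    std::printf("event %d\n", id);
}
int main()
{
    on_event(1, nullptr);
}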

static_cast<void>(param); will evaluate param and discard the result.
If you don't add the cast to void, you may get warnings saying you are ignoring the result of an expression. And if param were not evaluated at all, even illegal code (for example a statement instead of an expression) passed as that argument would be accepted happily by the compiler.
From cppreference:
4) If new_type is the type void (possibly cv-qualified), static_cast discards the value of expression after evaluating it.

Related

Error: expression 'none(int)' is of type 'Option[system.int]' and has to be discarded

import options
template p[T] = none(T)
discard p[int]
templat.nim(5, 10) Error: expression 'none(int)' is of type 'Option[system.int]' and has to be discarded
I think writing discard in front of the template instantiation is a reasonable enough way to do what the compiler asks, no? Now it's just being grumpy.
EDIT: I've tried new things and it may be yet-another-case of very unhelpful compiler messages.
import options
template p[T](): untyped = T.none
discard p[int]()
This builds. The main change might be the untyped return type (note that typed didn't work either, with the same weird message).
And one last flabbergasting detail: T.none was fine but not none(T). I thought that with UFCS both should be equivalent.
By default, Nim will assume that the template returns a "statement list" (a block of Nim code, which doesn't denote any value). Since this must be a well-formed block, the return values of all calls inside it must be properly handled or discarded, hence you see the error.
To solve the problem, just add a return value to the template:
import options
template p[T]: auto = none(T) # notice that I added "auto" here!
discard p[int]

C++ Macro's Token-Paster as argument of a function

I was searching for a while on the net and unfortunately I didn't find an answer or a solution for my problem. Let's say I have 2 functions named like this:
1) function1a(some_args)
2) function2a(some_args)
What I want to do is write a macro that can recognize those functions when fed the correct parameter. The thing is, this parameter should also be a parameter of a C/C++ function. Here is what I did so far:
#define FUNCTION_RECOGNIZER(TOKEN) function##TOKEN()
void function1a()
{
}
void function2a()
{
}
void anotherParentFunction(const char* type)
{
FUNCTION_RECOGNIZER(type);
}
Clearly, the macro is producing "functiontype" and ignoring the argument of anotherParentFunction. I'm asking if there exists a trick or anything to perform this kind of pasting.
Thank you in advance :)
If you insist on using a macro: skip the anotherParentFunction() function and use the macro directly instead. When called with the suffix as a plain token, i.e.
FUNCTION_RECOGNIZER(1a);
it should work (pasting function and 1a yields the identifier function1a).
A more C++-like solution would be, e.g., to use an enum, then implement anotherParentFunction() with the enum as parameter and a switch that calls the corresponding function. Of course you then need to change the enum and the switch statement every time you add a new function, but you would be more flexible in choosing the names of the functions.
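A minimal sketch of that enum-based approach (the enum name and its enumerators are made up for illustration):
enum class KnownFunction { F1a, F2a };
void function1a() {}
void function2a() {}
void anotherParentFunction(KnownFunction which)
{
    switch (which) {
    case KnownFunction::F1a: function1a(); break;
    case KnownFunction::F2a: function2a(); break;
    }
}
int main()
{
    anotherParentFunction(KnownFunction::F1a);   // dispatches to function1a at runtime
}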
There are many more solutions to achieve something similar; the question really is: What is your use case? What do you want to achieve?
In 16.1.5 the standard says:
The implementation can process and skip sections of source files conditionally, include other source files, and replace macros. These capabilities are called preprocessing, because conceptually they occur before translation of the resulting translation unit.
[emphasis mine]
Originally, pre-processing was done by a separate application; it is essentially an independent language.
Today, the pre-processor is often part of the compiler, but - for example - you can't see macros etc. in the Clang AST.
The significance of this is that the pre-processor knows nothing about types or functions or arguments.
Your function definition
void anotherParentFunction(const char* type)
means nothing to the pre-processor and is completely ignored by it.
FUNCTION_RECOGNIZER(type);
this is recognized as a defined macro, but type is not a recognized pre-processor symbol, so it is treated as a literal; the pre-processor does not consult the C++ parser or interact with its AST.
It consults the macro definition:
#define FUNCTION_RECOGNIZER(TOKEN) function##TOKEN()
The argument, the literal type, is bound to TOKEN. The word function is taken as a literal and copied to the result string; the ## tells the pre-processor to paste the value of the token TOKEN literally, producing functiontype in the result string. Because TOKEN's value isn't itself recognized as a macro, it is copied as-is, and the () is appended as a literal to the result string.
Thus, the pre-processor substitutes
FUNCTION_RECOGNIZER(type);
with
functiontype();
So the bad news is: no, there is no way to do what you were trying to do. But this may be an XY Problem, and perhaps there's a solution to what you were actually trying to achieve.
For instance, it is possible to overload functions based on argument type, or to specialize template functions based on parameters, or you can create a lookup table based on parameter values.
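For instance, a runtime lookup table keyed by the string you wanted to pass could look roughly like this (a sketch only; the table contents mirror the functions from the question):
#include <map>
#include <string>
void function1a() {}
void function2a() {}
void anotherParentFunction(const std::string& type)
{
    // Runtime dispatch replaces the (impossible) preprocessing-time token pasting.
    static const std::map<std::string, void (*)()> table = {
        { "1a", &function1a },
        { "2a", &function2a },
    };
    auto it = table.find(type);
    if (it != table.end())
        it->second();
}
int main()
{
    anotherParentFunction("2a");   // calls function2a
}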

Need help regarding macro definition

I'm reading C++ code and I have found this definition:
#define USE_VAL(X) if (&X-1) {}
Does anybody have an idea what it means?
Based on the name, it looks like a way of getting rid of an "unused variable" warning. The intended use is probably something like this:
int function(int i)
{
USE_VAL(i)
return 42;
}
Without this, you could get a compiler warning that the parameter i is unused inside the function.
However, it's a rather dangerous way of going about this, because it introduces Undefined Behaviour into the code (pointer arithmetic beyond the bounds of an actual array is undefined by the standard). It is possible to add 1 to the address of an object, but not to subtract 1. Of course, with + 1 instead of - 1, the compiler could then warn about the condition being always true. It's possible that the optimiser will remove the entire if and the code will remain valid, but optimisers are getting better at exploiting "undefined behaviour cannot happen", which could actually mess up the code quite unexpectedly.
Not to mention the fact that operator& could be overloaded for the type involved, potentially leading to undesired side effects.
There are better ways of implementing such functionality, such as casting to void:
#define USE_VAL(X) static_cast<void>(X)
However, my personal preference is to comment out the name of the parameter in the function definition, like this:
int function(int /*i*/)
{
return 42;
}
The advantage of this is that it actually prevents you from accidentally using the parameter after passing it to the macro.
Typically it's to avoid an "unused return value" warning. Even though the usual "cast to void" idiom normally works for unused function parameters, gcc with -pedantic is particularly strict when ignoring the return values of functions such as fread (in general, functions marked with __attribute__((warn_unused_result))), so a "fake if" is often used to trick the compiler into thinking you are doing something with the return value.
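A sketch of that situation (assuming gcc with glibc, where fread is declared with warn_unused_result; the exact warning behaviour depends on compiler version, flags and library headers):
#include <cstdio>
int main()
{
    char buf[16];
    std::FILE* f = std::fopen("data.bin", "rb");   // hypothetical file name
    if (!f)
        return 1;
    (void)std::fread(buf, 1, sizeof buf, f);       // the void cast may still trigger -Wunused-result
    if (std::fread(buf, 1, sizeof buf, f)) {}      // the "fake if": the result is "used", warning silenced
    std::fclose(f);
    return 0;
}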
A macro is a pre-processor directive, meaning that wherever it is used, it will be replaced by the relevant piece of code.
Here, the part after USE_VAL(X) explains what USE_VAL(X) does: it takes the address of X and subtracts 1 from it; if the result is 0, nothing is done.
Wherever USE_VAL(X) is used, it will be replaced by if (&X-1) {}.

Literals and constexpr functions, compile-time evaluation

Attempting to implement a pleasing (simple, straightforward, no TMP, no macros, no unreadable convoluted code, no weird syntax when using it) compile-time hash via user-defined literals, I found that apparently GCC's understanding of what's a constant expression is grossly different from my understanding.
Since code and compiler output say more than a thousand words, without further ado:
#include <cstdio>
constexpr unsigned int operator"" _djb(const char* const str, unsigned int len)
{
static_assert(__builtin_constant_p(str), "huh?");
return len ? str[0] + (33 * ::operator"" _djb(str+1, len-1)) : 5381;
}
int main()
{
printf("%u\n", "blah"_djb);
return 0;
}
The code is pretty straightforward, not much to explain, and not much to ask about -- except it does not evaluate at compile-time. I tried using a pointer dereference instead of using an array index as well as having the recursion break at !*str, all to the same result.
The static_assert was added later when fishing in troubled waters for why the hash just wouldn't evaluate at compile-time when I firmly believed it should. Well, surprise, that only puzzled me more, but didn't clear up anything! The original code, without the static_assert, is well-accepted and compiles without warnings (gcc 4.7.2).
Compiler output :
[...]\main.cpp: In function 'constexpr unsigned int operator"" _djb(const char*, unsigned int)':
[...]\main.cpp:5:2: error: static assertion failed: huh?
My understanding is that a string literal is, well... a literal. In other words, a compile-time constant. Specifically, it is a compile-time-known sequence of constant characters starting at a constant address assigned by the compiler (and thus known), terminated by '\0'. This logically implies that the literal's compiler-calculated length as supplied to operator"" is a constexpr as well.
Also, my understanding is that calling a constexpr function with only compile-time parameters makes it eligible as an initializer for an enumeration or as a template parameter; in other words, it should result in evaluation at compile time.
Of course it is in principle always allowable for the compiler to evaluate a constexpr function at runtime, but being able to move the evaluation to compile-time is the entire point of having constexpr, after all.
Where is my fallacy, and is there a way of implementing a user-defined literal that can take a string literal so it actually evaluates at compile-time?
Possibly relevant similar questions:
Can a string literal be subscripted in a constant expression?
User defined literal arguments are not constexpr?
The first one seems to suggest that at least for char const (&str)[N] this works, and GCC accepts it, though I admittedly can't follow the conclusion.
The second one uses integer literals, not string literals, and finally addresses the issue by using template metaprogramming (which I don't want). So apparently the issue is not limited to string literals?
I don't have GCC 4.7.2 at hand to try, but your code without the static assertion (more on that later) compiles fine and executes the function at compile-time with both GCC 4.7.3 and GCC 4.8. I guess you will have to update your compiler.
The compiler is not always allowed to move the evaluation to runtime: some contexts, like template arguments and static_assert, require evaluation at compile-time, or an error if that is not possible. If you use your UDL in a static_assert you will force the compiler to evaluate it at compile-time if possible. In both my tests it does so.
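For example, a forced context looks like this (a sketch of the question's operator; the length parameter is written as std::size_t here, which is the required signature for a string-literal operator on platforms where size_t is not unsigned int):
#include <cstddef>
constexpr unsigned int operator"" _djb(const char* str, std::size_t len)
{
    return len ? str[0] + (33 * ::operator"" _djb(str + 1, len - 1)) : 5381;
}
// Contexts that require a constant expression (constexpr initializers,
// template arguments, static_assert) force compile-time evaluation:
constexpr unsigned int blah_hash = "blah"_djb;
static_assert("blah"_djb == blah_hash, "hash evaluated at compile time");
int main()
{
    return 0;
}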
Now, on to __builtin_constant_p(str). To start with, as documented, __builtin_constant_p can produce false negatives (i.e. it can return 0 for constant expressions sometimes).
str is not provably a constant expression because it is a function argument. You can force the compiler to evaluate the function at compile-time in some contexts, but that doesn't mean it can never evaluate it at runtime: some contexts never force compile-time evaluation (and in fact, in some of those contexts compile-time evaluation is just impossible). str can be a non-constant expression.
The static assertions are tested when the compiler sees the function, not once for each call the compiler sees. That makes the fact that you always call it in compile-time contexts irrelevant: only the body matters. Because str can sometimes be a non-constant expression, __builtin_constant_p(str) in that context cannot be true: it can produce false negatives, but it does not produce false positives.
To make it more clear: static_assert(__builtin_constant_p("blah"), "") will pass (well, in theory it could fail, but I doubt the compiler would produce a false negative here), because "blah" is always a constant expression, but str is not the same expression as "blah".
For completeness, if the argument in question was of a numeric type (more on that later), and you did the test outside of a static assertion, you could get the test to return true if you passed a constant, and false if you passed a non-constant. In a static assertion, it always fails.
But! The docs for __builtin_constant_p reveal one interesting detail:
However, if you use it in an inlined function and pass an argument of the function as the argument to the built-in, GCC will never return 1 when you call the inline function with a string constant or compound literal (see Compound Literals) and will not return 1 when you pass a constant numeric value to the inline function unless you specify the -O option.
As you can see, the built-in has a limitation that makes the test always return false if the expression given is a string constant.
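A GCC-only sketch of that documented limitation (the function name is made up; the comments restate what the quoted docs say):
// When the pointer arrives as a function argument, GCC never reports it as a
// constant, even if the caller passed a string literal.
inline int pointer_is_constant(const char* s)
{
    return __builtin_constant_p(s);     // 0 for a string-constant argument, per the docs
}
int main()
{
    return pointer_is_constant("blah"); // typically returns 0
}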

Why and when to use __noop?

I was reading about __noop and the MSDN example is
#if DEBUG
#define PRINT printf_s
#else
#define PRINT __noop
#endif
int main() {
PRINT("\nhello\n");
}
and I don't see the gain over just having an empty macro:
#define PRINT
The generated code is the same. What's a valid example of using __noop that actually makes it useful?
The __noop intrinsic specifies that a function should be ignored and the argument list be parsed but no code be generated for the arguments. It is intended for use in global debug functions that take a variable number of arguments.
In your case the argument is an obviously side effect free expression that can be easily optimized out, so it doesn't matter.
But if the argument expression has side effects or is so complex that the compiler can't prove that it terminates normally and has no side-effects then using __noop prevents the potentially expensive evaluation of that expression.
The second benefit is that syntactically it behaves like a function call with a variable number of arguments. So substituting it for a function call doesn't affect the parsing of the program. With some other replacements (like an empty expansion), that might be a problem in some situations.
#define PRINT
extern int some_complicated_calculation();
PRINT("%d\n", some_complicated_calculation());
would call the function even though you don't want the result.
Using __noop, the function won't be called.
You could (assuming the compiler supports variadic macros) define PRINT to ignore the arguments; but then they won't be parsed at all, and may become invalid if you change the code around them without compiling the variant that defines PRINT to do something. Using __noop, the arguments are still parsed, so are more likely to remain valid.
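A sketch of that variadic-macro alternative (printf_s comes from the MSDN example above and is MSVC-specific; the trade-off described in this answer is noted in the comments):
#include <stdio.h>
#if DEBUG
#define PRINT(...) printf_s(__VA_ARGS__)
#else
#define PRINT(...) ((void)0)   // the arguments vanish before parsing, unlike with __noop
#endif
int main()
{
    PRINT("\nhello\n");        // no call and no argument checking in non-debug builds
    return 0;
}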