Evaluate a Macro Argument Before Processing - C++

I want to be able to generate these options from a macro:
if(void* temp = func(arg)){ foo(temp, variable);return; }
if(void* temp = func2(arg)){ foo(temp, variable2);return; }
if(void* temp = func3(arg)){ foo(temp, variable3);return; }
And so on, but as you can see, 1 is the only special case.
I want to write a macro that takes a number as a parameter and generates one of these lines, potentially with numbers far greater than 3. Unfortunately, this requires handling the special case when the user passes 1 and the general case when they pass any other number. Is there a way to do this?

If you really want to use the CPP for this, it's easy enough. An indirect GLUE and an indirect SECOND macro are core tools that you could use:
#define GLUE(A,B) GLUE_I(A,B)
#define GLUE_I(A,B) A##B
#define SECOND(...) SECOND_I(__VA_ARGS__,,)
#define SECOND_I(_,X,...) X
The indirect SECOND allows you to pattern match in the preprocessor. The way that works is that you build a first argument, which is normally just a throwaway. But since the expansion is indirect, if that first argument you build is a macro, it expands first (namely, as part of argument substitution for the variadic). If that expansion contains a comma, it can shift a "new" second argument in right before the indirection picks the second one. You can use that to build your special cases.
Here's a cpp pattern matcher using this construct that returns its argument unless it is 1, in which case it expands to no tokens:
#define NOT_ONE(N) SECOND(GLUE(TEST_IF_1_IS_,N),N)
#define TEST_IF_1_IS_1 ,
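To see how the pattern matching plays out, here is a rough hand trace of the two cases (my own walkthrough, following the definitions above, not part of the original answer):

// NOT_ONE(2): TEST_IF_1_IS_2 is not a macro, so nothing shifts
NOT_ONE(2) -> SECOND(TEST_IF_1_IS_2,2) -> SECOND_I(TEST_IF_1_IS_2,2,,) -> 2
// NOT_ONE(1): TEST_IF_1_IS_1 expands to a comma, shifting an empty argument into the second slot
NOT_ONE(1) -> SECOND(TEST_IF_1_IS_1,1) -> SECOND_I(,,1,,) -> (nothing)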
Using that, your macro might be:
#define DISPATCH_CASE(N) \
if(void* temp = GLUE(func,NOT_ONE(N))(arg)){ \
foo(temp, GLUE(variable,NOT_ONE(N))); \
return; \
}
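Tracing it by hand, DISPATCH_CASE(1) and DISPATCH_CASE(2) should reproduce the two kinds of line from the question (whitespace aside):

DISPATCH_CASE(1) // -> if(void* temp = func(arg)){ foo(temp, variable); return; }
DISPATCH_CASE(2) // -> if(void* temp = func2(arg)){ foo(temp, variable2); return; }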
Demo (coliru)
Update: Visual Studio version
But I'm on Visual Studio, and I can't make it work. I think the problem is that __VA_ARGS__ expansion works differently in Visual Studio.
For VS, I've found that another level of indirection of a particular sort (one that separates the macro from its arguments, so the argument list can evaluate in a simple (...) context before it's applied) helps it figure out that commas delimit arguments. Typically I repeat the same pattern in multiple macros to avoid blue paint.
Here, that translates to the slightly uglier:
#define GLUE(A,B) GLUE_C(GLUE_I,(A,B))
#define GLUE_I(A,B) A##B
#define GLUE_C(A,B) A B
#define SECOND(...) SECOND_C(SECOND_I,(__VA_ARGS__,,))
#define SECOND_I(_,X,...) X
#define SECOND_C(A,B) A B
Demo (godbolt)

Related

Make C++ macro2 contain quoted body of macro1

I'm trying to make a simple system that counts the number of builds and includes this info in a .rc file (for Windows), and I've run into a problem. Here it is:
#define QUOTE(s) #s
#define A 0,0,0,1
#define A_STR QUOTE(A)
A_STR expands to "A", not to "0,0,0,1" as I expected.
Well, I need A_STR to be a string representation of A (that's what windres expects to see in the .rc file), but I can't find a way to do this.
I've already tried something like #define A_STR #A, but it simply expands to #0,0,0,1.
I also tried using qmake like this: DEFINES *= A_STR="<here-is-how-I-get-version>", but gcc receives it without the quotes, so I end up with the same problem.
When a C preprocessor macro is expanded, its parameters are replaced by their literal arguments, so s is replaced by A when QUOTE(s) is invoked with the argument A. Normally, after this substitution is complete, the expanded text is scanned again to expand any macros embedded in it, which would cause the A to expand to 0,0,0,1. However, when the stringification operator # is applied to a parameter, the stringification happens first, so that argument never gets a chance to be expanded; this is why you get the stringified "A" as the final expansion of A_STR.
This problem is normally solved by introducing a second level of indirection, which gives the initial macro argument a second chance to expand:
#define QUOTE2(A) #A
#define QUOTE(A) QUOTE2(A)
However, this would not actually work in your case, because in the first level of expansion the A would expand to 0,0,0,1, which would then be taken as four arguments to QUOTE2() and rejected as an invalid macro call.
You can solve this with variadic macro arguments and __VA_ARGS__:
#define QUOTE2(...) #__VA_ARGS__
#define QUOTE(...) QUOTE2(__VA_ARGS__)
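With those two definitions in place, the original example should now stringify the expanded value; a quick sketch of the expected expansion:

#define A 0,0,0,1
#define A_STR QUOTE(A)   // QUOTE(A) -> QUOTE2(0,0,0,1) -> "0,0,0,1"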

Working of pre-processor C++

#define NAME VALUE
I know that whenever the compiler sees this, it will replace NAME with VALUE. But I'm confused about the working of pre-processing directives such as:
#define CONFIG_VAR(name, type, value)
This does not tell the compiler to replace anything, but I can see statements like
CONFIG_VAR(rank, int, 100)
which compile successfully. How does this work?
In your example, that would simply do nothing at all. Any arguments, even ones that look like they should give compilation errors, are accepted, and the whole macro call is replaced with nothing.
If, however, you later replace the definition with something like:
#define CONFIG_VAR(name, type, value) add_config_var<type>(name, value)
it would suddenly do something useful. So, I'd guess that macro is a placeholder for functionality which is not (yet) implemented or not available in that part of the program.
When you say:
#define FOO BAR
what the preprocessor does is replace the text FOO with the text BAR every time it sees it after this point; this is a macro definition. The process is called macro expansion. It is mostly used to define constants, like:
#define N 128
#define MASK (~(1 << 4))
It can be (ab)used to do very funky stuff, since the preprocessor knows nothing of expressions, statements, or anything else. So:
#define CONST (1 + 3 << (x))
is actually OK, and will expand to (1 + 3 << (x)) each time it is seen, using whatever x is in scope at that point. Also gunk like:
#define START 5 * (1 +
#define END + 5)
followed by START 2 + 3 + 4 END predictably gives 5 * (1 + 2 + 3 + 4 + 5).
There is also the option of defining macros with parameters, like:
#define BAD_SQUARE(x) x * x
which, if called as BAD_SQUARE(a), will expand to a * a. But BAD_SQUARE(a + b) expands to a + b * a + b, which is not what was intended (presumably...).
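The usual remedy (a standard idiom, added here for completeness rather than quoted from the answer) is to parenthesize both the parameter and the whole expansion:

#define SQUARE(x) ((x) * (x))
SQUARE(a + b)   // expands to ((a + b) * (a + b))

Note that even this version still evaluates x twice (SQUARE(i++) remains broken), which is one more reason to prefer inline functions.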
This comes from the dark ages of C. Today's C and C++ have safer, cleaner mechanisms to get the same result (use const in C++; in C it sadly defines a variable, not a true constant; use inline functions in C and C++, or templates in C++). There is too much code out there that uses the preprocessor this way (and too many fingers that write this way), so it is practically impossible to get rid of it. As a rule of thumb, learn to read code that uses macros, while learning to write code without them (as far as is reasonable; there are times when they come in mighty handy...).
This is a macro (more common in C than in C++). With the definition you provided, the preprocessor will simply remove occurrences of that "function". A common use case is logging:
#ifdef DEBUG
#define dprintf(...) printf(__VA_ARGS__)
#else
#define dprintf(...) // This will remove dprintf lines
#endif
In C++, I believe the general convention is to use inline functions instead, as they provide the same performance but are also type checked.
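As a minimal sketch of that convention (the function name and layout here are my own, not taken from the answer), the debug logger above could be written as an inline function instead of a macro:

#include <cstdio>
#include <cstdarg>

// Inline replacement for the dprintf macro: type-checked and always compiled.
inline void dprintf_fn(const char* fmt, ...)
{
#ifdef DEBUG
    va_list args;
    va_start(args, fmt);
    std::vprintf(fmt, args);
    va_end(args);
#else
    (void)fmt;   // nothing is printed in release builds
#endif
}

One behavioral difference from the macro: with the function, the arguments are still evaluated in release builds, whereas the empty macro removes them entirely.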
If this really is the entire macro definition, then it simply defines this function-like macro to expand to nothing (an empty string). For example, in the source,
CONFIG_VAR(rank, int, 100);
will be transformed into
;
In this case the preprocessor simply removes such statements (replaces them with nothing). This is a widely used technique.
Here is an example where it matters (just one of many possible usages):
#if DEBUG_ON
#define LOG(level, string) SomeLogger(level, string)
#else
#define LOG(level, string)
#endif
You should probably get more familiar with the C preprocessor.
There is also a closely related technique (the X macro) which generates code for repetitive lists based on actions you define.
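For reference, a minimal X-macro sketch (my own illustration of the technique mentioned above, with invented names):

#define COLOR_LIST \
    X(Red)         \
    X(Green)       \
    X(Blue)

// Generate an enum from the list...
#define X(name) name,
enum Color { COLOR_LIST };
#undef X

// ...and a parallel string table from the same list.
#define X(name) #name,
const char* color_names[] = { COLOR_LIST };
#undef X

The point is that the list is written once; each #define X(...) decides what code gets generated from it.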

Splitting arguments in C++ preprocessor

Some legacy code I am working on has a macro which returns a comma-separated list intended to be used as function arguments. This is ugly, but the configuration file contains many of these and it would be difficult to change now.
#define XY1 0,0
#define XY2 1,7
...
void fun_point(int x, int y);
fun_point(XY1);
This works fine as long as it is a function being called. However, when trying to call another macro with these parameters, the whole thing is treated as one argument rather than being split at the comma into two arguments:
#define MAC_POINT(x,y) (x+y)
MAC_POINT(XY1) // error: XY1 is passed as a single argument
Is there a workaround for this problem without changing the XY definitions?
Kinda. The following works:
#define MAC_POINT(x,y) (x+y)
#define MAC_POINT1(xy) MAC_POINT(xy)
#define XY x,y
MAC_POINT(x,y)
MAC_POINT1(XY)
However, you have to switch from MAC_POINT to MAC_POINT1 whenever you pass the pair as a single macro argument.
Another possibility is this:
#define MAC_POINT(x,y) (x+y)
#define MAC_POINT1(xy) MAC_POINT xy
#define XY x,y
MAC_POINT1((x,y))
MAC_POINT1((XY))
Now you have to change all your calls to the macro, but at least they're consistent.
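Applied to the original XY1 definition, the two alternatives should expand roughly like this (my own trace, following the definitions above):

// First form:  MAC_POINT1(XY1)   -> MAC_POINT(0,0)  -> (0+0)
// Second form: MAC_POINT1((XY1)) -> MAC_POINT (0,0) -> (0+0)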

Why do I need double layer of indirection for macros?

At: C++ FAQ - Miscellaneous technical issues - [39.6] What should be done with macros that need to paste two tokens together?
Could someone explain to me why? All I read is "trust me", but I can't just trust something because someone said so.
I tried the single-layer approach and I can't find any bugs:
#define mymacro(a) int a ## __LINE__
mymacro(prefix) = 5;
mymacro(__LINE__) = 5;
int test = prefix__LINE__*__LINE____LINE__; // fine
So why do I need to do it like this instead (quote from the webpage):
However you need a double layer of indirection when you use ##.
Basically you need to create a special macro for "token pasting" such
as:
#define NAME2(a,b) NAME2_HIDDEN(a,b)
#define NAME2_HIDDEN(a,b) a ## b
Trust me on this — you really need to do
this! (And please nobody write me saying it sometimes works without
the second layer of indirection. Try concatenating a symbol with
__LINE__ and see what happens then.)
Edit: Could someone also explain why he uses NAME2_HIDDEN before it is defined below? It seems more logical to define the NAME2_HIDDEN macro before using it. Is there some sort of trick here?
The relevant part of the C spec:
6.10.3.1 Argument substitution
After the arguments for the invocation of a function-like macro have been identified,
argument substitution takes place. A parameter in the replacement list, unless preceded
by a # or ## preprocessing token or followed by a ## preprocessing token (see below), is
replaced by the corresponding argument after all macros contained therein have been
expanded. Before being substituted, each argument’s preprocessing tokens are
completely macro replaced as if they formed the rest of the preprocessing file; no other
preprocessing tokens are available.
The key part that determines whether you want the double indirection or not is the second sentence and the exception in it -- if the parameter is involved in a # or ## operation (such as the params in mymacro and NAME2_HIDDEN), then any other macros in the argument are NOT expanded prior to doing the # or ##. If, on the other hand, there's no # or ## IMMEDIATELY in the macro body (as with NAME2), then other macros in the parameters ARE expanded.
So it comes down to what you want: sometimes you want all macros expanded FIRST and then the # or ## done (in which case you want the double layer of indirection), and sometimes you DO NOT want the macros expanded first (in which case you can't have the double layer; you need to do it directly).
__LINE__ is a special macro that is supposed to resolve to the current line number. When you do a token paste with __LINE__ directly, however, it doesn't get a chance to resolve, so you end up with the token prefix__LINE__ instead of, say, prefix23, like you would probably be expecting if you would write this code in the wild.
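To make the difference concrete, here is a small sketch (NAME1 is my own name for the single-layer version; NAME2/NAME2_HIDDEN are the macros from the FAQ quote above):

#define NAME1(a,b) a ## b             // single layer: pastes before the arguments expand
#define NAME2(a,b) NAME2_HIDDEN(a,b)  // double layer: arguments expand first
#define NAME2_HIDDEN(a,b) a ## b

int NAME1(var_, __LINE__) = 0;  // declares var___LINE__ (the paste blocks expansion)
int NAME2(var_, __LINE__) = 0;  // declares e.g. var_6, using the current line number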
Chris Dodd has an excellent explanation for the first part of your question. As for the second part, about the definition sequence, the short version is that #define directives by themselves are not evaluated at all; they are only evaluated and expanded when the symbol is found elsewhere in the file. For example:
#define A a //adds A->a to the symbol table
#define B b //adds B->b to the symbol table
int A;
#undef A //removes A->a from the symbol table
#define A B //adds A->B to the symbol table
int A;
The first int A; becomes int a; because that is how A is defined at that point in the file. The second int A; becomes int b; after two expansions. It is first expanded to int B; because A is defined as B at that point in the file. The preprocessor then recognizes that B is a macro when it checks the symbol table. B is then expanded to b.
The only thing that matters is the definition of the symbol at the point of expansion, regardless of where the definition is.
The most non-technical answer, gathered from all the links here (and links of links ;) ), is that a single layer of indirection, macro(x) #x, stringifies the name of the macro you pass in, while a double layer stringifies the macro's value.
#define valueOfPi 3
#define macroHlp(x) #x
#define macro(x) macroHlp(x)
#define myVarOneLayer "Apprx. value of pi = " macroHlp(valueOfPi)
#define myVarTwoLayers "Apprx. value of pi = " macro(valueOfPi)
printf(myVarOneLayer); // out: Apprx. value of pi = valueOfPi
printf(myVarTwoLayers); // out: Apprx. value of pi = 3
What happens at printf(myVarOneLayer)
printf(myVarOneLayer) is expanded to printf("Apprx. value of pi = " macroHlp(valueOfPi))
macroHlp(valueOfPi) tries to stringify its input; the input itself is not evaluated. Its only purpose in life is to take an input and stringify it. So it expands to "valueOfPi"
So, what happens at printf(myVarTwoLayers)
printf(myVarTwoLayers) is expanded to printf("Apprx. value of pi = " macro(valueOfPi))
macro(valueOfPi) has no stringification operation, i.e. there is no #x in its expansion, but there is an x, so it has to expand x and pass the result to macroHlp for stringification. It expands to macroHlp(3), which in turn stringifies the number 3, since it does use #x.
The order in which macros are declared is not important; the order in which they are used is. If you were to actually use that macro before it was declared (in actual code, that is, not inside another macro, which stays dormant until invoked), then you would get an error of sorts. But most sane people don't go around doing that kind of thing: you write a macro, and further down you write the code that uses it. Your question is really several questions in one, so I'll just answer this part; I think you should have broken it down a little more.

Incrementing Preprocessor Macros

I'm trying to make a simple preprocessor loop. (I realize this is a horrible idea, but oh well.)
// Preprocessor.h
#ifndef PREPROCESSOR_LOOP_ITERATION
#define MAX_LOOP_ITERATION 16 // This can be changed.
#define PREPROCESSOR_LOOP_ITERATION 0
#endif
#if (PREPROCESSOR_LOOP_ITERATION < MAX_LOOP_ITERATION)
#define PREPROCESSOR_LOOP_ITERATION (PREPROCESSOR_LOOP_ITERATION + 1) // Increment PREPROCESSOR_LOOP_ITERATION.
#include "Preprocessor.h"
#endif
The issue is that it doesn't look like PREPROCESSOR_LOOP_ITERATION is being incremented, so it just keeps including itself infinitely. If I change the line to an actual integer (like 17), the preprocessor skips over the #include directive properly.
What am I doing incorrectly?
The "problem" is that macros are lazily evaluated. Consider your macro definition:
#define PREPROCESSOR_LOOP_ITERATION (PREPROCESSOR_LOOP_ITERATION + 1)
This defines a macro named PREPROCESSOR_LOOP_ITERATION and its replacement list is the sequence of five preprocessing tokens (, PREPROCESSOR_LOOP_ITERATION, +, 1, and ). The macro is not expanded in the replacement list when the macro is defined. Macro replacement only takes place when you invoke the macro. Consider a simpler example:
#define A X
#define B A
B // this expands to the token X
#undef A
#define A Y
B // this expands to the token Y
There is an additional rule that if the name of a macro being replaced is encountered in a replacement list, it is not treated as a macro and thus is not replaced (this effectively prohibits recursion during macro replacement). So, in your case, any time you invoke the PREPROCESSOR_LOOP_ITERATION macro, it gets replaced with
( PREPROCESSOR_LOOP_ITERATION + 1 )
then macro replacement stops and preprocessing continues with the next token.
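A tiny example of that non-recursion rule (my own, not from the answer):

int n = 0;
#define n (n + 1)
int x = n;   // expands to int x = (n + 1); the inner n is not replaced again, so it refers to the variable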
You can perform limited arithmetic with the preprocessor by defining a sequence of macros and making use of the concatenation (##) operator, but it's quite tedious. You should consider using the Boost.Preprocessor library to help you with this; it works with both C and C++ code. It only allows limited iteration, but what it does allow is extraordinarily useful. The closest feature to your use case is probably BOOST_PP_ITERATE. Other facilities, like the sequence (BOOST_PP_SEQ) handlers, are very helpful for writing generative code.
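To give a flavor of the library, here is a small sketch using BOOST_PP_REPEAT, a simpler facility than BOOST_PP_ITERATE (the header paths and callback signature are recalled from the Boost docs, so double-check them against your Boost version):

#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/cat.hpp>

// Callback: invoked as DECLARE_HANDLER(z, 0, handler), DECLARE_HANDLER(z, 1, handler), ...
#define DECLARE_HANDLER(z, n, prefix) void BOOST_PP_CAT(prefix, n)();

// Expands to: void handler0(); void handler1(); void handler2(); void handler3();
BOOST_PP_REPEAT(4, DECLARE_HANDLER, handler)

#undef DECLARE_HANDLER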
EDIT: As James pointed out, my original solution did not work due to lazy evaluation of macros. If your compiler supports it, the __COUNTER__ macro increments by one every time it is expanded, and you can use it to do a simple preprocessor loop like this:
// Preprocessor.h
#define MAX_LOOP_ITERATION 16 // Be careful of off-by-one
// do stuff
#if (__COUNTER__ < MAX_LOOP_ITERATION)
#include "Preprocessor.h"
#endif
I verified this in Visual C by running cl /P Preprocessor.h.
Seriously, find another way to do this.
The preprocessor should be relegated to include guards and simple conditional compilation.
Everything else it was ever useful for has a better way to do it in C++ (inlining, templates and so forth).
The fact that you state "I realize this is a horrible idea..." should be a dead giveaway that you should rethink what you're doing :-)
What I would suggest is that you step back and tell us the real problem that you're trying to solve. I suspect that implementing recursive macros isn't the problem, it's a means to solve a problem you're having. Knowing the root problem will open up all sorts of other wondrous possibilities.