Using the F2 key (Rename Symbol) in VS Code works well for int and other ordinary variables, but when I use it on a value defined by a #define statement, it renames all the references just fine yet fails to rename the macro name in the #define itself. For example, let's say I have the following code:
#define FOO 1
int foo = FOO;
If I place my cursor over FOO in #define FOO 1, press F2 and type a new name (e.g. BAR), VS Code lists all the locations where FOO was found and asks whether I want to refactor them (as expected). I press Apply, but I end up with:
#define FOO 1
int foo = BAR;
which isn't very helpful :D. Now I have a broken reference and have to fix both occurrences manually.
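For reference, the result I expected was:

#define BAR 1
int foo = BAR;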
The behavior is the same regardless of whether my cursor is on the first FOO (#define FOO 1) or the second (int foo = FOO).
Background: My code, which I cannot post here, will eventually run on a microcontroller, and the macros offer a way to create multiple pin-definition functions via a single macro mechanism. I use Windows and GCC to experiment with them.
I tried to abstract the problem as much as possible. I use the standard console functions because they are a convenient way to display output in the console window. Accordingly, I save the file as .cpp and compile it with g++ on Windows.
Say I set up my code like this:
#include <iostream>

#define MACRO2(_x) foo##_x(_x)
#define MACRO1(_x) MACRO2(_x)
#define BAR 3

void fooBAR(int num)
{
    std::cout << num << std::endl;
}
If I run the following code (working example):
int main()
{
    MACRO2(BAR);
    return 0;
}
first BAR is pasted via ##_x, forming the name of the function to be called, and then BAR is substituted as that function's argument and expanded to its value, so we get fooBAR(3). The code works; there are no errors.
Now, if I try to add a macro in between (and this is the real-world situation I am faced with, for reasons I cannot go into), my code looks like this:
int main()
{
    MACRO1(BAR);
    return 0;
}
But this code throws an error: because _x in MACRO1's replacement list is not an operand of # or ##, the argument BAR is macro-expanded to 3 before substitution, so MACRO1(BAR) becomes MACRO2(3), which leads to foo3(3), a function that isn't defined, as confirmed by the error log:
error: 'foo3' was not declared in this scope
So the requirements are:
I need to pass BAR into MACRO1, and it needs to be passed on to MACRO2 without being expanded.
The token BAR has to stay exactly as it is. I know I could use ## to prevent it from expanding, but then I would have to append a character to BAR, and the function call wouldn't work anymore.
Is it possible to somehow get this done? Pass a macro to another macro as an argument, without the initial macro being expanded in the process?
But this code throws an error: because _x in MACRO1's replacement list is not an operand of # or ##, the argument BAR is macro-expanded to 3 before substitution, so MACRO1(BAR) becomes MACRO2(3), which leads to foo3(3)
Yes. This is the specified preprocessor behavior for your particular set of macros.
After they are identified, the arguments to a function-like macro are fully macro-expanded before being substituted into the macro's replacement text, except where they are operands of the ## or # preprocessor operator. Any appearances of those operators are evaluated, and then the resulting text is rescanned, along with any following text as appropriate, for additional macros to expand.
Is it possible to somehow get this done? Pass a macro to another macro as an argument, without the initial macro being expanded in the process?
Only where the argument is the operand of a ## or # operator. The latter doesn't help you, but the former affords a workaround: you can pass an additional, empty argument so that you can perform a concatenation without changing the wanted argument:
#define MACRO2(_x) foo##_x(_x)
#define MACRO1(_x,dummy) MACRO2(_x##dummy)
#define BAR 3

int main()
{
    MACRO1(BAR,);
    return 0;
}
That expands to
int main()
{
    fooBAR(3);
    return 0;
}
If you want to avoid the extra comma, then you can do so by making MACRO1 variadic:
#define MACRO2(_x) foo##_x(_x)
#define MACRO1(_x,...) MACRO2(_x##__VA_ARGS__)
#define BAR 3

int main()
{
    MACRO1(BAR);
    return 0;
}
That expands to the same thing as the other. (Note that, strictly speaking, the variadic part of a macro invocation had to receive at least one argument before C++20, so MACRO1(BAR) relies on a widely supported compiler extension in older language modes.)
Do note that both of these approaches afford the possibility of an error being introduced by providing unwanted extra argument values to the top-level macro. One would probably suppose that most such errors would be caught at compile time, as the expansion would result in broken code, like the attempt in the question. But it is hard to rule out the possibility that such an error would coincidentally expand to something that happened to be valid, but wrong.
One way to accomplish this is to slightly change the definition of BAR, turning it into a function-like macro:
#define MACRO2(_x) foo##_x(_x())
#define MACRO1(_x) MACRO2(_x)
#define BAR() 3
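This works because a function-like macro name is only expanded when it is followed by parentheses; the bare token BAR in MACRO1(BAR) is therefore passed through untouched. A minimal sketch of the resulting call site, reusing the fooBAR function from the question:

int main()
{
    MACRO1(BAR);   // -> MACRO2(BAR) -> fooBAR(BAR()) -> fooBAR(3), prints 3
    return 0;
}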
I'm looking at some code and I've come across the following line in a macro:
int foo = 0; (foo);
It compiles just fine. In fact, it seems that
0;
is a valid statement in C and C++.
I've taken a look at the produced assembly in debug and release builds (on MSVC), and it makes no difference to the assembly. My test was a simple one, though (with and without (foo);):
int main()
{
    int foo = 0;
    (foo);
    return 0;
}
My question is: why would anyone want to do this? I'm sure (foo); is in the macro for a reason, but I'm not sure what it is.
For context, the macro it is found in looks like this (I've renamed the variables):
#define MY_MACRO int _foo = 0; (_foo); UINT _bar = CP_THREAD_ACP; (_bar); LPCWSTR _baz = NULL; (_baz); LPCSTR _thing = NULL; (_thing)
In the code, it is simply called like
MY_MACRO;
//other code...
As Fred Larson suggested in the comments, it could be used to suppress compiler warnings. I tried this code:
int foo = 0;
foo;
I debugged the code and the compiler completely skipped over the second line, so it does nothing at runtime. Of course, if you are worried that these lines affect your macro, you can always step through the code with a debugger as I did.
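For illustration, here is a minimal sketch of the warning-suppression idiom (the exact behavior is compiler-specific; MSVC stops reporting an otherwise-unreferenced local once it appears in such a statement, while GCC and Clang users usually write the cast-to-void form instead):

void example()
{
    int foo = 0;   // set but never otherwise used
    (foo);         // no-op statement that counts as a "use" of foo (MSVC idiom)
    (void)foo;     // the more portable cast-to-void equivalent
}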
It might be the result of macro expansion.
It might just be someone being silly.
It could be to suppress compiler warnings.
In any case, statements like that are perfectly legal in the language and have no actual effect on the program.
Any competent compiler will just remove them during compilation.
I have seen a lot of programs using #define at the beginning. Why shouldn't I declare a constant global variable instead?
(This is a C++ answer. In C, there is a major advantage to using macros, which is that they are pretty much the only way you can get a true constant-expression.)
What is the benefit of using #define to declare a constant?
There isn't one.
I have seen a lot of programs using #define at the beginning.
Yes, there is a lot of bad code out there. Some of it is legacy, and some of it is due to incompetence.
Why shouldn't I declare a constant global variable instead?
You should.
A const object is not only immutable, but has a type and is far easier to debug, track and diagnose, since it survives past the preprocessor (and, crucially, has a name in a debug build).
Furthermore, if you abide by the one-definition rule, you don't have to worry about causing an almighty palaver when you change the definition of a macro and forget to re-compile literally your entire project, along with any code that depends on that project.
And, yes, it's ironic that const objects are still called "variables"; of course, in practice, they are not variable in the slightest.
What is the benefit of using #define to declare a constant?
Declaring a constant with #define is a superior alternative to using literals and magic numbers (that is, code is much better off with a value defined as #define NumDaysInWeek (7) than with a bare 7), but it is not a superior alternative to defining proper constants.
You should declare a constant instead of #define-ing it, for the following reasons:
#define performs a token/textual replacement in the source code, not a semantic replacement.
This breaks namespace use, because a #defined name is replaced by its value everywhere instead of being resolved as a qualified name.
That is, given:
namespace x {
#define abc 1
}
x::abc is an error, because the compiler actually tries to compile x::1 (which is invalid).
abc, on the other hand, will always be seen as 1, preventing you from redefining or reusing the identifier abc in any other local context or namespace.
#define inserts its parameters textually, instead of as variables:
#define max(a, b) a > b ? a : b;
int a = 10, b = 5;
int c = max(a++, b); // expands to (a++ > b ? a++ : b); a is incremented twice, so c == 11 and a ends up as 12
#define has absolutely no semantic information:
#define pi 3.14 // this is either double or float, depending on context
/*static*/ const double pi = 3.14; // this is always double
#define makes you (the developer) see different code than the compiler sees.
This may not seem like a big thing, but the errors created this way are obscure and unexpected, and they waste a lot of time: you can look at an error where the code looks perfectly fine to you, and curse the compiler for half a day, only to discover later that one of the symbols in your expression actually means something completely different.
If you step into code that uses one of the definitions of pi above with a debugger, the first one will cause the debugger to tell you that pi is an unknown symbol.
Edit (valid example for a local static const variable):
const result& some_class::some_function(const int key) const
{
    if (map.count(key))      // map is a std::map<int,result> member of some_class
        return map.at(key);  // return a (const result&) to the existing element
    static const result empty_value{ /* ... */ };  // "static" is required here
    return empty_value;      // return a (const result&) to the empty element
}
This shows a case where you have a const value, but its storage needs to outlast the function, because you are returning a const reference (and the value doesn't exist in the data of some_class). It's a relatively rare case, but a valid one.
According to the "father" of C++, Bjarne Stroustrup, defining constants using macros should be avoided.
The biggest problems when using macros as constants include:
Macros replace every occurrence of the name in the code, including, for example, variable definitions. This may result in compile errors or undefined behavior.
Macros make the code difficult to read and understand, because the complexity of a macro can be hidden in a header that is not clearly visible to the programmer.
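A minimal sketch of the alternative he recommends (constexpr requires C++11, and the names here are just illustrative):

const int max_users = 100;       // typed, scoped, and visible to the debugger
constexpr double pi = 3.14159;   // guaranteed compile-time constant since C++11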
(Apologies for the long title, but I couldn't think of a less specific one which would be clear enough)
I need to pass the name of an (object-like) macro to a nested (function-like) macro, as in the following (trivial) example:
#define ROOT_FUNC(INPUT) int v_ ## INPUT = INPUT
#define CALLER_FUNC(INPUT) ROOT_FUNC(INPUT)
#define INTA 1
#define INTB 2
#define INTC 3
Now, if I write ROOT_FUNC(INTA); in my code I get an integer variable called v_INTA with the value 1. If I define a variable in code, int INTD = 4;, and then write CALLER_FUNC(INTD); I end up with an integer variable called v_INTD with the value 4.
But if I write CALLER_FUNC(INTA); I get an integer variable called v_1 with a value of 1, because INTA is expanded to 1 at the time CALLER_FUNC is expanded, before ROOT_FUNC is expanded (i.e. ROOT_FUNC(1) is what gets expanded).
If I change line 2 to: #define CALLER_FUNC(INPUT) ROOT_FUNC(#INPUT) (i.e. stringifying INPUT), a compiler error occurs, because the # operator stringifies the argument without expanding it, so the compiler is asked to define a variable by pasting v_ onto the string "INTA" (an invalid token paste) and to give it a string as its value.
I know the preprocessor is fairly primitive, but is there any way of achieving what I'm after?
(Second edit for further clarification: I want CALLER_FUNC(INTA); to expand first to ROOT_FUNC(INTA);, then to int v_INTA = 1;, i.e. I want INTA to be expanded inside ROOT_FUNC rather than outside it. I am looking for an answer in principle, not just any way to change CALLER_FUNC to produce the result int v_INTA = 1;, which would be trivial.)
P.S.
In case you are wondering, I originally had a use case involving signal handling (e.g. taking macro names like SIGINT as inputs for nested macros), but got around these limitations by simplifying my structure and abandoning nested macros; hence this question is purely academic.
If you can expand the first macro to take two arguments, you could get it to work like this:
#define FUNC(intname, intv) int v##intname = intv
#define CALL_FUNC(intv) FUNC(_##intv, intv)
#define INT1 1
#define INT2 2
int main(void)
{
    int INTD = 4;
    CALL_FUNC(INT1);
    CALL_FUNC(INT2);
    CALL_FUNC(INTD);
}
The preprocessed output (from GCC) looks something like this:
int main(void)
{
    int INTD = 4;
    int v_INT1 = 1;
    int v_INT2 = 2;
    int v_INTD = INTD; // not sure if you want the value of INTD here - I guess it doesn't matter?
}
Which I guess is what you are after, if I read your question right?
The token pasting prevents the preprocessor from expanding the name: it simply generates a new token that is passed to the second macro (which then pastes it onto v to form the variable name), while the value (which is expanded) is passed down as the second argument.
EDIT1: Reading more through what you are after, I'm guessing the above trick is not what you really want... ah well.
Why is #define bad and what is the proper substitute?
Someone has told me that #define is bad. Well, I honestly don't understand why it's bad. If it's bad, then what should I use instead?
#include <iostream>
#define stop() cin.ignore(numeric_limits<streamsize>::max(), '\n');
#define is not inherently bad. However, there are usually better ways of doing what you want. Consider an inline function:
inline void stop() {
    cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
(Really, you don't even need inline for a function like that. Just a plain ordinary function would work just fine.)
It's bad because it's indiscriminate: anywhere you have stop() in your code will get replaced, whatever the context.
The way you solve it is by putting that code into its own function.
In C++, using #define is not inherently bad, although alternatives should be preferred. There are some contexts, such as include guards, in which there is no other portable/standard alternative.
It should be avoided because the C preprocessor operates (as the name suggests) before the compiler. It performs simple textual replacement, without regard to other definitions. This means the input that reaches the compiler sometimes doesn't make sense. Consider:
// in some header file.
#define FOO 5
// in some source file.
int main ()
{
    // pre-compiles to: "int 5 = 2;"
    // the compiler will vomit a weird compiler error.
    int FOO = 2;
}
This example may seem trivial, but real examples exist. Some Windows SDK headers define:
#define min(a,b) ((a<b)?(a):(b))
And then code like:
#include <Windows.h>
#include <algorithm>
int main ()
{
    // pre-compiles to: "int i = std::((1<2)?(1):(2));"
    // the compiler will vomit a weird compiler error.
    int i = std::min(1, 2);
}
When there are alternatives, use them. In the posted example, you can easily write:
void stop() {
    cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
For constants, use real C++ constants:
// instead of
#define FOO 5
// prefer
static const int FOO = 5;
This will guarantee that your compiler sees the same thing you do, and you benefit from normal scoping rules (a local FOO variable will shadow the global FOO), as expected.
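A minimal sketch of that scoping behavior (illustrative only):

static const int FOO = 5;

int main ()
{
    int FOO = 2;   // fine: the local FOO shadows the global constant
    return FOO;    // returns 2
}

// with "#define FOO 5", the same body would pre-compile to "int 5 = 2;" and fail to compile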
It's not necessarily bad, it's just that most things people have used it for in the past can be done in a much better way.
For example, that snippet you provide (and other code macros) could be an inline function, something like (untested):
static inline void stop (void) {
    cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
In addition, there are all the other things that code macros force you to do "macro gymnastics" for, such as if you wanted to call the very badly written:
#define f(x) x * x * x + x
with:
int y = f (a + 1); // a + 1 * a + 1 * a + 1 + a + 1 (4a+2, not a^3+a)
int z = f (a++); // a++ * a++ * a++ + a++
The first of those will totally surprise you with its results due to the precedence of operators, and the second will give you undefined behaviour. Inline functions do not suffer these problems.
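For contrast, a minimal sketch of the function version, which evaluates its argument exactly once and respects precedence (the surrounding variables are illustrative):

static inline int f (int x) { return x * x * x + x; }

int main ()
{
    int a = 2;
    int y = f (a + 1);  // really computes (a+1)^3 + (a+1), i.e. 30
    int z = f (a++);    // well-defined: a is incremented exactly once
    return y + z;
}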
The other major thing that macros are used for is for providing enumerated values such as:
#define ERR_OK 0
#define ERR_ARG 1
/* ... */
#define ERR_MEM 99
and these are better done with enumerations.
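For example, a sketch of the enumeration equivalent (the enum name is just illustrative):

enum ErrorCode {
    ERR_OK  = 0,
    ERR_ARG = 1,
    /* ... */
    ERR_MEM = 99
};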
The main problem with macros is that the substitution is done early in the translation phase, and information is often lost because of this. For example, a debugger generally doesn't know about ERR_ARG since it would have been substituted long before the part of the translation process that creates debugging information.
But, having maligned them enough, they're still useful for defining simple symbols which can be used for conditional compilation. That's pretty much all I use them for in C++ nowadays.
#define by itself is not bad, but it does have some bad properties to it. I'll list a few things that I know of:
"Functions" do not act as expected.
The following code seems reasonable:
#define getmax(a,b) (a > b ? a : b)
...but what happens if I call it as such?:
int a = 5;
int b = 2;
int c = getmax(++a,b); // expands to (++a > b ? ++a : b); a is incremented twice, so c equals 7.
No, that is not a typo. c will be equal to 7. If you don't believe me, try it. That alone should be enough to scare you.
The preprocessor is inherently global
Whenever you use a #define to define a function (such as stop()), it acts across ALL files included after it is defined.
What this means is that you can actually change the behavior of libraries that you did not write: as long as they use a function named stop(), you could change the behavior of code you didn't write and didn't modify.
Debugging is more difficult.
The preprocessor does symbolic replacement before the code ever makes it to the compiler. Thus if you have the following code:
#define NUM_CUSTOMERS 10
#define PRICE_PER_CUSTOMER 1.10
...
double something = NUM_CUSTOMERS * PRICE_PER_CUSTOMER;
if there is an error on that line, then you will NOT see the convenient variable names in the error message, but rather will see something like this:
double something = 10 * 1.10;
So that makes it more difficult to find things in code. In this example, it doesn't seem that bad, but if you really get into the habit of doing it, then you can run into some real headaches.