How does #define work in C++ [duplicate] - c++

This question already has answers here:
Macro Expansion
(7 answers)
The need for parentheses in macros in C [duplicate]
(8 answers)
Closed 4 years ago.
#include <iostream>
using namespace std;
#define squareOf(x) x*x
int main() {
// your code goes here
int x;
cout<<squareOf(x+4);
return 0;
}
I thought the answer would come out as 16, but it came out as 4.
I am confused about how this works.

16 would never be the result here. Say you had initialized x with 0: then squareOf(x+4) is replaced by x+4*x+4, which evaluates as 0+4*0+4 = 4.
Preprocessor macros replace source code, they are not functions.
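For example, after preprocessing, the main() from the question literally becomes this (a sketch of the expansion; note that reading the uninitialized x is undefined behaviour in its own right):
int main() {
    // your code goes here
    int x;
    cout << x+4*x+4;   // squareOf(x+4) replaced textually; no parentheses are added
    return 0;
}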
You might now think that maybe
#define squareOf(x) (x)*(x)
would be better, but consider that then
int x = 2;
int y = squareOf(x++);
would expand to y = (x++)*(x++), which is certainly not 4; in fact it is undefined behaviour, because x is modified twice without sequencing.
If you do not have a really good reason, avoid preprocessor macros. There are good reasons, but if something behaves like a function, better make it a function.
Now take a look at this:
template <class T>
inline T squareOf(const T& number)
{
return number*number;
}
Being inline, it may also be expanded in place (at least if the compiler chooses to), but this one actually behaves like a function, since it is one. You wouldn't expect a bad outcome from that one.
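For example (a quick sketch reusing the template above):
#include <iostream>
using namespace std;

template <class T>
inline T squareOf(const T& number)
{
    return number * number;
}

int main() {
    int x = 0;
    cout << squareOf(x + 4) << endl;  // prints 16, as originally expected

    int y = 2;
    cout << squareOf(y++) << endl;    // prints 4; y is incremented exactly once
    return 0;
}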


Is it possible to make compile-time (macro) branching based on an assert condition? [duplicate]

This question already has answers here:
How can I use "sizeof" in a preprocessor macro?
(13 answers)
C++ Getting the size of a type in a macro conditional
(7 answers)
Does the sizeof operator work in preprocessor #if directives?
(4 answers)
sizeof in preprocessor command doesn't compile with error C1017
(4 answers)
sizeof() is not executed by preprocessor
(9 answers)
Closed 4 months ago.
For example, I want something similar in meaning to this:
//Somewhere in a Windows header
struct COUPLE {
WORD part0;
WORD part1;
};
//In my code
#if sizeof(COUPLE) == sizeof(INT)
#define COUPLE_TO_INT(arg) (*((INT*)((void*)&arg)))
#else
inline INT COUPLE_TO_INT(const COUPLE &arg) {
return ((INT)arg.part1 << 16) + arg.part0;
}
#endif
Of course, the code from the example is not compiled.
And, of course, I could just always use the INT COUPLE_TO_INT(const COUPLE &arg) function, but as I noticed, in most cases the shifting and summation are not needed and I can get by with a reinterpret_cast, which is cheaper. However, there may be situations where padding breaks this mechanism, so a fallback path is required.
It is clear that I cannot influence the alignment of the structures from the header in any way, but I can find out their size and act on this.
Is it possible to branch a macro based on a C++ assert or something of the same kind?
You could use constexpr if in a normal function, e.g.
int coupleToInt(const COUPLE& c) {
if constexpr (sizeof(COUPLE) == sizeof(int)) {
// ...
}
else {
// ...
}
}
This feature is available since C++17 and you tagged the question as such. The condition is evaluated at compile time.
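A fuller sketch of that approach, using standard fixed-width types in place of the Windows WORD/INT typedefs (and memcpy instead of the pointer cast, to stay clear of strict-aliasing issues):
#include <cstdint>
#include <cstring>

struct COUPLE {
    std::uint16_t part0;
    std::uint16_t part1;
};

inline std::int32_t coupleToInt(const COUPLE& c) {
    if constexpr (sizeof(COUPLE) == sizeof(std::int32_t)) {
        // Sizes match: copy the object representation directly.
        std::int32_t result;
        std::memcpy(&result, &c, sizeof(result));
        return result;
    } else {
        // Padding got in the way: assemble the value from the two halves.
        return (static_cast<std::int32_t>(c.part1) << 16) + c.part0;
    }
}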

do{}while(0) vs an empty statement [duplicate]

This question already has answers here:
do { ... } while (0) — what is it good for? [duplicate]
(5 answers)
Why use apparently meaningless do-while and if-else statements in macros?
(9 answers)
Closed 4 years ago.
Trying to make a logging system that only logs data when a certain macro is defined, I've done something like this:
#ifdef _DEBUG
#define foo(a) std::cout << a << std::endl
#else
#define foo(a)
#endif
int main()
{
foo("Hello!");
return 0;
}
The function main, after pre-processing, expands to:
int main()
{
;
return 0;
}
However, in some places I saw that people use do{}while(0) instead of an empty macro. I suppose a compiler would optimize both of these away, but I'm wondering whether one has an advantage over the other.
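For reference, the do{}while(0) variant I've seen looks something like this:
#ifdef _DEBUG
#define foo(a) do { std::cout << a << std::endl; } while (0)
#else
#define foo(a) do { } while (0)
#endif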
I am aware of the need for both an empty statement and do{}while(0) but what I do not know is the difference between the two.
I don't believe my question was fully read and compared to the ones that have been provided when marking as duplicate.

Why is there a size_t defined in the global scope as well as in namespace std? [duplicate]

This question already has answers here:
Does "std::size_t" make sense in C++?
(8 answers)
Closed 5 years ago.
I've noticed that my C++ programs compile fine whether I use ::size_t or std::size_t. I can use them interchangeably with no issues at all, so it seems like one of them is a typedef for the other.
As an example, consider the following code which uses the global size_t (this is the whole file, no usings and other stuff):
#include <iostream>
int main() {
::size_t x = 100;
std::cout << x << std::endl;
}
The next code uses the size_t in std:
#include <iostream>
int main() {
std::size_t x = 100;
std::cout << x << std::endl;
}
Both compile fine and output 100 as expected.
I was under the impression that everything in the standard library is put in namespace std, but clearly this isn't the case. Why is this so?
Note: the same goes for ptrdiff_t, intN_t and uintN_t too.
According to what I've understood, ::size_t and std::size_t name the same type. The C++ header <cstddef> is required to declare std::size_t, and implementations may (and in practice usually do) also declare it in the global namespace, which is why both spellings compile.
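You can check this yourself (a small sketch; it assumes the implementation also declares size_t in the global namespace, which is the common case):
#include <cstddef>
#include <type_traits>

// If this compiles, both spellings name one and the same type.
static_assert(std::is_same<::size_t, std::size_t>::value,
              "::size_t and std::size_t are the same type");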
There's a much better answer here: link
Hope this helps!

C++: why does my template expansion lead to a compiler stack overflow? [duplicate]

This question already has answers here:
C++11 constexpr function compiler error with ternary conditional operator (?:)
(2 answers)
Closed 6 years ago.
I was trying template metaprogramming and writing a function to calculate powers, base^exp, like 3^2 = 9:
template<int N>
int Tpow(int base){return N==0?1:base*Tpow<N-1>(base);}
int main()
{
int r3=Tpow<3>(2);
return 0;
}
Just a few lines, but it crashes both gcc and clang. Where did I go wrong?
Thanks.
Solution: You have to specialize your template for N equal 0. Like:
template<>
int Tpow<0>(int base){return 1;}
Now that you have this, you could also optimize your original template like so:
template<int N>
int Tpow(int base){return base*Tpow<N-1>(base);}
because the case of N equal 0 is already handled by the specialization.
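Putting the two pieces together, a minimal complete program would be (a sketch):
#include <iostream>

// General case: recurse on N-1.
template<int N>
int Tpow(int base) { return base * Tpow<N - 1>(base); }

// Specialization that terminates the recursion.
template<>
int Tpow<0>(int) { return 1; }

int main()
{
    std::cout << Tpow<3>(2) << std::endl;  // prints 8
    return 0;
}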
Explanation: Your compiler is basically doing this: It sees
int r3=Tpow<3>(2);
and makes a function with 3 as the template argument, like so
int Tpow_3(int base){return 3==0?1:base*Tpow<3-1>(base);}
and then it needs to make a function with 2 as the template argument, like so
int Tpow_2(int base){return 2==0?1:base*Tpow<2-1>(base);}
and this goes on and on and on, because the compiler doesn't care that 0==0 ? ... would pick the first branch at run time.
The compiler must compile the entire function body: it cannot rely on the ternary conditional to instantiate only one side, so nothing ever stops the recursion.
(Using C++11's constexpr will not help either.)
To solve this, you need to specialise the function for the N = 0 case.

Why is #define bad? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
When are C++ macros beneficial?
Why is #define bad and what is the proper substitute?
Someone has told me that #define is bad. Well, I honestly don't understand why it's bad. If it's bad, then what other way can I do this?
#include <iostream>
#define stop() cin.ignore(numeric_limits<streamsize>::max(), '\n');
#define is not inherently bad. However, there are usually better ways of doing what you want. Consider an inline function:
inline void stop() {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
(Really, you don't even need inline for a function like that. Just a plain ordinary function would work just fine.)
It's bad because it's indiscriminate. Anywhere stop() appears in your code, it will get replaced.
The way you solve it is by putting that code into its own function.
In C++, using #define is not necessarily bad, although alternatives should be preferred. There are some contexts, such as include guards, in which there is no other portable/standard alternative.
It should be avoided because the C preprocessor operates (as the name suggests) before the compiler. It performs simple textual replacement, without regard to other definitions. This means the input that reaches the compiler sometimes doesn't make sense. Consider:
// in some header file.
#define FOO 5
// in some source file.
int main ()
{
// pre-compiles to: "int 5 = 2;"
// the compiler will vomit a weird compiler error.
int FOO = 2;
}
This example may seem trivial, but real examples exist. Some Windows SDK headers define:
#define min(a,b) ((a<b)?(a):(b))
And then code like:
#include <Windows.h>
#include <algorithm>
int main ()
{
// pre-compiles to: "int i = std::((1<2)?(1):(2));"
// the compiler will vomit a weird compiler error.
int i = std::min(1, 2);
}
When there are alternatives, use them. In the posted example, you can easily write:
void stop() {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
For constants, use real C++ constants:
// instead of
#define FOO 5
// prefer
static const int FOO = 5;
This will guarantee that your compiler sees the same thing you do, and it gives you the expected name hiding in nested scopes (a local FOO variable will hide the meaning of the global FOO).
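For instance (a small sketch of that shadowing point):
static const int FOO = 5;

int f() {
    int FOO = 7;   // fine: the local FOO hides the global constant
    return FOO;    // returns 7
}
// With "#define FOO 5" instead, the local declaration would preprocess
// to "int 5 = 7;" and fail to compile.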
It's not necessarily bad, it's just that most things people have used it for in the past can be done in a much better way.
For example, that snippet you provide (and other code macros) could be an inline function, something like (untested):
static inline void stop (void) {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
In addition, there are all the other things that macros force you into "macro gymnastics" for, such as when you want to call the very badly written:
#define f(x) x * x * x + x
with:
int y = f (a + 1); // a + 1 * a + 1 * a + 1 + a + 1 (i.e. 4a+2, not (a+1)^3 + (a+1))
int z = f (a++); // a++ * a++ * a++ + a++
The first of those will totally surprise you with its results due to the precedence of operators, and the second will give you undefined behaviour. Inline functions do not suffer these problems.
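A sketch of what the inline replacement looks like; the argument is evaluated exactly once, so both calls behave predictably:
inline int f(int x) { return x * x * x + x; }

void demo(int a) {
    int y = f(a + 1);  // really (a+1)^3 + (a+1)
    int z = f(a++);    // well-defined: a is incremented exactly once
    (void)y; (void)z;
}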
The other major thing that macros are used for is for providing enumerated values such as:
#define ERR_OK 0
#define ERR_ARG 1
: :
#define ERR_MEM 99
and these are better done with enumerations.
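For example (a sketch using the names above):
enum ErrorCode {
    ERR_OK  = 0,
    ERR_ARG = 1,
    // ...
    ERR_MEM = 99
};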
The main problem with macros is that the substitution is done early in the translation phase, and information is often lost because of this. For example, a debugger generally doesn't know about ERR_ARG since it would have been substituted long before the part of the translation process that creates debugging information.
But, having maligned them enough, they're still useful for defining simple symbols used for conditional compilation. That's pretty much all I use them for in C++ nowadays.
#define by itself is not bad, but it does have some bad properties to it. I'll list a few things that I know of:
"Functions" do not act as expected.
The following code seems reasonable:
#define getmax(a,b) (a > b ? a : b)
...but what happens if I call it as such?:
int a = 5;
int b = 2;
int c = getmax(++a,b); // c equals 7.
No, that is not a typo. c will be equal to 7. If you don't believe me, try it. That alone should be enough to scare you.
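A function template avoids this, because each argument is evaluated exactly once (a sketch):
template <class T>
inline T getmax(const T& a, const T& b) { return a > b ? a : b; }

// int a = 5, b = 2;
// int c = getmax(++a, b);  // a is incremented once; c equals 6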
The preprocessor is inherently global
Whenever you use a #define to define a function-like macro (such as stop()), it applies across ALL included files from the point where it is seen.
What this means is that you can actually change the behavior of libraries that you did not write: as long as they use a function called stop() in a header file, you could change the behavior of code you didn't write and didn't modify.
Debugging is more difficult.
The preprocessor does symbolic replacement before the code ever makes it to the compiler. Thus if you have the following code:
#define NUM_CUSTOMERS 10
#define PRICE_PER_CUSTOMER 1.10
...
double something = NUM_CUSTOMERS * PRICE_PER_CUSTOMER;
if there is an error on that line, then you will NOT see the convenient variable names in the error message, but rather will see something like this:
double something = 10 * 1.10;
So that makes it more difficult to find things in code. In this example, it doesn't seem that bad, but if you really get into the habit of doing it, then you can run into some real headaches.
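The fix here is the same as for FOO above: real constants keep their names all the way to the compiler and the debugger (a sketch):
const int    NUM_CUSTOMERS      = 10;
const double PRICE_PER_CUSTOMER = 1.10;

double something = NUM_CUSTOMERS * PRICE_PER_CUSTOMER;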