I have started reading Effective C++ and at some point in item 2, the following is mentioned:
// call f with the maximum of a and b
#define CALL_WITH_MAX(a, b) f((a) > (b) ? (a) : (b))
...
int a = 5, b = 0;
CALL_WITH_MAX(++a, b); // a is incremented twice
CALL_WITH_MAX(++a, b+10); // a is incremented once
Here, the number of times that a is incremented before calling f
depends on what it is being compared with!
Indeed, if I use a simple print statement in f, 7 gets printed in the first call, but I cannot for the life of me figure out why. Am I missing something obvious?
The preprocessor replaces each macro parameter with exactly the tokens you pass in, verbatim. So you end up with
int a = 5, b = 0;
f((++a) > (b) ? (++a) : (b));
f((++a) > (b+10) ? (++a) : (b+10));
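To see it end to end, here is a minimal complete program (a sketch; it assumes f simply prints its argument, which the original excerpt doesn't show):
#include <cstdio>
void f(int x) { std::printf("%d\n", x); }
#define CALL_WITH_MAX(a, b) f((a) > (b) ? (a) : (b))
int main()
{
    int a = 5, b = 0;
    CALL_WITH_MAX(++a, b); // expands to f((++a) > (b) ? (++a) : (b)); prints 7
}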
Use g++ -E myprog.cpp (substitute your compiler's name if you are not using g++). Almost all compilers have such a switch (-E for GCC and Clang, /E for MSVC); it prints the code exactly as it looks after preprocessing.
And this is a great example of why you shouldn't use macros to do a function's job.
You'd get much more of what you (probably) expect if you were to use an inline function:
inline void CallWithMax(int a, int b)
{
    f(a > b ? a : b);
}
Any decent compiler should be able to do this AT LEAST as efficiently as the macro, with the added advantage that a and b are each evaluated exactly once in the calling code, so nothing "weird" happens.
You can also step through an inline function if you build your code with debug symbols, so if you want to see what values a and b actually have inside the function, you can do that. Macros expand in place in the source code, so you can't really see what's going on inside them.
Related
Why does this macro give the output 144 instead of 121?
#include<iostream>
#define SQR(x) x*x
int main()
{
    int p=10;
    std::cout<<SQR(++p);
}
This is a pitfall of preprocessor macros. The problem is that the expression ++p is used twice, because the preprocessor simply replaces the macro "call" with the body pretty much verbatim.
So what the compiler sees after the macro expansion is
std::cout<<++p*++p;
Depending on the macro, you can also get problems with operator precedence if you are not careful with placing parentheses where needed.
Take for example a macro such as
// Macro to shift `a` by `b` bits
#define SHIFT(a, b) a << b
...
std::cout << SHIFT(1, 4);
This would result in the code
std::cout << 1 << 4;
which may not be what was wanted or expected.
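If you must keep a macro, the usual defence is to parenthesize every parameter and the whole body; a sketch of the corrected version:
// Macro to shift `a` by `b` bits, fully parenthesized
#define SHIFT(a, b) ((a) << (b))
std::cout << SHIFT(1, 4); // expands to std::cout << ((1) << (4)); prints 16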
If you want to avoid these pitfalls altogether, use an inline function instead:
inline int sqr(const int x)
{
    return x * x;
}
This has two things going for it: first, the expression ++p is evaluated only once. Second, the function only accepts values convertible to int. With the preprocessor macro you could "call" it as SQR("A") and the preprocessor would not care, leaving you to puzzle over (sometimes cryptic) errors from the compiler about the expanded code.
Also, since the function is marked inline, the compiler may skip the actual function call completely and put the (correct) x*x directly in place of the call, making it as "optimized" as the macro expansion.
The approach of squaring with this macro has two problems:
First, for the argument ++p, the increment operation is performed twice. That's certainly not intended. (As a general rule of thumb, don't do several things in "one line"; separate them into more statements.) And it's worse than just incrementing twice: modifying p twice in one expression without an intervening sequence point is undefined behaviour, so there is no guaranteed outcome of this operation at all!
Second, even if you don't have ++p as the argument, there is still a bug in your macro! Consider the input 1 + 1. Expected output is 4. 1+1 has no side-effect, so it should be fine, shouldn't it? No, because SQR(1 + 1) translates to 1 + 1 * 1 + 1 which evaluates to 3.
To at least partially fix this macro, use parentheses:
#define SQR(x) (x) * (x)
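Even then, the body needs an outer pair of parentheses as well, or surrounding operators can still split the expansion apart. A sketch (SQR_FULL is a made-up name for the fully fixed version):
#define SQR_FULL(x) ((x) * (x))
int a = 100 / SQR(5);      // expands to 100 / (5) * (5), which is 100, not 4
int b = 100 / SQR_FULL(5); // expands to 100 / ((5) * (5)), which is 4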
Altogether, you should simply replace it with a function (which also adds type safety!):
int sqr(int x)
{
    return x * x;
}
You can also make it a template:
template <typename Type>
Type sqr(Type x)
{
    return x * x; // will only work on types for which there is the * operator.
}
and you may add constexpr (C++11), which is useful if you ever need the square in a constant expression, such as a template argument:
constexpr int sqr(int x)
{
    return x * x;
}
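For instance, the constexpr version can be used where the language demands a compile-time constant (a small sketch):
int board[sqr(8)];                                   // OK: array size computed at compile time
static_assert(sqr(12) == 144, "compile-time check"); // C++11 static_assert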
Because you are using undefined behaviour. When the macro is expanded, your code turns into this:
std::cout<<++p*++p;
Using increment/decrement operators on the same variable multiple times in the same expression, without an intervening sequence point, is undefined behaviour.
Because SQR(++p) expands to ++p*++p, which has undefined behavior in C/C++. In this case it incremented p twice before evaluating the multiplication. But you can't rely on that. It might be 121 (or even 42) with a different C/C++ compiler.
I have encountered this claim multiple times and can't figure out what it is supposed to mean. Since the resulting code is compiled by a regular C compiler, it will end up being type-checked just as much (or as little) as any other code.
So why are macros not type safe? It seems to be one of the major reasons why they should be considered evil.
Consider the typical "max" macro, versus function:
#define MAX(a,b) a > b ? a : b
int max(int a, int b) {return a > b ? a : b;}
Here's what people mean when they say the macro is not type-safe in the way the function is:
If a caller of the function writes
char *foo = max("abc","def");
the compiler will complain, because you can't pass a char * where an int is expected.
Whereas, if a caller of the macro writes:
char *foo = MAX("abc", "def");
the preprocessor will replace that with:
char *foo = "abc" > "def" ? "abc" : "def";
which will compile with no problems, but almost certainly not give the result you wanted.
Additionally of course the side effects are different, consider the function case:
int x = 1, y = 2;
int a = max(x++,y++);
the max() function will operate on the original values of x and y; the post-increments take effect as the arguments are evaluated, before the call, so each variable is incremented exactly once.
In the macro case:
int x = 1, y = 2;
int b = MAX(x++,y++);
that second line is preprocessed to give:
int b = x++ > y++ ? x++ : y++;
Again, there are no compiler warnings or errors, but the behaviour is not what you expected: with x = 1 and y = 2, this leaves b == 3, x == 2 and y == 4, because y is incremented twice.
Macros aren't type safe because they don't understand types.
You can't tell a macro to only take integers. The preprocessor recognises a macro usage and it replaces one sequence of tokens (the macro with its arguments) with another set of tokens. This is a powerful facility if used correctly, but it's easy to use incorrectly.
With a function, you can define it as void f(int, int) and the compiler will flag an error if you try to use the return value of f or pass it strings.
With a macro there is no chance. The only check that gets made is that it receives the correct number of arguments. Then it replaces the tokens appropriately and passes the result on to the compiler.
#define F(A, B)
will allow you to call F(1, 2), or F("A", 2) or F(1, (2, 3, 4)) or ...
You might get an error from the compiler, or you might not, if something within the macro requires some sort of type safety. But that's not down to the preprocessor.
You can get some very odd results when passing strings to macros that expect numbers, as the chances are you'll end up using string addresses as numbers without a squeak from the compiler.
Well, they're not directly type-safe... I suppose in certain scenarios/usages you could argue that the resulting code is indirectly type-safe. But you can certainly create a macro intended for integers and pass it strings; the preprocessor handling the macros certainly doesn't care. The compiler may choke on it, depending on the usage...
Since macros are handled by the preprocessor, and the preprocessor doesn't understand types, it will happily accept variables that are of the wrong type.
This is usually only a concern for function-like macros, and any type errors will often be caught by the compiler even if the preprocessor doesn't, but this isn't guaranteed.
An example
In the Windows API, if you wanted to show a balloon tip on an edit control, you'd use Edit_ShowBalloonTip. Edit_ShowBalloonTip is defined as taking two parameters: the handle to the edit control and a pointer to an EDITBALLOONTIP structure. However, Edit_ShowBalloonTip(hwnd, peditballoontip); is actually a macro that evaluates to
SendMessage(hwnd, EM_SHOWBALLOONTIP, 0, (LPARAM)(peditballoontip));
Since configuring controls is generally done by sending messages to them, Edit_ShowBalloonTip has to do a typecast in its implementation, but since it's a macro rather than an inline function, it can't do any type checking on its peditballoontip parameter.
A digression
Interestingly enough, sometimes C++ inline functions are a bit too type-safe. Consider the classic C MAX macro
#define MAX(a, b) ((a) > (b) ? (a) : (b))
and its C++ inline version
template<typename T>
inline T max(T a, T b) { return a > b ? a : b; }
MAX(1, 2u) will work as expected, but max(1, 2u) will not. (Since 1 and 2u are different types, max can't be instantiated on both of them.)
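If you do need that call to go through, one option is to fix the template argument explicitly so that both arguments convert to one type:
max<unsigned>(1, 2u); // T is forced to unsigned; the int argument converts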
This isn't really an argument for using macros in most cases (they're still evil), but it's an interesting result of C and C++'s type safety.
There are situations where macros are even less type-safe than functions. E.g.
void printlog(int iter, double obj)
{
    printf("%.3f at iteration %d\n", obj, iter);
}
Calling this with the arguments reversed will cause truncation and erroneous results, but nothing dangerous. By contrast,
#define PRINTLOG(iter, obj) printf("%.3f at iteration %d\n", obj, iter)
causes undefined behavior. To be fair, GCC warns about the latter, but not about the former, but that's because it knows printf -- for other varargs functions, the results are potentially disastrous.
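To make the failure mode concrete, here is what a swapped macro call expands to (a sketch):
PRINTLOG(2.5, 10); // expands to printf("%.3f at iteration %d\n", 10, 2.5);
                   // %.3f receives an int and %d receives a double: undefined behaviour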
When a macro is expanded, the preprocessor simply substitutes tokens in your source files. This happens before any compilation, so the preprocessor is not aware of the data types of anything it changes.
Macros aren't type safe, because they were never meant to be type safe.
The compiler does its type checking after macros have been expanded.
Macros and their expansion are meant as a helper for the ("lazy") author (in the sense of writer/reader) of C source code. That's all.
I have defined the following macro,
#define abss(a) a >= 0 ? a : -a
while invoking this with,
int b=-1;
int c = abss(b);
printf("%d\n",c);
I expected it to be replaced in the form b >= 0 ? b : --b, which should output -2, but it outputs 1 in my Bloodshed/Dev-C++ compiler.
I am analyzing the C language for examination purposes, so I have to know what actually happens in the above case in C. Is the output of 1 specific to the compiler I am using, or what?
Your code
int b=-1;
int c = abss(b);
printf("%d\n",c);
gets translated by the preprocessor to:
int b=-1;
int c = b >= 0 ? b : -b;
printf("%d\n",c);
(int b=-1 is handled by the compiler, whereas abss(b) is expanded by the preprocessor, which runs first and knows nothing about the value of b)
Why on earth do you think it should output -2?
#define abss(a) a >= 0 ? a : -a
so this:
int c = abss(b)
becomes
int c = b >= 0 ? b : -b
and b is -1, so that will evaluate to 1
By the way, you should bracket every use of a macro parameter, as you may get passed such things as x + 1, which would otherwise evaluate rather strangely.
Not to mention the results of abss(++x)
Your macro expands to
b >= 0 ? b : -b;
Not to b >= 0 ? b : --(-b); or whatever it is you expected.
Are you suggesting that
int x = -4;
-x;
should decrement x?
Bottom line - avoid macro use when possible.
The macro processor will never attach two adjacent tokens together, even if there is no whitespace between them, so indeed, as other posters have mentioned, your macro expands to the two tokens - and b, which evaluate to 1.
If you want two tokens to become one when expanded by the preprocessor, you have to use the token pasting operator ##, something like
#define abss(a) a >= 0 ? a : - ## a
this will indeed do what you want, in the sense that the negative sign and the variable will form one token. Unfortunately (or fortunately, depending on how you look at it!) the pasted token "-b" will be invalid, and the expansion will not compile.
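For contrast, here is a sketch of a case where ## does form a valid token, which is its typical use (the names are invented for illustration):
#define DECLARE_FLAG(name) int flag_ ## name = 0
DECLARE_FLAG(verbose); // expands to: int flag_verbose = 0;
flag_verbose = 1;      // the pasted identifier is an ordinary variable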
#define neg(a) -a
int b = -1;
neg(b);
In the above case, the macro is expanded to -b by the preprocessor, which will result in the value of 1. The preprocessor won't even be aware that the sign of the content of b is negative, so where should the second - come from to form a prefix decrement operator?
Even if you wrote it like this:
neg( -b );
Macro replacement is done on token level, not as a textual search & replace. You would get the equivalent of - ( -b ), but not --b.
Edit: The manual you link to in other comments is outdated (does not even address the C99 standard), and is dangerously bad. Just skimming through it I found half a dozen of statements that will make you look real stupid if you assume them to be correct. Don't use it, other than to light a fire.
This macro
#define abss(a) a >= 0 ? a : -a
is indeed hazardous, but the hazards are not where you think they are. The expression from your post
int c = abss(b);
works perfectly fine. However, consider what happens when the expression is not a simple variable, but is a function that is hard to calculate, or when it is an expression with side effects. For example
int c = abss(long_calc(arg1, arg2));
or
int c = abss(ask_user("Enter a value"));
or
int c = abss(b++);
If abss were a function, all three invocations would produce correct results in predictable time. With the macro, however, the first call invokes long_calc twice, the second prompts the user twice (what if the user enters a different value when asked again?), and the third post-increments b twice.
Moreover, consider this seemingly simple expression:
int c = abss(b) + 123;
Since + has a higher precedence than ? :, the resulting expansion will look like this:
int c = b >= 0 ? b : -b + 123;
When b is non-negative, 123 will not be added, so the meaning of the expression will change dramatically!
This last shortcoming can be addressed by enclosing the expression in parentheses. You should also enclose in parentheses each macro argument, like this:
#define abss(a) ((a) >= 0 ? (a) : -(a))
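With that version, the precedence trap above disappears:
int c = abss(b) + 123; // expands to ((b) >= 0 ? (b) : -(b)) + 123; 123 is always added
Note that parentheses still cannot fix the double evaluation of arguments with side effects, such as abss(b++).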
The reason you get this behaviour is that the preprocessor simply replaces a with b, so effectively your code after preprocessing is
b >= 0 ? b : -b
and not
b >= 0 ? b : --b
So, if b = -1, the expression effectively evaluates -(-1), and thus becomes 1.
You can check the preprocessor output and see it for yourself.
Extending this question, I wanted to use my enum'd values as they're "supposed" to be used:
#include <stdio.h>
enum E{ A, B, C } ;
#define inc(enVal) (*((int*)&enVal))++
int main()
{
    E t = A ;
    inc( t ) ;
    printf( "t %d\n", t ) ;
}
Now uh, t is a variable of enum'd type E, and I have a macro inc that increases the value of t by 1,
So is this macro (and presumably other macros like it for flag checking) going to be that much less efficient than just using int t instead?
No, it's not going to be less efficient. It will, however, be incredibly, hideously, wrong. Please, don't ever.
Oh, especially since the underlying type of an enum is implementation-defined, and it might well actually be compiled to less than the size of an int on some compilers.
Come on, it's ok to overload for enums:
E& operator ++ (E& x)
{
    x = E((int)x + 1);
    return x;
}
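Usage then looks like ordinary code (a quick sketch building on the overload above):
E t = A;
++t;                 // calls the overloaded operator++; t is now B
printf("t %d\n", t); // prints: t 1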
I'm pretty sure this violates the strict-aliasing rules in the standard, and on top of that it won't work right in C at all. What are you really trying to do? Does it actually make SENSE to increment the value?
Say you're trying to implement a state machine; it is much better to just have a vector/array lookup table and use that to move to a new state.
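A sketch of that table-driven idea (the cyclic transition table A -> B -> C -> A is made up for illustration):
static const E next_state[] = { B, C, A };
E t = A;
t = next_state[t]; // t is now B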
Are you sure you shouldn't just be using an int instead, if you want to be able to assume that the enumerated values are consecutive?
I have this code, which does not compile, giving the error "fatal error C1017: invalid integer constant expression" in Visual Studio. How would I do this?
template <class B>
A *Create()
{
#if sizeof(B) > sizeof(A)
#error sizeof(B) > sizeof(A)!
#endif
...
}
The preprocessor does not understand sizeof() (or data types, or identifiers, or templates, or class definitions, and it would need to understand all of those things to implement sizeof).
What you're looking for is a static assertion (enforced by the compiler, which does understand all of these things). I use Boost.StaticAssert for this:
template <class B>
A *Create()
{
    BOOST_STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
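If you can use C++11 or later, the language's built-in static_assert does the same job without Boost (a minimal sketch):
template <class B>
A *Create()
{
    static_assert(sizeof(B) <= sizeof(A), "B must not be larger than A");
    ...
}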
Preprocessor expressions are evaluated before the compiler starts compilation. sizeof() is only evaluated by the compiler.
You can't do this with the preprocessor. Preprocessor directives cannot operate on language-level elements such as sizeof. Moreover, even if they could, it still wouldn't work, since preprocessor directives are eliminated from the code very early; they can't be expected to work as part of template code instantiated later (which is what you seem to be trying to achieve).
The proper way to go about it is to use some form of static assertion
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
There are quite a few implementations of static assertions out there. Do a search and choose one that looks best to you.
sizeof() cannot be used in a preprocessor directive.
The preprocessor runs before the compiler (at least logically it does) and has no knowledge of user-defined types (and not necessarily much knowledge about intrinsic types; the preprocessor's idea of an int's size could differ from that of the compiler's target).
Anyway, to do what you want, you should use a STATIC_ASSERT(). See the following answer:
Ways to ASSERT expressions at build time in C
With a STATIC_ASSERT() you'll be able to do this:
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(A) >= sizeof(B));
    return 0;
}
This cannot be accomplished with the pre-processor. The pre-processor executes in a pass prior to the compiler, therefore the sizes of B and A have not yet been computed at the time #if is evaluated.
You could accomplish something similar using template-programming techniques. An excellent book on the subject is Modern C++ Design: Generic Programming and Design Patterns Applied, by Andrei Alexandrescu.
Here is an example from a web page which creates a template IF statement.
From that example, you could use:
IF< (sizeof(B) > sizeof(A)), non_existing_type, int >::RET i;
which declares a variable of type non_existing_type when B is larger than A, and of type int otherwise. Assuming the non-existing type lives up to its name, a compiler error will result exactly when the template IF condition evaluates as true. You can rename i to something descriptive.
Using this would be "rolling your own" static assert, of which many are already available. I suggest you use one of those after playing around with building one yourself.
If you are interested in a compile time assert that will work for both C and C++, here is one I developed:
#define CONCAT2(x, y) x ## y
#define CONCAT(x, y) CONCAT2(x, y)
#define COMPILE_ASSERT(expr, name) \
struct CONCAT(name, __LINE__) { char dummy[ (expr) ? 1 : -1 ]; }
#define CT_ASSERT(expr) COMPILE_ASSERT(expr, ct_assert_)
The key to how this works is that the size of the array is negative (which is illegal) when the expression is false. By further wrapping that in a structure definition, this does not create anything at runtime.
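Usage looks like this (the second line, if uncommented, would fail to compile):
CT_ASSERT(sizeof(int) >= 2);    // fine: the array size is 1
// CT_ASSERT(sizeof(int) == 1); // error: the array size would be -1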
This has already been explained, but allow me to elaborate on why the preprocessor can not compute the size of a structure. Aside from the fact that this is too much to ask of a simple preprocessor, there are also compiler flags that affect the way the structure is laid out.
struct X {
    short a;
    long b;
};
this structure might be 6 bytes or 8 bytes long, depending on whether the compiler was told to 32-bit align the "b" field, for performance reasons. There's no way the preprocessor could have that information.
Using MSVC, this code compiles for me:
const int cPointerSize = sizeof(void*);
const int cFourBytes = 4;
#if (cPointerSize == cFourBytes)
...
but it does not do what it appears to: in a #if, any identifier that is not a macro is replaced by 0, so the test is really 0 == 0 and is always true. And this, which is what one would actually want, does not compile at all:
#if ( sizeof(void*) == 4 )
...
I see many people say that sizeof cannot be used in a pre-processor directive; however, that can't be the whole story, because I regularly use the following macro:
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
for example:
#include <stdio.h>
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
int main(int argc, char *argv[])
{
    unsigned char chars[] = "hello world!";
    double dubls[] = {1, 2, 3, 4, 5};
    printf("chars num bytes: %zu, num elements: %zu.\n", sizeof(chars), STATICARRAYSIZE(chars));
    printf("dubls num bytes: %zu, num elements: %zu.\n", sizeof(dubls), STATICARRAYSIZE(dubls));
}
yields:
orion$ ./a.out
chars num bytes: 13, num elements: 13.
dubls num bytes: 40, num elements: 5.
However, I too cannot get sizeof() to compile in a #if statement under gcc 4.2.1. E.g., this doesn't compile:
#if (sizeof(int) == 2)
#error uh oh
#endif
Any insight would be appreciated.