I know that #define has the following syntax: #define SYMBOL string
If I write, for example
#define ALPHA 2-1
#define BETA ALPHA*2
then ALPHA = 1 but BETA = 0. (Why?)
But if I write something like this
#define ALPHA (2-1)
#define BETA ALPHA*2
then ALPHA = 1 and BETA = 2.
Can someone explain to me what the difference is between those two?
Pre-processor macros created using #define are textual substitutions.
The two examples are not equivalent. The first sets BETA to 2-1*2. The second sets BETA to (2-1)*2. It is not correct to claim that ALPHA == 1 as you do, because ALPHA is not a number - it is a free man! It's just a sequence of characters.
When parsed as C or C++, those two expressions are different (the first is the same as 2 - (1*2)).
We can show the difference, by printing the string expansion of BETA as well as evaluating it as an expression:
#ifdef PARENS
#define ALPHA (2-1)
#else
#define ALPHA 2-1
#endif
#define BETA ALPHA*2
#define str(x) str2(x)   /* expand the argument first... */
#define str2(x) #x       /* ...then turn it into a string literal */
#include <stdio.h>
int main()
{
    printf("%s = %d\n", str(BETA), BETA);
    return 0;
}
Compile the above with and without PARENS defined (for example by passing -DPARENS to the compiler) to see the difference:
(2-1)*2 = 2
2-1*2 = 0
The consequence of this is that when using #define to create macros that expand to expressions, it's generally a good idea to use many more parentheses than you would normally need, as you don't know the context in which your values will be expanded. For example:
#define ALPHA (2-1)
#define BETA ((ALPHA)*2)
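The outer parentheses matter just as much once BETA itself appears inside a larger expression. Below is a minimal sketch (the names BETA_BARE and BETA_SAFE are invented for illustration) showing how an unparenthesized expansion interacts with a surrounding division:
#include <stdio.h>

#define ALPHA      (2-1)
#define BETA_BARE  (ALPHA)*2     /* expansion not wrapped in parentheses */
#define BETA_SAFE  ((ALPHA)*2)   /* fully parenthesized */

int main(void)
{
    printf("%d\n", 10 / BETA_BARE);   /* 10 / (2-1)*2 parses as (10/1)*2 = 20 */
    printf("%d\n", 10 / BETA_SAFE);   /* 10 / ((2-1)*2) = 5 */
    return 0;
}
With BETA_SAFE, any surrounding operator sees the whole expansion as a single operand, which is exactly what the double set of parentheses above guarantees.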
Macros (#define ...) are only text replacement.
With this version:
#define ALPHA 2-1
#define BETA ALPHA*2
The preprocessor replaces BETA with ALPHA*2, then replaces ALPHA with 2-1. When the macro expansion is over, BETA has been replaced with 2-1*2, which is (due to operator precedence) equal to 2-(1*2) = 0.
When you add the parentheses around the "value of ALPHA" (ALPHA doesn't really have a value, since macros are just text replacement), you change the order of evaluation of the operations. Now BETA is replaced with (2-1)*2, which is equal to 2.
Order of operations. The first example becomes 2-1*2, which equals 2-2.
The second example, on the other hand, expands to (2-1)*2, which evaluates to 1*2 = 2.
In the first example:
#define ALPHA 2-1
#define BETA ALPHA*2
ALPHA is substituted directly with whatever text you gave it (in this case, 2-1).
This leads to BETA expanding into (becoming) 2-1*2, which evaluates to 0, as described above.
In the second example:
#define ALPHA (2-1)
#define BETA ALPHA*2
ALPHA (within the definition of BETA) expands to the text it was set to, (2-1), which then causes BETA to expand to (2-1)*2 wherever it is used.
In case you're having trouble with order of operations, you can always use the acronym PEMDAS (Parentheses, Exponents, Multiplication, Division, Addition, Subtraction), which can itself be remembered as "Please Excuse My Dear Aunt Sally". The earlier operations in the acronym are performed before the later ones, except that multiplication and division have equal priority and are simply evaluated left to right, and the same goes for addition and subtraction.
Macros in C/C++ are just text substitutions, not functions. The macro is there just to replace a macro name in the program text with its contents before the compiler even tries to analyze the code. So in the first case the compiler will see this:
BETA ==> ALPHA * 2 ==> 2 - 1 * 2 ==> compiler ==> 0
printf("beta=%d\n", BETA); ==> printf("beta=%d\n", 2 - 1 * 2);
In the second
BETA ==> ALPHA * 2 ==> (2 - 1) * 2 ==> compiler ==> 2
printf("beta=%d\n", BETA); ==> printf("beta=%d\n", (2 - 1) * 2);
Write a program using conditional compilation directives to round off the number 56 to the nearest fifty.
Expected Output: 50
Where is the mistake?
#include <iostream>
using namespace std;
#define R 50
int main()
{
    int Div;
    Div = R % 50;
    cout << "Div:: " << Div;
    printf("\n");
#if(Div<=24)
    {
        int Q;
        printf("Rounding down\n");
        Q = (int(R / 50)) * 50;
        printf("%d", Q);
    }
#else
    {
        int Q;
        printf("Rounding UP\n");
        Q = (int(R / 50) + 1) * 50;
        printf("%d", Q);
    }
#endif
}
The only sensible way of interpreting the homework task is to consider 56 as an example and to require a compile-time constant result for any given integer number. The only sensible way of using conditional compilation directives here is to also take negative numbers into account.
#ifndef NUMBER
#define NUMBER 56
#endif
#if NUMBER < 0
#define ROUNDED (NUMBER-25)/50*50
#else
#define ROUNDED (NUMBER+25)/50*50
#endif
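A minimal sketch of how this could be exercised (the main/printf scaffolding below is an addition for illustration, not part of the answer above):
#include <stdio.h>

#ifndef NUMBER
#define NUMBER 56
#endif

#if NUMBER < 0
#define ROUNDED (NUMBER-25)/50*50
#else
#define ROUNDED (NUMBER+25)/50*50
#endif

int main(void)
{
    /* (56+25)/50*50 = 81/50*50 = 1*50 = 50 */
    printf("%d rounds to %d\n", NUMBER, ROUNDED);
    return 0;
}
Compiling with a different NUMBER (for example -DNUMBER=-133 with gcc/clang) takes the negative branch and prints -150.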
Div is a variable and so cannot be evaluated in a preprocessing directive because preprocessing happens before your source code is compiled.
Since the preprocessor does not understand variables, it replaces the unrecognised token Div with 0 when evaluating the #if expression. This means your program appears to work. However, if you changed the value of R to, say, 99, you would see that in that case the code does not work.
This version without Div works in all cases.
#define R 50

#if R % 50 <= 24
    int Q;
    printf("Rounding down\n");
    Q = (int(R / 50)) * 50;
    printf("%d", Q);
#else
    int Q;
    printf("Rounding UP\n");
    Q = (int(R / 50) + 1) * 50;
    printf("%d", Q);
#endif
It's a really, really pointless task that you have been set. I feel embarrassed even attempting an answer.
I am working on a micro-controller and I want to implement a simple averaging filter on the resulting values to filter out noise (or, to be honest, to not let the values dance on the LCD!).
The ADC result is inserted into memory by DMA. I have (just for the sake of easy debugging) an array of size 8. To make life even easier I have written some defines to make my code editable with minimum effort:
#define FP_ID_POT_0 0 //identifier for POT_0
#define FP_ID_POT_1 1 //identifier for POT_1
#define FP_ANALOGS_BUFFER_SIZE 8 //buffer size for filtering ADC vals
#define FP_ANALOGS_COUNT 2 // we have now 2 analog axis
#define FP_FILTER_ELEMENT_COUNT FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT
//means that the DMA buffer will have 4 results for each ADC
So, the buffer has a size of 8 and its type is uint32_t, and I am reading 2 ADC channels. In the buffer I will have 4 results for channel A and 4 results for channel B (in a circular manner). A simple dump of this array looks like:
INDEX   0     1   2     3   4     5   6     7
CHNL    A     B   A     B   A     B   A     B
VALUE   4017  62  4032  67  4035  64  4029  63
It means that the DMA puts results for ChA and ChB always in a fixed place.
Now to calculate the average for each channel I have the function below:
uint32_t filter_pots(uint8_t which) {
    uint32_t sum = 0;
    uint8_t i = which;
    for ( ; i < FP_ANALOGS_BUFFER_SIZE; i += FP_ANALOGS_COUNT) {
        sum += adc_vals[i];
    }
    return sum / (uint32_t)FP_FILTER_ELEMENT_COUNT;
}
If I want to use the function for chA I pass 0 as the argument to the function. If I want chB I pass 1... and if I happened to have a chC I would pass 2, and so on. This way I can initiate the for loop to point to the element that I need.
The problem is, at the last step when I want to return the result, I do not get the correct value. The function returns 1007 when used for chA and 16 when used for chB. I am pretty sure that the sum is calculated OK (I can see it in the debugger). The problem, I believe, is in the division by a value defined using #define. Even casting it to uint32_t does not help. The sum is being calculated OK, but I cannot see what type or value FP_FILTER_ELEMENT_COUNT has been given by the compiler. Maybe it's an overflow problem of dividing uint32 by uint8?
#define FP_FILTER_ELEMENT_COUNT FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT
//means FP_FILTER_ELEMENT_COUNT will be 8 / 2 which results in 4
What causes this behaviour, and if there is no way that #define would work in my case, what other options do I have?
The compiler is IAR Embedded Workbench. The platform is an STM32F103.
For fewer surprises, always put parentheses around your macro definitions:
#define FP_FILTER_ELEMENT_COUNT (FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT)
This prevents oddball operator precedence issues and other unexpected syntax and logic errors from cropping up. In this case, you're returning sum/8/2 (i.e. sum/16) when you want to return sum/4.
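Spelled out with the questioner's return statement (the ==> notation just shows the macro expansion):
With the original macro:
return sum / (uint32_t)FP_FILTER_ELEMENT_COUNT; ==> return sum / (uint32_t)8 / 2;   // parsed as (sum / 8) / 2, i.e. sum / 16
With the parenthesized macro:
return sum / (uint32_t)FP_FILTER_ELEMENT_COUNT; ==> return sum / (uint32_t)(8 / 2); // sum / 4, as intended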
Parentheses will help, as @Russ said, but an even better solution is to use constants:
static const int FP_ID_POT_0 = 0; //identifier for POT_0
static const int FP_ID_POT_1 = 1; //identifier for POT_1
static const int FP_ANALOGS_BUFFER_SIZE = 8; //buffer size for filtering ADC vals
static const int FP_ANALOGS_COUNT = 2; // we have now 2 analog axis
static const int FP_FILTER_ELEMENT_COUNT = FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT;
In C++, all of these are compile-time integral constant expressions, and can be used as array bounds, case labels, template arguments, etc. But unlike macros, they respect namespaces, are type-safe, and act like real values, not text substitution.
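For illustration, here is a sketch of the questioner's filter rewritten against these constants; the adc_vals declaration is an assumption matching the layout described in the question:
#include <stdint.h>

static const int FP_ANALOGS_BUFFER_SIZE = 8;
static const int FP_ANALOGS_COUNT = 2;
static const int FP_FILTER_ELEMENT_COUNT = FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT;

static uint32_t adc_vals[FP_ANALOGS_BUFFER_SIZE];   // the constant works as an array bound

uint32_t filter_pots(uint8_t which)
{
    uint32_t sum = 0;
    for (uint8_t i = which; i < FP_ANALOGS_BUFFER_SIZE; i += FP_ANALOGS_COUNT)
        sum += adc_vals[i];
    return sum / FP_FILTER_ELEMENT_COUNT;            // divides by a real value of 4, no surprise re-parsing
}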
This is a C++ question about something that is confusing me. (I am refreshing my C++ after a long time.) I am reading this example here. There are two parts that confuse me:
The first part:
In the code line:
void namedWindow(const string& winname, int flags=WINDOW_AUTOSIZE )
WINDOW_AUTOSIZE is an input, but as far as I can tell, it is not an int. When I code this line up and run it, it works fine. My input into this function is literally 'WINDOW_AUTOSIZE'. I am confused as to why this works. How is WINDOW_AUTOSIZE an int?
My second question is regarding the last line, whereby they say:
By default, flags == CV_WINDOW_AUTOSIZE | CV_WINDOW_KEEPRATIO |
CV_GUI_EXPANDED
I am confused as to what this means exactly... I know that | is a bitwise OR, but I'm not clear what it means here...
Thank you.
The words written in capital letters are constants. They have been defined somewhere in the code or in the headers so they can be used in other places. A constant can stand for a number, a string, etc. The constants in this code are evidently of type int.
CV_WINDOW_AUTOSIZE | CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED is just a bitwise OR of the int values the constants stand for. These are special constants where only one bit of the int is set (so-called flags).
Assume CV_WINDOW_AUTOSIZE is 0x1 and CV_WINDOW_KEEPRATIO is 0x2. Bitwise OR-ing them results in 0x3. The called function can then check with an AND operation which flags were set.
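A minimal sketch using those assumed values (0x1 and 0x2 are illustrative, not necessarily the real OpenCV values):
#include <cstdio>

const int CV_WINDOW_AUTOSIZE  = 0x1;   // assumed value, as in the text above
const int CV_WINDOW_KEEPRATIO = 0x2;   // assumed value

int main()
{
    int flags = CV_WINDOW_AUTOSIZE | CV_WINDOW_KEEPRATIO;   // 0x1 | 0x2 == 0x3

    if (flags & CV_WINDOW_KEEPRATIO)
        std::printf("KEEPRATIO flag is set\n");   // this line is printed

    return 0;
}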
My input into this function literally is 'WINDOW_AUTOSIZE'
Yep, WINDOW_AUTOSIZE is in fact an integer; simply look at the fact that it's the default argument for an int function parameter. It wouldn't compile if it weren't an int.
// it might have been defined like this
#define WINDOW_AUTOSIZE 23434 // some number just for example
// or like this
const int WINDOW_AUTOSIZE = 34234;
As for the second question, bitwise OR-ing means that all bits in the corresponding integral values are OR-ed together, so let's say, for example:
CV_WINDOW_AUTOSIZE = 0x0010
CV_WINDOW_KEEPRATIO = 0x0100
CV_GUI_EXPANDED = 0x1100
then the corresponding operation gives an integral value in which every bit is the OR of the bits at that position:
CV_WINDOW_AUTOSIZE | CV_WINDOW_KEEPRATIO | CV_GUI_EXPANDED =
0x0010
0x0100
0x1100
------
0x1110
On the use of bitflags
Consider the following: you have a keyboard with 4 keys:
Ctrl, Alt, Del, Shift
How many constants would you need to describe all the states this keyboard can be in? Well, let's enumerate the states:
All 4 keys pressed : 1 constant
3 keys pressed : It takes (4 choose 3) constants = 4 constants :
(4 choose 3) = 4! / ((4-3)! * 3!) = 4
2 keys pressed : (4 choose 2) = 6 constants
1 key pressed : 4 constants (the names of the keys)
No key pressed : 1 constant
So to sum up you'd define :
1 + 4 + 6 + 4 + 1 = 16 constants
Now what if I told you that you only need 4 different constants, each one having only one bit ON?
#define CtrlK 0x0001
#define AltK 0x0010
#define DelK 0x0100
#define ShiftK 0x1000
Then any state of the keyboard can be expressed by a combination of the above. Say you want to express the state where the Shift key and Del key are pressed. Then it would be
ShiftK | DelK
The more combinations you have, the more this technique pays off.
Of course (see any reference on bit flags for details) user code can probe an integral value to see which bits are switched ON.
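For example, probing a keyboard state built from the four constants above (reusing this answer's defines):
#include <stdio.h>

#define CtrlK  0x0001
#define AltK   0x0010
#define DelK   0x0100
#define ShiftK 0x1000

int main(void)
{
    unsigned state = ShiftK | DelK;         /* Shift and Del are pressed */

    if (state & DelK)
        printf("Del is down\n");            /* printed */
    if (!(state & AltK))
        printf("Alt is not pressed\n");     /* printed */
    return 0;
}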
I believe WINDOW_AUTOSIZE is not a string or text. It will be a constant or a #defined preprocessor constant, so the int datatype can accept it. Please check the definition of WINDOW_AUTOSIZE in the source code.
Also note that we can pass variables of 'char' or 'enum' type to a function which accepts int. The conversion to int happens implicitly.
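A short sketch of that implicit conversion (the enum and function names below are made up for illustration):
#include <cstdio>

enum WindowFlag { AUTOSIZE_FLAG = 1 };   // hypothetical enum, not the real OpenCV constant

void takesInt(int flags)
{
    std::printf("got %d\n", flags);
}

int main()
{
    char c = 2;
    takesInt(c);               // char is promoted to int implicitly
    takesInt(AUTOSIZE_FLAG);   // an unscoped enum converts to int implicitly
    return 0;
}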
I'm following the guide given at micromouseonline.com/2010/07/14/bit-banding-in-the-stm32. I'm using IAR EWARM and a Cortex M3. Everything works fine but I'm not able to set the bits at a given address. I'm using an STM32L151xD with the IAR EWARM compiler.
This is how they define the functions
#define RAM_BASE 0x20000000
#define RAM_BB_BASE 0x22000000
#define Var_ResetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)) = 0)
#define Var_SetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)) = 1)
#define Var_GetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)))
#define varSetBit(var,bit) (Var_SetBit_BB((u32)&var,bit))
#define varGetBit(var,bit) (Var_GetBit_BB((u32)&var,bit))
The call is:
uint32_t flags;
varSetBit(flags,1);
However, bit 1 in flags is always 0 when I look with the debugger. flags is assumed to be 0 at first, so all the bits in flags will be 0. However, when I use varSetBit(flags,1), bit 1 is still 0. I don't think I'm doing anything wrong. Is it a compiler problem? Am I missing some settings? Any help will be appreciated.
I suspect that you misunderstand the purpose of the bit-banding feature.
With bit-banding, the application has (read/write) access to the micro-controller's registers bit by bit. This allows modifying a bit with a single store instruction instead of a read/modify/write sequence. For this to work, STM32 devices (or, more generally, Cortex M3 devices) have a specific address space where each bit of each register is mapped to a specific address.
Let's take an example: suppose you need to set bit 3 of a register FOO.
Without bit-banding, you would have to write the following code:
FOO = FOO | (1 << 3);
At the assembly level, this results in a load of the register FOO, a bitwise OR operation, and a store back into FOO.
With bit-banding, you write:
varSetBit(FOO, 3);
which results in a single store to the bit-band alias address produced by the expansion of the varSetBit macro.
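As a worked example of the address arithmetic in that expansion (0x20000004 is a purely hypothetical word address inside the SRAM bit-band region covered by the macros above):
Var_SetBit_BB(0x20000004, 3)
  ==> *(vu32 *)(0x22000000 | ((0x20000004 - 0x20000000) << 5) | (3 << 2)) = 1
  ==> *(vu32 *)(0x22000000 | 0x80 | 0x0C) = 1
  ==> *(vu32 *)0x2200008C = 1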
That said, bit-banding only applies to the micro-controller's registers. You can't use it to manipulate bits of your own variables as you do with your flags variable.
For more information, read the ARM application note.
I have this preprocessor directive:
#define INDEXES_PER_SECTOR BYTES_PER_SECTOR / 4
where BYTES_PER_SECTOR is declared in another header file as:
#define BYTES_PER_SECTOR 64
I have this simple math expression that I wrote, and after executing it I get an assertion error because the value assigned to iTotalSingleIndexes is incorrect.
int iTotalSingleIndexes = (iDataBlocks - 29) / INDEXES_PER_SECTOR;
Now I believe this to be because of the preprocessor directive INDEXES_PER_SECTOR. Upon executing my equation, iDataBlocks is 285, which is correct; I have confirmed this with gdb. The problem is that the value that gets assigned to iTotalSingleIndexes is 1 when it ought to be 16. I really have no idea why this is happening.
When I do something like:
int iIndexesInASector = INDEXES_PER_SECTOR;
int iTotalSingleIndexes = (iDataBlocks - 29) / iIndexesInASector;
the correct value gets assigned to iTotalSingleIndexes.
On other notes I use preprocessor directives in other equations and they work just fine so I am even more puzzled.
Any help would be much appreciated.
The preprocessor simply performs token replacement - it doesn't evaluate expressions. So your line:
int iTotalSingleIndexes = (iDataBlocks - 29) / INDEXES_PER_SECTOR;
expands to this sequence of tokens:
int iTotalSingleIndexes = ( iDataBlocks - 29 ) / 64 / 4 ;
...which, due to the left-to-right associativity of the / operator, is then parsed by the compiler as:
int iTotalSingleIndexes = ((iDataBlocks - 29) / 64) / 4;
...which results in the value of 1. As leppie says, you want:
#define INDEXES_PER_SECTOR (BYTES_PER_SECTOR / 4)
This makes INDEXES_PER_SECTOR expand to a complete subexpression.
#define INDEXES_PER_SECTOR (BYTES_PER_SECTOR / 4)
Both of the answers given so far are correct, so accept one of them, but I thought I should expand on what they are saying.
Number 1 rule of preprocessor macros:
If a macro expands to an expression, always enclose the expansion in parentheses.
Number 2 rule of preprocessor macros:
Always enclose macro arguments in parentheses where they are used in the expansion.
For example, consider the macro below
#define X_PLUS_4(X) X + 4
foo = 1;
y = 3 * X_PLUS_4(foo + 2) * 4; // naively expect y to be 84
the second line expands to
y = 3 * foo + 2 + 4 * 4; // y is 21
which is probably not what you want
Applying the rules
#define X_PLUS_4(X) ((X) + 4)
The use above then becomes
y = 3 * ((foo + 2) + 4) * 4; // y is 84, as expected
If you want to perform arithmetic at preprocessing time, you can use the Boost.Preprocessor library. However, you should REALLY be using const data for this.
const int BytesPerSector = 64;
const int IndexesPerSector = BytesPerSector / 4;
The preprocessor should be reserved for when you have absolutely no other choice. Performing arithmetic at compile-time is easily done with const ints.
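For illustration, plugging in the value reported in the question (iDataBlocks == 285) with the constants above:
#include <cassert>

const int BytesPerSector = 64;
const int IndexesPerSector = BytesPerSector / 4;   // 16, evaluated as a real expression

int main()
{
    int iDataBlocks = 285;                         // value reported in the question
    int iTotalSingleIndexes = (iDataBlocks - 29) / IndexesPerSector;
    assert(iTotalSingleIndexes == 16);             // 256 / 16, no hidden /64/4 split
    return 0;
}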