What is the significance of "mask" flags present alongside bitwise flags? - bit-manipulation

I see flags like this all over C/C++ libraries and code.
#define EDIT_MASK 0x0000003f // usually twice the last flag, minus one (i.e. all of the flag bits set)
#define EDIT_PO_1 0x00000001
#define EDIT_PO_2 0x00000002
#define EDIT_PO_3 0x00000004
#define EDIT_PO_4 0x00000008
#define EDIT_PO_5 0x00000010
#define EDIT_PO_6 0x00000020
I know the EDIT_PO_1 ... EDIT_PO_6 flags are used as bitwise flags (see "Use of bitwise flags" - just for reference). But I don't understand the significance of the "mask" flag, i.e. EDIT_MASK.
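For reference, here is a minimal sketch of how such a mask constant is typically used; the EDIT_* names come from the snippet above, but the helper function and the values are made up for illustration. The mask lets you treat the six flag bits as one field: you can isolate them, clear them, or replace them wholesale without disturbing unrelated bits in the same word.
#include <stdio.h>
#define EDIT_MASK 0x0000003f
#define EDIT_PO_1 0x00000001
#define EDIT_PO_3 0x00000004
/* Hypothetical helper: replace only the EDIT-related bits of 'flags',
   leaving unrelated bits (0x40, 0x80, ...) untouched. */
static unsigned set_edit_bits(unsigned flags, unsigned edit_bits)
{
    flags &= ~EDIT_MASK;               /* clear the whole EDIT bit-field */
    flags |= (edit_bits & EDIT_MASK);  /* install the new EDIT bits only */
    return flags;
}
int main(void)
{
    unsigned flags = 0xf00 | EDIT_PO_1;            /* some unrelated high bits set */
    printf("edit bits: %#x\n", flags & EDIT_MASK); /* isolate the EDIT bit-field -> 0x1 */
    flags = set_edit_bits(flags, EDIT_PO_3);
    printf("after: %#x\n", flags);                 /* 0xf04: high bits preserved */
    return 0;
}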

cortex m3, stm32L1XX bit-banding

I'm following the guide given at micromouseonline.com/2010/07/14/bit-banding-in-the-stm32. I'm using an STM32L151xD (Cortex-M3) with the IAR EWARM compiler. Everything works fine except that I'm not able to set bits at a given address.
This is how they define the functions
#define RAM_BASE 0x20000000
#define RAM_BB_BASE 0x22000000
#define Var_ResetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)) = 0)
#define Var_SetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)) = 1)
#define Var_GetBit_BB(VarAddr, BitNumber) (*(vu32 *) (RAM_BB_BASE | ((VarAddr - RAM_BASE) << 5) | ((BitNumber) << 2)))
#define varSetBit(var,bit) (Var_SetBit_BB((u32)&var,bit))
#define varGetBit(var,bit) (Var_GetBit_BB((u32)&var,bit))
the call is :
uint32_t flags;
varSetBit(flags,1);
However, bit 1 of flags is always 0 when I look at it in the debugger. flags is assumed to be 0 at first, so all of its bits are 0. But after varSetBit(flags,1), bit 1 is still 0. I don't think I'm doing anything wrong. Is it a compiler problem? Am I missing some settings? Any help will be appreciated.
I suspect that you misunderstand the purpose of the bit-banding feature.
With bit-banding, the application has read/write access to the micro-controller's registers bit by bit. This allows a single bit to be modified with a single store instruction instead of a read/modify/write sequence. For this to work, STM32 devices (or more generally Cortex-M3 devices) have a specific address space (the bit-band alias region) where each bit of each register is mapped to its own word address.
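As a rough sketch, the macros in the question implement the standard Cortex-M3 bit-band address calculation; the helper below just spells that formula out (the function name is mine, not from the guide):
#include <stdint.h>
/* alias = alias_base + (byte offset from region base) * 32 + bit_number * 4
   Writing 0 or 1 to *alias clears or sets that single bit of the original word. */
static volatile uint32_t *bit_band_alias(uint32_t region_base, uint32_t alias_base,
                                         uint32_t addr, uint32_t bit)
{
    return (volatile uint32_t *)(alias_base + ((addr - region_base) * 32u) + (bit * 4u));
}
/* Example: SRAM at 0x20000000 is aliased at 0x22000000, so bit 1 of the word at
   0x20000100 lives at 0x22000000 + 0x100*32 + 1*4 = 0x22002004. */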
Let's take an example: suppose you need to set bit 3 of the register FOO.
Without bit-banding, you would have to write the following code:
FOO = FOO | 0b1000; /* set bit 3 */
At the instruction level this results in a load of FOO, a bitwise OR, and a store back to FOO.
With bit-banding, you write:
varSetBit(FOO, 3);
which results in a single store to the alias address computed from the varSetBit macro.
That said, bit-banding only applies to the micro-controller's registers; you can't use it to manipulate bits of your own variables the way you do with your flags variable.
For more information read the ARM application note
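If all you need is to set a bit in an ordinary variable such as flags, a plain read/modify/write works everywhere and needs no bit-band support (a minimal sketch, not from the original answer):
#include <stdint.h>
int main(void)
{
    uint32_t flags = 0;
    flags |= (1u << 1);               /* set bit 1 */
    uint32_t bit = (flags >> 1) & 1u; /* read bit 1 -> 1 */
    flags &= ~(1u << 1);              /* clear bit 1 */
    (void)bit;
    return 0;
}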

C++ bitfield testing

Is there a more compact way of comparing my bits than this (the only way I know):
#define BIT1 1
#define BIT2 2
#define BIT3 4
#define BIT4 8
#define BIT5 16
#define BIT6 32
// I declare this somewhere in a structure
unsigned char bits: 6;
// I want all of them to be 0 at first (000000)
bits = 0;
/* I do some bit setting here */
// I only want to know if the state of my bits == 000000
if(bits & (BIT1 | BIT2 | BIT3 | BIT4 | BIT5 | BIT6) == (BIT1 | BIT2 | BIT3 | BIT4 | BIT5 | BIT6))
{
// All kinds of nasty stuff
}
I thought maybe something along the lines of bits & 0x00 == 0x00.
If you want compact (as indicated in your comment) rather than fast, why not do something like:
#define BIT1 0x01
#define BIT2 0x02
#define BIT3 0x04
#define BIT4 0x08
#define BIT5 0x10
#define BIT6 0x20
#define BITS1THRU4 (BIT1|BIT2|BIT3|BIT4)
// or #define BITS1THRU4 0x0f
// I declare this somewhere in a structure
unsigned char bits: 6;
// I want all of them to be 0 at first (000000)
bits = 0;
/* I do some bit setting here */
// I only want to know if the state of my first four bits == 0000
if((bits & BITS1THRU4) == 0) ...
It probably won't be any faster since your original code would have been turned into that constant anyway but it may be more readable (which is often a good reason to do it).
If you have a need for other variations, just define them. If there are too many of them (63 defines, if you use them all, is getting a bit on the high side), I'd start thinking about another solution.
But, to be honest, unless you're going to use more meaningful names for the defines, I'd just ditch them. The name BIT3 really adds nothing to 0x04 to those that understand bit patterns. If it was something like UART_READ_READY_BIT, that would be fine but what you have is only slightly better than:
#define THREE 3
(no offence intended, I'm just pointing out my views). I'd just work out the bit patterns and put them straight in the code (bits 1 thru 6 in your case being 0x3f).
And, just as an aside, for your particular case, I think bits will only be those six bits anyway, so you may find it's enough to compare it to 0 (with no bit masking). I've left in the bit-masking method in case you want a more general solution for checking specific bits.
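A minimal sketch of both checks mentioned above: comparing the whole 6-bit field directly to 0, and the masked form for testing a subset of bits (the struct wrapper is made up for illustration):
#include <stdio.h>
#define BIT1 0x01
#define BIT2 0x02
#define BIT3 0x04
#define BIT4 0x08
struct S { unsigned char bits : 6; }; /* hypothetical holder of the bit-field */
int main(void)
{
    struct S s = { 0 };
    if (s.bits == 0)                                  /* whole 6-bit field is zero */
        printf("all clear\n");
    s.bits |= BIT3;
    if ((s.bits & (BIT1 | BIT2 | BIT3 | BIT4)) == 0)  /* note the parentheses around & */
        printf("low four bits clear\n");
    else
        printf("some of bits 1-4 set\n");
    return 0;
}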
if( (~bits & (BIT1 | BIT2 | BIT3 | BIT4 | BIT5 | BIT6)) == (BIT1 | BIT2 | BIT3 | BIT4 | BIT5 | BIT6))
And about speed: (BIT1 | BIT2 | BIT3 | BIT4 | BIT5 | BIT6) is actually compiled as a constant, and there are only a few operations (about two) for this if - a NOT-AND and a compare (I am not sure whether x86 supports that as a single instruction, but I think it does).
If I read your condition right (with corrected parentheses, as per #user547710), you check whether all your bits are set, rather than zero.
Anyway, you can define a mask with bits 1-6 set more compactly as (1u << 6) - 1. This is a compile-time constant expression, so you need not worry about extra computing time. I'd do:
const unsigned char bitmask6 = (1u << 6) - 1;
if ((bits & bitmask6) == bitmask6)
Though this is just a note: operator == has higher precedence than &, so the bitwise AND should be parenthesized as:
if((a & b) == c)
I take that to be the questioner's intention. (I think this should be posted as just a comment, but it seems that I can't post a comment.)

Can I add numbers with the C/C++ preprocessor?

For some base. Base 1, even. Some sort of complex substitution-ing.
Also, and of course, doing this is not a good idea in real life production code. I just asked out of curiosity.
You can relatively easily write a macro which adds two integers in binary. For example, here is a macro which sums two 4-bit integers in binary:
#include "stdio.h"
// XOR truth table
#define XOR_0_0 0
#define XOR_0_1 1
#define XOR_1_0 1
#define XOR_1_1 0
// OR truth table
#define OR_0_0 0
#define OR_0_1 1
#define OR_1_0 1
#define OR_1_1 1
// AND truth table
#define AND_0_0 0
#define AND_0_1 0
#define AND_1_0 0
#define AND_1_1 1
// concatenation macros
#define XOR_X(x,y) XOR_##x##_##y
#define OR_X(x,y) OR_##x##_##y
#define AND_X(x,y) AND_##x##_##y
#define OVERFLOW_X(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW_##rc1 (rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
// stringification macros
#define STR_X(x) #x
#define STR(x) STR_X(x)
// boolean operators
#define XOR(x,y) XOR_X(x,y)
#define OR(x,y) OR_X(x,y)
#define AND(x,y) AND_X(x,y)
// carry_bit + bit1 + bit2
#define BIT_SUM(carry,bit1,bit2) XOR(carry, XOR(bit1,bit2))
// carry_bit + carry_bit_of(bit1 + bit2)
#define CARRY_SUM(carry,bit1,bit2) OR(carry, AND(bit1,bit2))
// do we have overflow or maybe result perfectly fits into 4 bits ?
#define OVERFLOW_0(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) SHOW_RESULT(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define OVERFLOW_1(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) SHOW_OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
// workhorse macros which perform addition of two 4-bit integers
#define ADD_BIN_NUM(a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_4(0,0,0,0, 0,0,0,0, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_4(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_3(rc1,rc2,rc3,AND(CARRY_SUM(0,a4,b4),OR(a4,b4)), rb1,rb2,rb3,BIT_SUM(0,a4,b4), a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_3(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_2(rc1,rc2,AND(CARRY_SUM(rc4,a3,b3),OR(a3,b3)),rc4, rb1,rb2,BIT_SUM(rc4,a3,b3),rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_2(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_1(rc1,AND(CARRY_SUM(rc3,a2,b2),OR(a2,b2)),rc3,rc4, rb1,BIT_SUM(rc3,a2,b2),rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_1(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW(AND(CARRY_SUM(rc2,a1,b1),OR(a1,b1)),rc2,rc3,rc4, BIT_SUM(rc2,a1,b1),rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW_X(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define SHOW_RESULT(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) STR(a1) STR(a2) STR(a3) STR(a4) " + " STR(b1) STR(b2) STR(b3) STR(b4) " = " STR(rb1) STR(rb2) STR(rb3) STR(rb4)
#define SHOW_OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) STR(a1) STR(a2) STR(a3) STR(a4) " + " STR(b1) STR(b2) STR(b3) STR(b4) " = overflow"
int main(void)
{
    printf("%s\n",
        ADD_BIN_NUM(
            0,0,0,1, // first 4-bit int
            1,0,1,1) // second 4-bit int
    );
    printf("%s\n",
        ADD_BIN_NUM(
            0,1,0,0, // first 4-bit int
            0,1,0,1) // second 4-bit int
    );
    printf("%s\n",
        ADD_BIN_NUM(
            1,0,1,1, // first 4-bit int
            0,1,1,0) // second 4-bit int
    );
    return 0;
}
This macro can easily be extended to add two 8-bit, 16-bit or even 32-bit ints.
So basically all we need is token concatenation and substitution rules to achieve amazing results with macros.
EDIT:
I have changed the formatting of the results and, more importantly, I've added an overflow check.
HTH!
The preprocessor operates on preprocessing tokens and the only time that it evaluates numbers is during the evaluation of a #if or #elif directive. Other than that, numbers aren't really numbers during preprocessing; they are classified as preprocessing number tokens, which aren't actually numbers.
You could evaluate basic arithmetic using token concatenation:
#define ADD_0_0 0
#define ADD_0_1 1
#define ADD_1_0 1
#define ADD_1_1 2
#define ADD(x, y) ADD##_##x##_##y
ADD(1, 0) // expands to 1
ADD(1, 1) // expands to 2
Really, though, there's no reason to do this, and it would be silly to do so (you'd have to define a huge number of macros for it to be even remotely useful).
It would be more sensible to have a macro that expands to an integral constant expression that can be evaluated by the compiler:
#define ADD(x, y) ((x) + (y))
ADD(1, 1) // expands to ((1) + (1))
The compiler will be able to evaluate the 1 + 1 expression.
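Because ADD(x, y) expands to an integral constant expression, the result can be used anywhere the language requires a compile-time constant; a small illustrative sketch (the names are made up):
#define ADD(x, y) ((x) + (y))
char buf[ADD(2, 3)];        /* array size: the compiler folds ((2) + (3)) to 5 */
enum { TOTAL = ADD(2, 3) }; /* enumerator value, also folded by the compiler */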
It is quite possible to do bounded integer addition in the preprocessor, and it is actually needed more often than one would hope: sometimes just writing ((2) + (3)) in the program doesn't work, e.g. you can't build a variable name such as x5 by pasting x and ((2)+(3)) together. The idea is simple: turn the addition into increments, which you don't mind (too much) listing out exhaustively. E.g.,
#define INC(x) INC_ ## x
#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 10
INC(7) // => 8
Now we know how to add anything up to 1 to a number.
#define ADD(x, y) ADD_ ## x(y)
#define ADD_0(x) x
#define ADD_1(x) INC(x)
ADD(0, 2) // => 2
ADD(1, 2) // => 3
To add to even larger numbers, you need some sort of "recursion".
#define ADD_2(x) ADD_1(INC(x))
#define ADD_3(x) ADD_2(INC(x))
#define ADD_4(x) ADD_3(INC(x))
#define ADD_5(x) ADD_4(INC(x))
#define ADD_6(x) ADD_5(INC(x))
#define ADD_7(x) ADD_6(INC(x))
#define ADD_8(x) ADD_7(INC(x))
#define ADD_9(x) ADD_8(INC(x))
#define ADD_10(x) ADD_9(INC(x))
ADD(5, 2) // => 7
One has to be careful with this, however; e.g., the following does not work:
#define ADD_2(x) INC(ADD_1(x))
ADD(2, 2) // => INC_ADD_1(2)
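The reason is that a macro argument that appears next to ## is not macro-expanded before pasting, so INC_ gets glued to the literal token ADD_1. A common workaround (a sketch, not part of the original answer) is to add one level of indirection so the argument is fully expanded before the paste happens:
#define INC(x) INC_EXPAND(x)    /* x is expanded here: no ## touches it */
#define INC_EXPAND(x) INC_ ## x /* the actual pasting happens one level down */
#define INC_2 3
#define INC_3 4                 /* only the table entries needed for this example */
#define ADD(x, y) ADD_ ## x(y)
#define ADD_1(x) INC(x)
#define ADD_2(x) INC(ADD_1(x))
ADD(2, 2) // => 4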
For any extended use of such tricks, Boost Preprocessor is your friend.
I know it's not the preprocessor, but if it helps, you can do it with templates. Perhaps you could use this in conjunction with a macro to achieve what you need.
#include <iostream>
using namespace std;
template <int N, int M>
struct Add
{
static const int Value = N + M;
};
int main()
{
cout << Add<4, 5>::Value << endl;
return 0;
}
Apparently, you can. If you take a look at the Boost Preprocessor library, you can do all sorts of stuff with the preprocessor, even integer addition.
The C preprocessor can evaluate conditionals containing integer arithmetic. It will not substitute arithmetic expressions and pass the result to the compiler, but the compiler will evaluate arithmetic on compile-time constants and emit the result into the binary, as long as you haven't overloaded the operators being used.
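For example, the following is evaluated entirely by the preprocessor when it decides which branch to keep (the names are made up for illustration):
#define WIDTH 8
#define HEIGHT 4
#if (WIDTH * HEIGHT) > 16   /* arithmetic evaluated by the preprocessor */
#define BUFFER_SIZE 64
#else
#define BUFFER_SIZE 16
#endif
/* BUFFER_SIZE ends up as 64, but the tokens "WIDTH * HEIGHT" are never
   replaced by "32" anywhere in the program text; only #if/#elif do arithmetic. */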
Preprocessor macros can't really do arithmetic, but they can be usefully leveraged to do math with enumerations. The general trick is to have a macro which invokes other macros, and can be repeatedly invoked using different definitions of those other macros.
For example, something like:
#define MY_THINGS \
a_thing(FRED,4) \
a_thing(GEORGE,6) \
a_thing(HARRY,5) \
a_thing(HERMIONE,8) \
a_thing(RON,3) \
// This line left blank
#define a_thing(name,size) EN_##name}; enum {EN_SIZE_##name=(size),EN_BLAH_##name = EN_##name+(size-1),
enum {EN_FIRST_THING=0, MY_THINGS EN_TOTAL_SIZE};
#undef a_thing
That will allow one to 'allocate' a certain amount of space for each thing in e.g. an array. The math isn't done by the preprocessor, but the enumerations are still regarded as compile-time constants.
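To make the trick concrete, this is roughly what the enum above expands to (reformatted, with the values the compiler assigns shown as comments; the expansion is my own illustration, not part of the original answer):
enum {EN_FIRST_THING=0, EN_FRED};                                          /* EN_FRED = 1 */
enum {EN_SIZE_FRED=(4), EN_BLAH_FRED = EN_FRED+(4-1), EN_GEORGE};          /* EN_GEORGE = 5 */
enum {EN_SIZE_GEORGE=(6), EN_BLAH_GEORGE = EN_GEORGE+(6-1), EN_HARRY};     /* EN_HARRY = 11 */
enum {EN_SIZE_HARRY=(5), EN_BLAH_HARRY = EN_HARRY+(5-1), EN_HERMIONE};     /* EN_HERMIONE = 16 */
enum {EN_SIZE_HERMIONE=(8), EN_BLAH_HERMIONE = EN_HERMIONE+(8-1), EN_RON}; /* EN_RON = 24 */
enum {EN_SIZE_RON=(3), EN_BLAH_RON = EN_RON+(3-1), EN_TOTAL_SIZE};         /* EN_TOTAL_SIZE = 27 */
/* Each EN_<name> is the starting offset of that thing's block, and EN_TOTAL_SIZE
   is the total space needed, e.g. for "char storage[EN_TOTAL_SIZE];". */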
I'm pretty sure the C/C++ preprocessor just does copy and paste -- it doesn't actually evaluate any expressions. Expression evaluation is done by the compiler.
To better answer your question, you might want to post what you're trying to accomplish.

How masking works

I am new to C and I am debugging some source code. However, I am confused by this code snippet.
When the values are assigned to the structure member, I think some masking is going on, but I'm not sure. If it is masking, how does masking work here?
Many thanks,
#define MSGINFO_ENABLE 0x01
#define MIME_ENABLE 0x02
#define FASTSTART_CODERS_IN_OFFERED 0x04
#define TRANSADDR_ENABLE 0x08
typedef struct {
unsigned int msginfo_mask; /* added in version 0x0101 */
} VIRTBOARD;
VIRTBOARD VirtBoard;
/* Not sure I understand what is happening here. */
VirtBoard.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE | FASTSTART_CODERS_IN_OFFERED | TRANSADDR_ENABLE;
OK, in plain English:
The hexadecimal numbers 0x01, 0x02, 0x04, 0x08 were each selected BECAUSE each one has a different single bit set in binary. None of the bit patterns overlap, so each one can be read and set without being affected by the other bits. Adding the following comments to your code makes it clearer what's happening:
#define MSGINFO_ENABLE 0x01 // => 0001
#define MIME_ENABLE 0x02 // => 0010
#define FASTSTART_CODERS_IN_OFFERED 0x04 // => 0100
#define TRANSADDR_ENABLE 0x08 // => 1000
Now adding a comment before the other line shows the result:
// VirtBoard.msginfo_mask |= 0001
// VirtBoard.msginfo_mask |= 0010
// VirtBoard.msginfo_mask |= 0100
// VirtBoard.msginfo_mask |= 1000
// ----
// VirtBoard.msginfo_mask == 1111
VirtBoard.msginfo_mask = MSGINFO_ENABLE |
MIME_ENABLE |
FASTSTART_CODERS_IN_OFFERED |
TRANSADDR_ENABLE;
While the comments on the assignment make it clear what's going on, once you understand what's happening, the comments kinda defeat the purpose of symbolically defining constants.
It might help to think of it this way (values shown in binary):
MSGINFO_ENABLE = 0001
MIME_ENABLE = 0010
FASTSTART_CODERS_IN_OFFERED = 0100
TRANSADDR_ENABLE = 1000
So...
1001 is TRANSADDR_ENABLE and MSGINFO_ENABLE
or
1011 is everything but FASTSTART_CODERS_IN_OFFERED
Does that help at all? The | notation is C syntax to set the correct bit:
int something = 0;
something = MSGINFO_ENABLE | TRANSADDR_ENABLE;
is the syntax to set only those 2 bits.
Your variable, msginfo_mask, when represented as a binary number (1's and 0's) is used as a "mask" by setting certain bits to 1 (using bit-wise OR) or clearing certain bits to 0 (using bit-wise AND). Your code snippet sets certain bits to 1 while leaving others unchanged. Masking is comparable to how a painter masks off areas that they do not want to be painted.
If you look at the #defines at the top of your code, you will notice that each number represents a single bit when written out in binary:
#define MSGINFO_ENABLE 0x01 <-- 0001 in binary
#define MIME_ENABLE 0x02 <-- 0010 in binary
#define FASTSTART_CODERS_IN_OFFERED 0x04 <-- 0100 in binary
#define TRANSADDR_ENABLE 0x08 <-- 1000 in binary
Setting bits is done by using the OR function. If you OR a bit with 1, the result is always going to be a 1. If you OR a bit with 0, the original value will not be changed.
So, when you see:
msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE |
FASTSTART_CODERS_IN_OFFERED | TRANSADDR_ENABLE;
What you are saying is "set msginfo_mask to (binary) 0001 OR 0010 OR 0100 OR 1000". This is the same thing as saying "set bit 0, bit 1, bit 2, and bit 3."
The binary operator '|' is the bitwise-or operator; for each bit in the two input words, if either bit is a 1, then the corresponding bit in the result is a 1:
0001 | 0010 = 0011
The '|' operator is typically used to set individual bits in a word, such as in the code snippet you posted.
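For example, to turn on one more bit later without disturbing the bits that are already set (a small sketch reusing the names from the question):
VIRTBOARD vb;
vb.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE; /* start with bits 0 and 1: 0011 */
vb.msginfo_mask |= TRANSADDR_ENABLE;            /* OR in bit 3: mask is now 1011 */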
The binary operator '&' is the bitwise-and operator; for each bit in the two input words, if both bits are 1, then the corresponding bit in the result is a 1:
0101 & 0110 = 0100
The '&' operator can be used to test if a bit is set. For example, to test if the MSGINFO_ENABLE bit is set, you'd do something like
if ((VirtBoard.msginfo_mask & MSGINFO_ENABLE) != 0)
{
/* MSGINFO_ENABLE bit is set, do something interesting */
}
The expression
VirtBoard.msginfo_mask & MSGINFO_ENABLE
will evaluate to 1 (0x0001) if the MSGINFO_ENABLE bit was set, and 0 otherwise.
The unary operator '~' is the bitwise-not operator; for each bit in the input word, the corresponding bit in the result is set to the opposite value:
~ 0001 = 1110
You can use the '~' operator together with the '&' operator to clear an individual bit. For example, if we wanted to clear the MSGINFO_ENABLE bit, we'd do something like
VirtBoard.msginfo_mask = VirtBoard.msginfo_mask & ~MSGINFO_ENABLE;
which can be shortened to
VirtBoard.msginfo_mask &= ~MSGINFO_ENABLE;
Negating MSGINFO_ENABLE gives us 1111111111111110 (assuming a 16-bit unsigned int); since the leading bits are all 1, AND-ing this against VirtBoard.msginfo_mask preserves every other bit that is already set; i.e., 0000000000001111 & 1111111111111110 = 0000000000001110.
If we wanted to clear both the MSGINFO_ENABLE and TRANSADDR_ENABLE bits, we'd combine all the operators like so:
VirtBoard.msginfo_mask &= ~(MSGINFO_ENABLE | TRANSADDR_ENABLE);
The programmer is setting the mask to a certain bit value. In this case:
VirtBoard.msginfo_mask = 0x01 | 0x02 | 0x04 | 0x08 = 0x0F
Assuming the code handles messages, when a message comes in it may be compared against this mask to see what is enabled for the message:
if((newMsg & VirtBoard.msginfo_mask) == 0x0F)
{
//do something related to msginfo enable, mime enable, faststart and transaddr enable
}
Notice the "&" operator to do the mask comparisons.
The other point is that OR-ing the masks together is probably being used as a set of switches to enable/disable certain functionality. In the example you have posted, it looks like it controls output at different levels or in different parts of the code.
The defined masks can be used to check whether a piece of functionality is enabled or disabled. For example:
VirtBoard.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE ;
if ( VirtBoard.msginfo_mask & MSGINFO_ENABLE )
{
printf("Messages enabled\n";
}
if ( VirtBoard.msginfo_mask & TRANSADDR_ENABLE)
{
printf("Transaddress enabled\n");
}
In the first if, since the MSGINFO_ENABLE mask was OR-ed in and assigned to the variable, AND-ing the variable with the MSGINFO_ENABLE mask returns a non-zero value, indicating true, so the printf statement is executed.
In the case of the second if, since TRANSADDR_ENABLE was not OR-ed into the variable, AND-ing the variable with the TRANSADDR_ENABLE mask returns zero, so no message is printed.
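Putting it all together, here is a minimal self-contained program (my own consolidation of the snippets above, with made-up output strings) that sets, tests, and clears flags in the mask:
#include <stdio.h>
#define MSGINFO_ENABLE 0x01
#define MIME_ENABLE 0x02
#define FASTSTART_CODERS_IN_OFFERED 0x04
#define TRANSADDR_ENABLE 0x08
typedef struct {
    unsigned int msginfo_mask;
} VIRTBOARD;
int main(void)
{
    VIRTBOARD VirtBoard;
    /* set three of the four flags */
    VirtBoard.msginfo_mask = MSGINFO_ENABLE | MIME_ENABLE | FASTSTART_CODERS_IN_OFFERED;
    if ((VirtBoard.msginfo_mask & MSGINFO_ENABLE) != 0)
        printf("msginfo enabled\n");      /* printed */
    if ((VirtBoard.msginfo_mask & TRANSADDR_ENABLE) != 0)
        printf("transaddr enabled\n");    /* not printed */
    /* clear one flag and re-test */
    VirtBoard.msginfo_mask &= ~MIME_ENABLE;
    if ((VirtBoard.msginfo_mask & MIME_ENABLE) == 0)
        printf("mime now disabled\n");    /* printed */
    return 0;
}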