Can I add numbers with the C/C++ preprocessor?

In some base. Base 1, even. Via some sort of complex substitution trickery.
Also, and of course, doing this is not a good idea in real-life production code. I just ask out of curiosity.

You can fairly easily write a macro which adds two integers in binary. For example, here is a macro which sums two 4-bit integers in binary:
#include "stdio.h"
// XOR truth table
#define XOR_0_0 0
#define XOR_0_1 1
#define XOR_1_0 1
#define XOR_1_1 0
// OR truth table
#define OR_0_0 0
#define OR_0_1 1
#define OR_1_0 1
#define OR_1_1 1
// AND truth table
#define AND_0_0 0
#define AND_0_1 0
#define AND_1_0 0
#define AND_1_1 1
// concatenation macros
#define XOR_X(x,y) XOR_##x##_##y
#define OR_X(x,y) OR_##x##_##y
#define AND_X(x,y) AND_##x##_##y
#define OVERFLOW_X(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW_##rc1 (rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
// stringification macros
#define STR_X(x) #x
#define STR(x) STR_X(x)
// boolean operators
#define XOR(x,y) XOR_X(x,y)
#define OR(x,y) OR_X(x,y)
#define AND(x,y) AND_X(x,y)
// carry_bit + bit1 + bit2
#define BIT_SUM(carry,bit1,bit2) XOR(carry, XOR(bit1,bit2))
// carry_bit + carry_bit_of(bit1 + bit2)
#define CARRY_SUM(carry,bit1,bit2) OR(carry, AND(bit1,bit2))
// do we have overflow, or does the result fit into 4 bits?
#define OVERFLOW_0(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) SHOW_RESULT(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define OVERFLOW_1(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) SHOW_OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
// workhorse macros which perform the addition of two 4-bit integers
#define ADD_BIN_NUM(a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_4(0,0,0,0, 0,0,0,0, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_4(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_3(rc1,rc2,rc3,AND(CARRY_SUM(0,a4,b4),OR(a4,b4)), rb1,rb2,rb3,BIT_SUM(0,a4,b4), a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_3(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_2(rc1,rc2,AND(CARRY_SUM(rc4,a3,b3),OR(a3,b3)),rc4, rb1,rb2,BIT_SUM(rc4,a3,b3),rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_2(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) ADD_BIN_NUM_1(rc1,AND(CARRY_SUM(rc3,a2,b2),OR(a2,b2)),rc3,rc4, rb1,BIT_SUM(rc3,a2,b2),rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define ADD_BIN_NUM_1(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW(AND(CARRY_SUM(rc2,a1,b1),OR(a1,b1)),rc2,rc3,rc4, BIT_SUM(rc2,a1,b1),rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) OVERFLOW_X(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4)
#define SHOW_RESULT(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) STR(a1) STR(a2) STR(a3) STR(a4) " + " STR(b1) STR(b2) STR(b3) STR(b4) " = " STR(rb1) STR(rb2) STR(rb3) STR(rb4)
#define SHOW_OVERFLOW(rc1,rc2,rc3,rc4, rb1,rb2,rb3,rb4, a1,a2,a3,a4, b1,b2,b3,b4) STR(a1) STR(a2) STR(a3) STR(a4) " + " STR(b1) STR(b2) STR(b3) STR(b4) " = overflow"
int main()
{
    printf("%s\n",
        ADD_BIN_NUM(
            0,0,0,1, // first 4-bit int
            1,0,1,1) // second 4-bit int
    );
    printf("%s\n",
        ADD_BIN_NUM(
            0,1,0,0, // first 4-bit int
            0,1,0,1) // second 4-bit int
    );
    printf("%s\n",
        ADD_BIN_NUM(
            1,0,1,1, // first 4-bit int
            0,1,1,0) // second 4-bit int
    );
    return 0;
}
This macro can easily be extended to add two 8-bit, 16-bit or even 32-bit ints.
So basically all we need is token concatenation and substitution rules to achieve amazing results with macros.
EDIT:
I have changed the formatting of the results and, more importantly, added an overflow check.
HTH!

The preprocessor operates on preprocessing tokens and the only time that it evaluates numbers is during the evaluation of a #if or #elif directive. Other than that, numbers aren't really numbers during preprocessing; they are classified as preprocessing number tokens, which aren't actually numbers.
You could evaluate basic arithmetic using token concatenation:
#define ADD_0_0 0
#define ADD_0_1 1
#define ADD_1_0 1
#define ADD_1_1 2
#define ADD(x, y) ADD##_##x##_##y
ADD(1, 0) // expands to 1
ADD(1, 1) // expands to 2
Really, though, there's no reason to do this, and it would be silly to do so (you'd have to define a huge number of macros for it to be even remotely useful).
It would be more sensible to have a macro that expands to an integral constant expression that can be evaluated by the compiler:
#define ADD(x, y) ((x) + (y))
ADD(1, 1) // expands to ((1) + (1))
The compiler will be able to evaluate the 1 + 1 expression.
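For contrast, here is a minimal sketch of the one place the preprocessor really does evaluate arithmetic, a #if directive (the macro names WIDTH, HEIGHT and LARGE_ENOUGH are illustrative, not from the question):
#define WIDTH 4
#define HEIGHT 3
#if WIDTH * HEIGHT > 10 /* the preprocessor computes 4 * 3 = 12 itself */
#define LARGE_ENOUGH 1
#else
#define LARGE_ENOUGH 0
#endif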

It is quite possible to do bounded integer addition in the preprocessor. And it is actually needed more often than one would hope: the alternative of just putting ((2) + (3)) in the program doesn't work everywhere (e.g., you can't have a variable called x((2)+(3))). The idea is simple: turn the addition into increments, which you don't mind (too much) listing out in full. E.g.,
#define INC(x) INC_ ## x
#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 10
INC(7) // => 8
Now we know how to add any number up to 1.
#define ADD(x, y) ADD_ ## x(y)
#define ADD_0(x) x
#define ADD_1(x) INC(x)
ADD(0, 2) // => 2
ADD(1, 2) // => 3
To add even larger numbers, you need some sort of "recursion".
#define ADD_2(x) ADD_1(INC(x))
#define ADD_3(x) ADD_2(INC(x))
#define ADD_4(x) ADD_3(INC(x))
#define ADD_5(x) ADD_4(INC(x))
#define ADD_6(x) ADD_5(INC(x))
#define ADD_7(x) ADD_6(INC(x))
#define ADD_8(x) ADD_7(INC(x))
#define ADD_9(x) ADD_8(INC(x))
#define ADD_10(x) ADD_9(INC(x))
ADD(5, 2) // => 7
One has to be careful with this, however. E.g., the following does not work, because ## pastes its operands together before the argument is macro-expanded:
#define ADD_2(x) INC(ADD_1(x))
ADD(2, 2) // => INC_ADD_1(2)
For any extended use of such tricks, Boost Preprocessor is your friend.
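For example, with Boost.Preprocessor the whole ladder above collapses to ready-made macros; a sketch, assuming Boost is installed:
#include <boost/preprocessor/arithmetic/add.hpp>
#include <boost/preprocessor/arithmetic/inc.hpp>
BOOST_PP_ADD(5, 2) // expands to 7
BOOST_PP_INC(7)    // expands to 8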

I know it's not the preprocessor, but if it helps, you can do it with templates. Perhaps you could use this in conjunction with a macro to achieve what you need.
#include <iostream>
using namespace std;

template <int N, int M>
struct Add
{
    static const int Value = N + M;
};

int main()
{
    cout << Add<4, 5>::Value << endl;
    return 0;
}
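For what it's worth, since C++11 the same compile-time sum is usually written as a constexpr function rather than a class template; a minimal sketch:
constexpr int add(int n, int m) { return n + m; }
static_assert(add(4, 5) == 9, "evaluated at compile time");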

Apparently, you can. If you take a look at the Boost Preprocessor library, you can do all sorts of stuff with the preprocessor, even integer addition.

The C preprocessor can evaluate conditionals containing integer arithmetic. It will not substitute arithmetic expressions and pass the result to the compiler, but the compiler will evaluate arithmetic on compile-time constants and emit the result into the binary, as long as you haven't overloaded the operators being used.
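Both halves of that claim fit in a small sketch (all names are mine):
#if 10 % 3 == 1   /* the preprocessor evaluates this arithmetic itself */
int x = 10 % 3;   /* the compiler folds this constant; the binary only ever sees 1 */
#endif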

Preprocessor macros can't really do arithmetic, but they can be usefully leveraged to do math with enumerations. The general trick is to have a macro which invokes other macros, and can be repeatedly invoked using different definitions of those other macros.
For example, something like:
#define MY_THINGS \
a_thing(FRED,4) \
a_thing(GEORGE,6) \
a_thing(HARRY,5) \
a_thing(HERMIONE,8) \
a_thing(RON,3) \
// This line left blank
#define a_thing(name,size) EN_##name}; enum {EN_SIZE_##name=(size),EN_BLAH_##name = EN_##name+(size-1),
enum {EN_FIRST_THING=0, MY_THINGS EN_TOTAL_SIZE};
#undef a_thing
That will allow one to 'allocate' a certain amount of space for each thing in e.g. an array. The math isn't done by the preprocessor, but the enumerations are still regarded as compile-time constants.
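To make the "allocation" concrete, here is one hypothetical way the generated constants could be consumed (storage, fred and george are my names, not part of the answer):
char storage[EN_TOTAL_SIZE];        // big enough for all the things
char *fred = &storage[EN_FRED];     // FRED's 4-byte region starts here
char *george = &storage[EN_GEORGE]; // GEORGE's 6-byte region starts here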

I'm pretty sure the C/C++ preprocessor just does copy and paste -- it doesn't actually evaluate any expressions. Expression evaluation is done by the compiler.
To better answer your question, you might want to post what you're trying to accomplish.

Related

Write a program using conditional compilation directives to round off the number 56 to nearest fifties

Write a program using conditional compilation directives to round off the number 56 to nearest fifties.
Expected Output: 50
Where is the mistake?
#include <iostream>
using namespace std;
#define R 50
int main()
{
    int Div;
    Div = R % 50;
    cout << "Div:: " << Div;
    printf("\n");
#if(Div<=24)
    {
        int Q;
        printf("Rounding down\n");
        Q = (int(R / 50)) * 50;
        printf("%d", Q);
    }
#else
    {
        int Q;
        printf("Rounding UP\n");
        Q = (int(R / 50) + 1) * 50;
        printf("%d", Q);
    }
#endif
}
The only sensible way to interpret the homework task is to treat 56 as an example and to require a compile-time constant result for any given integer. And the only sensible way to use conditional compilation directives here is to take negative numbers into account as well.
#ifndef NUMBER
#define NUMBER 56
#endif
#if NUMBER < 0
#define ROUNDED (NUMBER-25)/50*50
#else
#define ROUNDED (NUMBER+25)/50*50
#endif
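A quick sketch of how this could be exercised (the driver below is mine, not part of the original task):
#include <stdio.h>
int main(void)
{
    printf("%d\n", ROUNDED); /* prints 50 for the default NUMBER 56 */
    return 0;
}
Compiling with -DNUMBER=75 prints 100, and -DNUMBER=-56 prints -50, because integer division in C truncates toward zero.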
Div is a variable, so it cannot be evaluated in a preprocessing directive: preprocessing happens before your source code is compiled, long before Div has a value.
Because the preprocessor does not understand variables, any identifier it does not recognize in a #if expression is replaced by 0. This makes your program appear to work; however, if you changed the value of R to, say, 99, you would see that the code does not work.
This version without Div works in all cases.
#define R 50
#if R % 50 <= 24
int Q;
printf("Rounding down\n");
Q =(int(R /50))*50;
printf("%d",Q);
#else
int Q;
printf("Rounding UP\n");
Q =(int(R/50)+1)*50;
printf("%d",Q);
#endif
It's a really, really pointless task that you have been set. I feel embarrassed even attempting an answer.


#define used with operators [duplicate]

This question already has answers here: The need for parentheses in macros in C
I know that #define has the following syntax: #define SYMBOL string
If I write, for example
#define ALPHA 2-1
#define BETA ALPHA*2
then ALPHA = 1 but BETA = 0. (Why?)
But if I write something like this
#define ALPHA (2-1)
#define BETA ALPHA*2
then ALPHA = 1 and BETA = 2.
Can someone explain the difference between those two?
Pre-processor macros created using #define are textual substitutions.
The two examples are not equivalent. The first sets BETA to 2-1*2. The second sets BETA to (2-1)*2. It is not correct to claim that ALPHA == 1 as you do, because ALPHA is not a number - it is a free man! It's just a sequence of characters.
When parsed as C or C++, those two expressions are different (the first is the same as 2 - (1*2)).
We can show the difference, by printing the string expansion of BETA as well as evaluating it as an expression:
#ifdef PARENS
#define ALPHA (2-1)
#else
#define ALPHA 2-1
#endif
#define BETA ALPHA*2
#define str(x) str2(x)
#define str2(x) #x
#include <stdio.h>
int main()
{
    printf("%s = %d\n", str(BETA), BETA);
    return 0;
}
Compile the above with and without PARENS defined to see the difference:
(2-1)*2 = 2
2-1*2 = 0
The consequence of this is that when using #define to create macros that expand to expressions, it's generally a good idea to use many more parentheses than you would normally need, as you don't know the context in which your values will be expanded. For example:
#define ALPHA (2-1)
#define BETA ((ALPHA)*2)
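To see why the outer parentheses in BETA matter too, consider a division context (BETA_BARE and BETA_SAFE are made-up names for the two variants):
#define ALPHA (2-1)
#define BETA_BARE ALPHA*2      /* no outer parentheses */
#define BETA_SAFE ((ALPHA)*2)
int a = 10 / BETA_BARE; /* 10 / (2-1)*2 parses as (10/(2-1))*2 == 20 */
int b = 10 / BETA_SAFE; /* 10 / ((2-1)*2) == 5 */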
Macros (#define ...) are only text replacement.
With this version:
#define ALPHA 2-1
#define BETA ALPHA*2
The preprocessor replaces BETA with ALPHA*2, then replaces ALPHA with 2-1. When the macro expansion is over, BETA has been replaced by 2-1*2, which (due to operator precedence) equals 2-(1*2)=0.
When you add the parentheses around the "value of ALPHA" (ALPHA doesn't really have a value, since macros are just text replacement), you change the order of evaluation of the operations. Now BETA is replaced by (2-1)*2, which equals 2.
Order of operations. The first example becomes 2-1*2, which equals 2-2, i.e. 0.
The second example, on the other hand, expands to (2-1)*2, which evaluates to 1*2, i.e. 2.
In the first example:
#define ALPHA 2-1
#define BETA ALPHA*2
ALPHA is replaced directly with whatever text you gave it (in this case, 2-1).
This leads to BETA expanding into (becoming) 2-1*2, which evaluates to 0, as described above.
In the second example:
#define ALPHA (2-1)
#define BETA ALPHA*2
ALPHA (within the definition of BETA) expands into the text it was set to, (2-1), which then causes BETA to expand into (2-1)*2 wherever it is used.
In case you're having trouble with order of operations, you can use the acronym PEMDAS (Parentheses, Exponents, Multiplication, Division, Addition, Subtraction), itself remembered as "Please Excuse My Dear Aunt Sally". Operations earlier in the acronym are always done before later ones, except that multiplication and division have equal priority and are evaluated left to right, and likewise for addition and subtraction.
Macros in C/C++ are just text substitutions, not functions. A macro is there just to replace its name in the program text with its contents before the compiler even tries to analyze the code. So in the first case the compiler will see this:
BETA ==> ALPHA * 2 ==> 2 - 1 * 2 ==> compiler ==> 0
printf("beta=%d\n", BETA); ==> printf("beta=%d\n", 2 - 1 * 2);
In the second
BETA ==> ALPHA * 2 ==> (2 - 1) * 2 ==> compiler ==> 2
printf("beta=%d\n", BETA); ==> printf("beta=%d\n", (2 - 1) * 2);

Is there any existing solution to make portable ordered bit fields?

I'd like to have a way to specify bitmaps, which would look like this:
struct Bitmap
{
    unsigned foo: 2;
    unsigned bar: 5;
    unsigned baz: 3;
};
and many similar structures, but I need the bit fields to have a predictable order. C++ doesn't guarantee any order or packing of bit fields, so I have to write special code that implements this using bitwise operations. For the structure above, that could look as follows:
class Bitmap
{
    unsigned value;
public:
    unsigned foo() { return value&0x3; }
    unsigned bar() { return (value>>2)&0x1f; }
    unsigned baz() { return (value>>7)&0x7; }
    void set_foo(unsigned new_foo) { value=(value&~0x3)|new_foo; }
    void set_bar(unsigned new_bar) { value=(value&~(0x1f<<2))|(new_bar<<2); }
    void set_baz(unsigned new_baz) { value=(value&~(0x7<<7))|(new_baz<<7); }
    Bitmap(unsigned newFoo,unsigned newBar,unsigned newBaz)
        : value(newFoo|(newBar<<2)|(newBaz<<7))
    {}
};
Writing such code for many different bitmaps is a repetitive task, which is why I'd like to automate it. I might use templates for this, but then I wouldn't be able to name my bit fields differently for each structure (or I'd have to write even more code to wrap the generic structure and provide the names).
Ideally I'd like to have some macro to use similarly to this:
DEFINE_BITMAP(Bitmap,foo,2,bar,5,baz,3);
Bitmap myBits(1,9,5);
doStuff(myBits.bar());
where number of fields can differ between invocations of DEFINE_BITMAP, as can the widths.
So before I start inventing this wheel: has it already been done? If yes, what to look for?
Okay, so I admit I underestimated this question somewhat. You indicated that you want your macro to work with a variable number of bitfield specifications. That takes a little work... but here you go.
/* Starting with this stuff:
* https://github.com/pfultz2/Cloak/wiki/C-Preprocessor-tricks,-tips,-and-idioms
*/
#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__
#define IIF(c) PRIMITIVE_CAT(IIF_, c)
#define IIF_0(t, ...) __VA_ARGS__
#define IIF_1(t, ...) t
#define COMPL(b) PRIMITIVE_CAT(COMPL_, b)
#define COMPL_0 1
#define COMPL_1 0
#define INC(x) PRIMITIVE_CAT(INC_, x)
#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 10
#define INC_10 11
#define INC_11 12
#define INC_12 13
#define INC_13 14
#define INC_14 15
#define INC_15 16
#define INC_16 17
#define INC_17 18
#define INC_18 19
#define INC_19 20
#define DEC(x) PRIMITIVE_CAT(DEC_, x)
#define DEC_0 0
#define DEC_1 0
#define DEC_2 1
#define DEC_3 2
#define DEC_4 3
#define DEC_5 4
#define DEC_6 5
#define DEC_7 6
#define DEC_8 7
#define DEC_9 8
#define DEC_10 9
#define DEC_11 10
#define DEC_12 11
#define DEC_13 12
#define DEC_14 13
#define DEC_15 14
#define DEC_16 15
#define DEC_17 16
#define DEC_18 17
#define DEC_19 18
#define DEC_20 19
#define CHECK_N(x, n, ...) n
#define CHECK(...) CHECK_N(__VA_ARGS__, 0,)
#define PROBE(x) x, 1,
#define IS_PAREN(x) CHECK(IS_PAREN_PROBE x)
#define IS_PAREN_PROBE(...) PROBE(~)
#define NOT(x) CHECK(PRIMITIVE_CAT(NOT_, x))
#define NOT_0 PROBE(~)
#define BOOL(x) COMPL(NOT(x))
#define IF(c) IIF(BOOL(c))
/* We'll add this stuff: */
#define NUM_ARGS1(_20,_19,_18,_17,_16,_15,_14,_13,_12,_11,_10,_9,_8,_7,_6,_5,_4,_3,_2,_1, n, ...) n
#define NUM_ARGS0(...) NUM_ARGS1(__VA_ARGS__,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0)
#define NUM_ARGS(...) IF(DEC(NUM_ARGS0(__VA_ARGS__)))(NUM_ARGS0(__VA_ARGS__),IF(IS_PAREN(__VA_ARGS__ ()))(0,1))
/* Something to extract things from parentheses */
#define GET_1ST(a) GET_1ST_0 a
#define GET_1ST_0(a,b) a
#define GET_2ND(a) GET_2ND_0 a
#define GET_2ND_0(a,b) b
/* And our bitfield builders */
#define BITFIELDS_MAKE_GETTER_SETTER( structname, name, bits, shift ) \
unsigned name() const { return (value >> (shift)) & ((1U << (bits)) - 1); } \
structname& name( unsigned field ) { value &= ~(((1U << (bits)) - 1) << (shift)); value |= field << (shift); return *this; }
#define BITFIELDS( name, ... ) CAT(BITFIELDS_,NUM_ARGS(__VA_ARGS__)) ( name, 0, __VA_ARGS__ )
#define BITFIELDS_0( name, N )
#define BITFIELDS_1( name, N, a ) BITFIELDS_MAKE_GETTER_SETTER(name, GET_1ST(a), GET_2ND(a), N)
#define BITFIELDS_2( name, N, a,b ) BITFIELDS_1(name,N,a) BITFIELDS_1(name,N+GET_2ND(a),b)
#define BITFIELDS_3( name, N, a,b,c ) BITFIELDS_1(name,N,a) BITFIELDS_2(name,N+GET_2ND(a),b,c)
#define BITFIELDS_4( name, N, a,b,c,d ) BITFIELDS_1(name,N,a) BITFIELDS_3(name,N+GET_2ND(a),b,c,d)
#define BITFIELDS_5( name, N, a,b,c,d,e ) BITFIELDS_1(name,N,a) BITFIELDS_4(name,N+GET_2ND(a),b,c,d,e)
#define BITFIELDS_6( name, N, a,b,c,d,e,f ) BITFIELDS_1(name,N,a) BITFIELDS_5(name,N+GET_2ND(a),b,c,d,e,f)
#define BITFIELDS_7( name, N, a,b,c,d,e,f,g ) BITFIELDS_1(name,N,a) BITFIELDS_6(name,N+GET_2ND(a),b,c,d,e,f,g)
#define BITFIELDS_8( name, N, a,b,c,d,e,f,g,h ) BITFIELDS_1(name,N,a) BITFIELDS_7(name,N+GET_2ND(a),b,c,d,e,f,g,h)
#define BITFIELDS_9( name, N, a,b,c,d,e,f,g,h,i ) BITFIELDS_1(name,N,a) BITFIELDS_8(name,N+GET_2ND(a),b,c,d,e,f,g,h,i)
#define BITFIELDS_10( name, N, a,b,c,d,e,f,g,h,i,j ) BITFIELDS_1(name,N,a) BITFIELDS_9(name,N+GET_2ND(a),b,c,d,e,f,g,h,i,j)
/* Here's our bitfield class constructor */
#define MAKE_BITFIELD_STRUCT( name, ... ) \
struct name \
{ \
unsigned long long value; \
BITFIELDS( name, __VA_ARGS__ ) \
}
Once you have that, you can use it easily enough:
#include <iostream>
// Define struct B { ... }
// Fields are specified left-to-right as LSB-to-MSB.
// Each field is given by its name and the number of bits it occupies.
MAKE_BITFIELD_STRUCT( Bitmap, (foo,2), (bar,5), (baz,3) );
int main()
{
    // Construct a Bitmap
    Bitmap b = Bitmap().foo(1).bar(15).baz(7);
    // Prove its worth
    std::cout << std::hex << b.value << "\n"; // produces "3bd"
}
There exist usable facilities in Boost.Preprocessor to do this as well, but I find using them to be something like walking among dragons...
Notice that the code I provided has the following limitations:
You can declare a maximum of 10 bitfields. If you want to use more, you'll have to update the BITFIELDS_N definitions (by adding more of them, up to 20).
Each bitfield has a maximum size of an int, and all combined bitfields have a maximum size of a long long int. If you need more, consider updating the definition to use a std::bitset.
Notice also the named parameter idiom applied with the setters.
Hope this helps.
1) Is it possible to make portable bitfields?
No. IMO there is no good reason why the standard leaves it to whimsy, but it is what it is.
2) So before I start inventing this wheel: has it already been done? If yes, what to look for?
The only suggestion I can give is to forget bitfields and just use a straight-up bitmask. There are a variety of tricks you can use to create bitmasked integers; using macro tricks is a good one. Just be careful to keep the macros contained.
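For illustration, a minimal sketch of the "straight-up bitmask" approach, with hand-written shift and mask constants standing in for the question's Bitmap fields (all names are mine):
enum {
    FOO_SHIFT = 0, FOO_MASK = 0x3,  /* 2 bits */
    BAR_SHIFT = 2, BAR_MASK = 0x1f, /* 5 bits */
    BAZ_SHIFT = 7, BAZ_MASK = 0x7   /* 3 bits */
};
inline unsigned get_bar(unsigned v) { return (v >> BAR_SHIFT) & BAR_MASK; }
inline unsigned set_bar(unsigned v, unsigned f) { return (v & ~(BAR_MASK << BAR_SHIFT)) | ((f & BAR_MASK) << BAR_SHIFT); }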
For C++ stuff, you can also check out this article on Using Enum Classes as Bitfields. (I have not used it myself, but it looks cool.)

Can I use a binary literal in C or C++?

I need to work with a binary number.
I tried writing:
const char x = 00010000;
But it didn't work.
I know that I can use a hexadecimal number that has the same value as 00010000, but I want to know if there is a type in C++ for binary numbers, and if there isn't, is there another solution for my problem?
If you are using GCC then you can use a GCC extension (which is included in the C++14 standard) for this:
int x = 0b00010000;
You can use binary literals. They are standardized in C++14. For example,
int x = 0b11000;
Support in GCC
Support in GCC began in GCC 4.3 (see https://gcc.gnu.org/gcc-4.3/changes.html) as extensions to the C language family (see https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html#C-Extensions), but since GCC 4.9 it is now recognized as either a C++14 feature or an extension (see Difference between GCC binary literals and C++14 ones?)
Support in Visual Studio
Support in Visual Studio started in Visual Studio 2015 Preview (see https://www.visualstudio.com/news/vs2015-preview-vs#C++).
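C++14 also added the single-quote digit separator, which pairs nicely with binary literals; for example:
int mask = 0b1100'0011; // == 0xC3; the ' is purely visual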
template <unsigned long N>
struct bin {
    enum { value = (N % 10) + 2 * bin<N / 10>::value };
};

template <>
struct bin<0> {
    enum { value = 0 };
};

// ...
std::cout << bin<1000>::value << '\n';
The leftmost digit of the literal still has to be 1 (a leading 0 would turn it into an octal literal), but nonetheless.
You can use BOOST_BINARY while waiting for C++0x. :) BOOST_BINARY arguably has an advantage over template implementation insofar as it can be used in C programs as well (it is 100% preprocessor-driven.)
To do the converse (i.e. print out a number in binary form), you can use the non-portable itoa function, or implement your own.
Unfortunately you cannot do base 2 formatting with STL streams (since setbase will only honour bases 8, 10 and 16), but you can use either a std::string version of itoa, or (the more concise, yet marginally less efficient) std::bitset.
#include <boost/utility/binary.hpp>
#include <stdio.h>
#include <stdlib.h>
#include <bitset>
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    unsigned short b = BOOST_BINARY( 10010 );
    char buf[sizeof(b)*8+1];
    printf("hex: %04x, dec: %u, oct: %06o, bin: %16s\n", b, b, b, itoa(b, buf, 2));
    cout << setfill('0') <<
        "hex: " << hex << setw(4) << b << ", " <<
        "dec: " << dec << b << ", " <<
        "oct: " << oct << setw(6) << b << ", " <<
        "bin: " << bitset< 16 >(b) << endl;
    return 0;
}
produces:
hex: 0012, dec: 18, oct: 000022, bin: 10010
hex: 0012, dec: 18, oct: 000022, bin: 0000000000010010
Also read Herb Sutter's The String Formatters of Manor Farm for an interesting discussion.
A few compilers (usually ones for microcontrollers) have a special feature that recognizes binary literals by the prefix "0b..." preceding the number, but most compilers (following the C/C++ standards) don't have such a feature. For that case, here is my alternative solution:
#define B_0000 0
#define B_0001 1
#define B_0010 2
#define B_0011 3
#define B_0100 4
#define B_0101 5
#define B_0110 6
#define B_0111 7
#define B_1000 8
#define B_1001 9
#define B_1010 a
#define B_1011 b
#define B_1100 c
#define B_1101 d
#define B_1110 e
#define B_1111 f
#define _B2H(bits) B_##bits
#define B2H(bits) _B2H(bits)
#define _HEX(n) 0x##n
#define HEX(n) _HEX(n)
#define _CCAT(a,b) a##b
#define CCAT(a,b) _CCAT(a,b)
#define BYTE(a,b) HEX( CCAT(B2H(a),B2H(b)) )
#define WORD(a,b,c,d) HEX( CCAT(CCAT(B2H(a),B2H(b)),CCAT(B2H(c),B2H(d))) )
#define DWORD(a,b,c,d,e,f,g,h) HEX( CCAT(CCAT(CCAT(B2H(a),B2H(b)),CCAT(B2H(c),B2H(d))),CCAT(CCAT(B2H(e),B2H(f)),CCAT(B2H(g),B2H(h)))) )
// Using example
char b = BYTE(0100,0001); // Equivalent to b = 65; or b = 'A'; or b = 0x41;
unsigned int w = WORD(1101,1111,0100,0011); // Equivalent to w = 57155; or w = 0xdf43;
unsigned long int dw = DWORD(1101,1111,0100,0011,1111,1101,0010,1000); //Equivalent to dw = 3745774888; or dw = 0xdf43fd28;
Disadvantages (they are not such big ones):
The binary digits have to be grouped 4 by 4;
The binary literals can only be unsigned integer numbers;
Advantages:
Totally preprocessor-driven: no processor time is spent on pointless runtime operations (like "?.. :..", "<<", "+") in the executable program, which matters when the conversion would otherwise be performed hundreds of times in the final application;
It works in plain C compilers as well as in C++ (the template+enum solution works only in C++ compilers);
It is limited only by the maximum length of a literal constant. A solution that resolves the value through an enum tends to hit limits much earlier (usually 8 bits: 0-255), whereas literal constants let the compiler accept far greater numbers;
Some other solutions demand an exaggerated number of constant definitions (too many defines, in my opinion), including long or multiple header files, in most cases neither easily readable nor understandable, and making the project unnecessarily confusing and bloated (like the one using BOOST_BINARY());
Simplicity of the solution: easily read, understood and adjusted for other cases (it could be extended to group 8 by 8 too);
This thread may help.
/* Helper macros */
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0) \
+((x&0x000000F0LU)?2:0) \
+((x&0x00000F00LU)?4:0) \
+((x&0x0000F000LU)?8:0) \
+((x&0x000F0000LU)?16:0) \
+((x&0x00F00000LU)?32:0) \
+((x&0x0F000000LU)?64:0) \
+((x&0xF0000000LU)?128:0)
/* User macros */
#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) \
+ B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) \
+ ((unsigned long)B8(db2)<<16) \
+ ((unsigned long)B8(db3)<<8) \
+ B8(dlsb))
#include <stdio.h>
int main(void)
{
    // 261, evaluated at compile-time
    unsigned const number = B16(00000001,00000101);
    printf("%d \n", number);
    return 0;
}
It works! (All the credit goes to Tom Torfs.)
The C++ over-engineering mindset is already well accounted for in the other answers here. Here's my attempt at doing it with a C, keep-it-simple-ffs mindset:
unsigned char x = 0xF; // binary: 00001111
As already answered, the C standards have no way to directly write binary numbers. There are compiler extensions, however, and apparently C++14 includes the 0b prefix for binary. (Note that this answer was originally posted in 2010.)
One popular workaround is to include a header file with helper macros. One easy option is also to generate a file that includes macro definitions for all 8-bit patterns, e.g.:
#define B00000000 0
#define B00000001 1
#define B00000010 2
…
This results in only 256 #defines, and if larger than 8-bit binary constants are needed, these definitions can be combined with shifts and ORs, possibly with helper macros (e.g., BIN16(B00000001,B00001010)). (Having individual macros for every 16-bit, let alone 32-bit, value is not plausible.)
Of course the downside is that this syntax requires writing all the leading zeroes, but this may also make it clearer for uses like setting bit flags and contents of hardware registers. For a function-like macro resulting in a syntax without this property, see bithacks.h linked above.
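A hypothetical BIN16 combiner for that scheme could be as simple as this (BIN16 and the B-prefixed names are the generated macros described above, not part of any standard header):
#define BIN16(msb, lsb) (((msb) << 8) | (lsb))
unsigned v = BIN16(B00000001, B00001010); /* == 0x010a */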
C does not have native notation for pure binary numbers. Your best bet here would be either octal (e.g. 07777) or hexadecimal (e.g. 0xfff).
You can use the function found in this question to get up to 22 bits in C++. Here's the code from the link, suitably edited:
template <unsigned long long N>
struct binary
{
    enum { value = (N % 8) + 2 * binary<N / 8>::value };
};

template <>
struct binary<0>
{
    enum { value = 0 };
};
So you can do something like binary<0101011011>::value.
The smallest unit you can work with is a byte (which is of char type). You can work with bits though by using bitwise operators.
As for integer literals, you can only work with decimal (base 10), octal (base 8) or hexadecimal (base 16) numbers. There are no binary (base 2) literals in C nor C++.
Octal numbers are prefixed with 0 and hexadecimal numbers are prefixed with 0x. Decimal numbers have no prefix.
In C++0x you'll be able to do what you want by the way via user defined literals.
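Indeed, that is how it turned out; here is a sketch of a raw user-defined literal as it became possible in C++11 (the name _b and the helper function are mine):
constexpr unsigned long long parse_bin(const char *s, unsigned long long acc = 0)
{
    // a raw literal operator receives the token as a string of '0'/'1' characters
    return *s == '\0' ? acc : parse_bin(s + 1, acc * 2 + (*s - '0'));
}
constexpr unsigned long long operator"" _b(const char *s) { return parse_bin(s); }
static_assert(11010_b == 26, "evaluated at compile time");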
Based on some other answers, but this one will reject programs with illegal binary literals. Leading zeroes are optional.
template<bool> struct BinaryLiteralDigit;
template<> struct BinaryLiteralDigit<true> {
    static bool const value = true;
};

template<unsigned long long int OCT, unsigned long long int HEX>
struct BinaryLiteral {
    enum {
        value = (BinaryLiteralDigit<(OCT%8 < 2)>::value && BinaryLiteralDigit<(HEX >= 0)>::value
            ? (OCT%8) + (BinaryLiteral<OCT/8, 0>::value << 1)
            : -1)
    };
};

template<>
struct BinaryLiteral<0, 0> {
    enum {
        value = 0
    };
};
#define BINARY_LITERAL(n) BinaryLiteral<0##n##LU, 0x##n##LU>::value
Example:
#define B BINARY_LITERAL
#define COMPILE_ERRORS 0
int main (int argc, char ** argv) {
    int _0s[] = { 0, B(0), B(00), B(000) };
    int _1s[] = { 1, B(1), B(01), B(001) };
    int _2s[] = { 2, B(10), B(010), B(0010) };
    int _3s[] = { 3, B(11), B(011), B(0011) };
    int _4s[] = { 4, B(100), B(0100), B(00100) };
    int neg8s[] = { -8, -B(1000) };
#if COMPILE_ERRORS
    int errors[] = { B(-1), B(2), B(9), B(1234567) };
#endif
    return 0;
}
You can also use inline assembly (MSVC-style, x86) like this:
int i;
__asm {
    mov eax, 00000000000000000000000000000000b
    mov i, eax
}
std::cout << i;
Okay, it might be somewhat overkill, but it works.
The "type" of a binary number is the same as any decimal, hex or octal number: int (or even char, short, long long).
When you assign a constant, you can't assign it with 11011011 (curiously and unfortunately), but you can use hex. Hex is a little easier to mentally translate. Chunk in nibbles (4 bits) and translate to a character in [0-9a-f].
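For example, applying that nibble-chunking to the asker's 11011011:
// 1101 -> d, 1011 -> b
unsigned char x = 0xDB; // binary 11011011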
From C++14 you can use Binary Literals, now they are part of the language:
unsigned char a = 0b00110011;
You can use a bitset
bitset<8> b(string("00010000"));
int i = (int)(b.to_ulong());
cout << i;
I extended the good answer given by @renato-chandelier by adding support for:
_NIBBLE_(…) – 4 bits, 1 nibble as argument
_BYTE_(…) – 8 bits, 2 nibbles as arguments
_SLAB_(…) – 12 bits, 3 nibbles as arguments
_WORD_(…) – 16 bits, 4 nibbles as arguments
_QUINTIBBLE_(…) – 20 bits, 5 nibbles as arguments
_DSLAB_(…) – 24 bits, 6 nibbles as arguments
_SEPTIBBLE_(…) – 28 bits, 7 nibbles as arguments
_DWORD_(…) – 32 bits, 8 nibbles as arguments
I am actually not so sure about the terms “quintibble” and “septibble”. If anyone knows any alternative please let me know.
Here is the macro rewritten:
#define __CAT__(A, B) A##B
#define _CAT_(A, B) __CAT__(A, B)
#define __HEX_0000 0
#define __HEX_0001 1
#define __HEX_0010 2
#define __HEX_0011 3
#define __HEX_0100 4
#define __HEX_0101 5
#define __HEX_0110 6
#define __HEX_0111 7
#define __HEX_1000 8
#define __HEX_1001 9
#define __HEX_1010 a
#define __HEX_1011 b
#define __HEX_1100 c
#define __HEX_1101 d
#define __HEX_1110 e
#define __HEX_1111 f
#define _NIBBLE_(N1) _CAT_(0x, _CAT_(__HEX_, N1))
#define _BYTE_(N1, N2) _CAT_(_NIBBLE_(N1), _CAT_(__HEX_, N2))
#define _SLAB_(N1, N2, N3) _CAT_(_BYTE_(N1, N2), _CAT_(__HEX_, N3))
#define _WORD_(N1, N2, N3, N4) _CAT_(_SLAB_(N1, N2, N3), _CAT_(__HEX_, N4))
#define _QUINTIBBLE_(N1, N2, N3, N4, N5) _CAT_(_WORD_(N1, N2, N3, N4), _CAT_(__HEX_, N5))
#define _DSLAB_(N1, N2, N3, N4, N5, N6) _CAT_(_QUINTIBBLE_(N1, N2, N3, N4, N5), _CAT_(__HEX_, N6))
#define _SEPTIBBLE_(N1, N2, N3, N4, N5, N6, N7) _CAT_(_DSLAB_(N1, N2, N3, N4, N5, N6), _CAT_(__HEX_, N7))
#define _DWORD_(N1, N2, N3, N4, N5, N6, N7, N8) _CAT_(_SEPTIBBLE_(N1, N2, N3, N4, N5, N6, N7), _CAT_(__HEX_, N8))
And here is Renato's using example:
char b = _BYTE_(0100, 0001); /* equivalent to b = 65; or b = 'A'; or b = 0x41; */
unsigned int w = _WORD_(1101, 1111, 0100, 0011); /* equivalent to w = 57155; or w = 0xdf43; */
unsigned long int dw = _DWORD_(1101, 1111, 0100, 0011, 1111, 1101, 0010, 1000); /* Equivalent to dw = 3745774888; or dw = 0xdf43fd28; */
Just use the standard library in C++:
#include <bitset>
You need a variable of type std::bitset:
std::bitset<8ul> x;
x = std::bitset<8>(10);
for (int i = x.size() - 1; i >= 0; i--) {
    std::cout << x[i];
}
In this example, I stored the binary form of 10 in x.
8ul defines the size of your bits, so 7ul means seven bits and so on.
Here is my function, without adding the Boost library:
usage: BOOST_BINARY(10001); // note: no leading zeroes, or the literal is parsed as octal and the digit arithmetic below breaks
int BOOST_BINARY(int a){
    int b = 0;
    for (int i = 0; i < 8; i++){
        b += a % 10 << i;
        a = a / 10;
    }
    return b;
}
I nominate my solution:
#define B(x) \
((((x) >> 0) & 0x01) \
| (((x) >> 2) & 0x02) \
| (((x) >> 4) & 0x04) \
| (((x) >> 6) & 0x08) \
| (((x) >> 8) & 0x10) \
| (((x) >> 10) & 0x20) \
| (((x) >> 12) & 0x40) \
| (((x) >> 14) & 0x80))
const uint8 font6[] = {
    B(00001110), //[00]
    B(00010001),
    B(00000001),
    B(00000010),
    B(00000100),
    B(00000000),
    B(00000100),
    B(00000000),
    // ... (the table continues)
};
I define 8-bit fonts and graphics this way, but it could work with wider fonts as well. The macro B can be redefined to produce the 0b format, if the compiler supports it.
Operation: the binary numbers are interpreted as octal, and then the bits are masked and shifted together. The intermediate value is limited by the largest integer the compiler can work with; I guess 64 bits should be OK.
It's entirely processed by the compiler, no code needed runtime.
Binary constants are to be standardised in C23. As of writing, 6.4.4.1/4 of the latest C2x draft standard says of the proposed notation:
[...] A binary constant consists of the prefix 0b or 0B followed by a sequence of the digits 0 or 1.
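If your compiler already implements it, that looks just like the C++14 form (a sketch, assuming a C23-capable compiler):
unsigned char x = 0b00010000; /* == 16 */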
C++ provides a standard template named std::bitset. Try it if you like.
You could try using an array of bool:
bool i[8] = {0,0,1,1,0,1,0,1};