Automatically generate bit positions - c++

I have some bitmasks that look like this:
namespace bits {
const unsigned bit_one = 1u << 0;
const unsigned bit_two = 1u << 1;
const unsigned bit_three = 1u << 2;
......
const unsigned bit_ten = 1u << 9;
}
except that there are more bits and the names are actually meaningful flags for my program. But sometimes I remove bits, add bits, regroup similar bits, etc. Ideally I could do something like this:
namespace bits {
const unsigned bit_one = 1u << COUNTER;
const unsigned bit_two = 1u << COUNTER;
const unsigned bit_three = 1u << COUNTER;
......
const unsigned bit_ten = 1u << COUNTER;
}
Is there some template / macro to automate this process? I know about __COUNTER__, but this is a header, so if it gets included in some other source that also uses __COUNTER__ it may break. I'm working in a framework that is pre-C++11, so while upgrading my compiler will happen eventually, a solution that doesn't require C++11 would be ideal.

Why not use a macro with an argument?
#define BIT(n) (1 << (n))
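For instance, the flag constants from the question could then read like this (just a sketch of the macro in use, with 1u as in the question; it doesn't renumber anything automatically, it only keeps the positions in one obvious column):
#define BIT(n) (1u << (n))

namespace bits {
    const unsigned bit_one   = BIT(0);
    const unsigned bit_two   = BIT(1);
    const unsigned bit_three = BIT(2);
}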

You can use the __LINE__ macro, which is part of standard C and C++. Use with caution and document your intent so that somebody else reading the code will understand.
#include <iostream>
namespace Bits
{
const unsigned Base = __LINE__ + 1; // the constants below must each sit on the very next consecutive line
const unsigned BitOne = 1u << (__LINE__ - Base);
const unsigned BitTwo = 1u << (__LINE__ - Base);
const unsigned BitThree = 1u << (__LINE__ - Base);
}
int main(void)
{
std::cout << Bits::BitOne << '\n';
std::cout << Bits::BitTwo << '\n';
std::cout << Bits::BitThree << '\n';
return 0;
}

The following will do the trick:
#define NEXT_MASK(x) \
DUMMY1_##x, \
x = (1U << DUMMY1_##x), \
DUMMY2_##x = DUMMY1_##x
enum {
NEXT_MASK(one),
NEXT_MASK(two),
NEXT_MASK(three),
NEXT_MASK(four)
};
#include <stdio.h>
int main()
{
printf("%x\n", one);
printf("%x\n", two);
printf("%x\n", three);
printf("%x\n", four);
return 0;
}
The program will emit:
1
2
4
8
The idea is that the first dummy enumerator advances by one from the previous value, x becomes the mask for that position, and the second dummy restores the running count so that the next macro invocation has the right starting point.
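For reference, here is roughly what the first two invocations expand to, with the resulting enumerator values shown in comments:
enum {
    DUMMY1_one,                  // first enumerator: 0
    one = (1U << DUMMY1_one),    // 1u << 0 == 1
    DUMMY2_one = DUMMY1_one,     // restores the count to 0
    DUMMY1_two,                  // DUMMY2_one + 1 == 1
    two = (1U << DUMMY1_two),    // 1u << 1 == 2
    DUMMY2_two = DUMMY1_two      // restores the count to 1
    /* ... and so on for three and four */
};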

The classic solution would be an enumeration of the fields:
enum foo_flags {
alpha,
beta,
gamma,
count
};
and then using either std::bitset<count> or the BIT macro as H2CO3 suggested:
BIT(alpha)
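Put together, that could look roughly like this (a sketch; alpha_mask and beta_mask are just illustrative names):
#include <bitset>

#define BIT(n) (1u << (n))

enum foo_flags { alpha, beta, gamma, count };

// Masks are derived from the enumerator positions, so inserting a new
// enumerator before 'count' renumbers everything after it automatically.
const unsigned alpha_mask = BIT(alpha);
const unsigned beta_mask  = BIT(beta);

int main() {
    std::bitset<count> flags;   // alternatively, index a bitset by position
    flags.set(alpha);
    return (flags.test(alpha) && beta_mask == 2u) ? 0 : 1;
}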

Microsoft C++ has the __COUNTER__ predefined macro (GCC and Clang provide it as well), so you could...
#define NEXTBIT (1u << __COUNTER__)
namespace bits {
const unsigned bit_one = NEXTBIT;
const unsigned bit_two = NEXTBIT;
const unsigned bit_three = NEXTBIT;
}

Related

How to assign a 32-bit unsigned integer to a bit field containing 32 bits

I am trying to create a bit-field struct which has 32 total bits, but when I try to assign a 32-bit number to it, I get this error:
Implicit truncation from 'unsigned int' to bit-field changes value from 4278190080 to 0
Here is my struct and how I'm trying to use it
struct Color32 {
uint32_t a : 8;
uint32_t r : 8;
uint32_t g : 8;
uint32_t b : 8;
};
Color32 BLACK = {0xFF000000}; // this line has the compilation error
I see other questions around bit-field assignment, but they all seem to use bit-wise operations to set the individual fields.
There's also this reference which has the following sample, which seems to be the same way I'm using it, only mine won't compile:
#include <iostream>
struct S {
// three-bit unsigned field,
// allowed values are 0...7
unsigned int b : 3;
};
int main()
{
S s = {6};
++s.b; // store the value 7 in the bit field
std::cout << s.b << '\n';
++s.b; // the value 8 does not fit in this bit field
std::cout << s.b << '\n'; // formally implementation-defined, typically 0
}
You could use aggregate initialization here
Color32 BLACK = {0xFF, 0x00, 0x00, 0x00};
By the way, I would suggest modifying your Color32 struct to the following, which gives the same 8-bits-per-channel layout without bit fields:
struct Color32 {
uint8_t a;
uint8_t r;
uint8_t g;
uint8_t b;
};
Something like this will sort of give you the best of both worlds:
struct Color32 {
union {
uint32_t color;
struct {
uint32_t b : 8;
uint32_t g : 8;
uint32_t r : 8;
uint32_t a : 8;
};
};
};
// will construct using single value
Color32 test{ 0x01020304 };
Color32 black{0xff000000 };
// can assign to individual fields
test.a = 0x04;
test.r = 0x03;
test.g = 0x02;
test.b = 0x01;
// can assign to the whole value like this.
test.color = 0xff000000;
test.color = black.color;
An issue with this is that the order of b, g, r, a in the struct may depend on your specific compiler and target. For VS2017 targeting Windows, the order shown produces the expected results. I believe there may be a way to force the order somehow, but I am not familiar with how to do it.
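If that layout concern matters, one portable alternative (a sketch, not a drop-in replacement for the union; the helper names are made up for the example) is to keep a single uint32_t and pack/unpack the channels with shifts, so the channel positions are fixed by the code rather than by the compiler's bit-field and byte-order rules:
#include <cstdint>

inline std::uint32_t make_color(std::uint8_t a, std::uint8_t r,
                                std::uint8_t g, std::uint8_t b) {
    // positions are defined by the shifts, identically on every compiler
    return (std::uint32_t(a) << 24) | (std::uint32_t(r) << 16) |
           (std::uint32_t(g) << 8)  |  std::uint32_t(b);
}

inline std::uint8_t alpha_of(std::uint32_t c) { return std::uint8_t(c >> 24); }
inline std::uint8_t red_of  (std::uint32_t c) { return std::uint8_t(c >> 16); }
inline std::uint8_t green_of(std::uint32_t c) { return std::uint8_t(c >> 8);  }
inline std::uint8_t blue_of (std::uint32_t c) { return std::uint8_t(c);       }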
Bitfield or not, your type has four members, not one.
You seem to be trying to treat it as a union.
Initialise each member individually as you would with any other type, or switch to a union (and then rely on type-punning as many do, with the usual caveats).
The counter-example you give is not the same, as it is a UDT with a single member and a single value in the initialiser; since the number of members given matches, everything is fine there.
After diving deeper into the topic, I've found that bit fields aren't very useful without bitwise operators and suitable constructors, and that their behaviour depends heavily on the platform.
The code below was tested with Cygwin (flags: -Wno-unused-variable -O0 -ggdb) on Windows 7.
Version 1: union
This is a basic implementation without any bit fields; it is the most common layout for a 4-byte colour space.
#include <iostream>
union c_space32{
uint32_t space;
uint8_t channels[4];
};
int main(){
{ // just an anonymous scope to keep things clear
union c_space32 temp = {0xff00fe32};
std::cout << "sizeof : " << sizeof( union c_space32 ) << "\n\n";
std::cout << (int)temp.channels[1] << "\t" << std::hex << temp.space << "\n";
++temp.channels[1];
std::cout << (int)temp.channels[1] << "\t" << std::hex << temp.space << "\n";
++temp.channels[1];
std::cout << (int)temp.channels[1] << "\t" << std::hex << temp.space << "\n";
}
return 0;}
The union behaves as a normal colour space, and each uint8_t element of the union behaves as a separate byte, so changing a value in c_space32.channels doesn't affect c_space32.space outside that one byte. This is the output I get:
sizeof : 4
fe ff00fe32
ff ff00ff32
0 ff000032
Version 2: bit-fields
The issue with bit fields (besides patchy documentation in some cases) is that their size can easily change, and that their layout and the platform's byte order are implementation-dependent, so the compiler's idea of how the bits are arranged may not match our intuition. Let me give some examples for anyone who wants to dig into this topic.
#include <iostream>
#include <bitset>
struct temp1{
uint8_t a:1;
temp1(uint8_t val){ // just so i can assign it
this->a = (val & 0x1 ); // this is needed to avoid truncated warning
}
};
int main(){
struct temp1 t1 = 3;
uint8_t *ptr = (uint8_t *)&t1;
std::cout << sizeof(struct temp1) << std::endl; // size of 1 byte
std::cout << std::bitset<8>( *ptr ) << std::endl; // 0000-0001 position of our bitfield
return 0;}
So in this case sizeof(struct temp1) returns 1 byte, with our bit field placed in the rightmost (least significant) position. And here is where the documentation starts to go missing.
#include <iostream>
#include <bitset>
struct temp2{
uint8_t a:1;
uint8_t b:1;
uint8_t c:1;
temp2(int VAL){ // just so i can assign it
this->a = (VAL & 0x1 );
this->b = 0;
this->c = (VAL >> 2 ) & 0x1;
}
};
int main(){
struct temp2 t1 = 0xf;
uint8_t *ptr = (uint8_t *)&t1;
std::cout << sizeof(struct temp2) << std::endl; // size of 1
std::cout << std::bitset<8>( *ptr ) << std::endl; // 0000-0101
return 0;}
In this case a constructor is a must-have, since the compiler doesn't know how you want the data distributed across the fields. Intuitively, because the bits are lined up and share the memory, assigning a whole value should just fill them in order; but the compiler will not do those bitwise operations for us. The bits are indeed laid out in order, yet each one is treated as an independent variable, and what you place in it is up to you.
If we exceed the size of the underlying allocation unit (a byte here), the implementation's layout rules start to matter.
#include <iostream>
#include <bitset>
struct temp3{
bool b0:1;
bool b1:1;
bool b2:1;
bool b3:1;
bool b4:1;
bool b5:1;
bool b6:1;
bool b7:1;
temp3( int a ){
this->b0 = ( a & 0x1 );
this->b1 = ( a & 0x2 );
this->b2 = ( a & 0x4 );
this->b3 = ( a & 0x8 );
this->b4 = ( a & 0x10 );
this->b5 = ( a & 0x20 );
this->b6 = ( a & 0x40 );
this->b7 = ( a & 0x80 );
}
};
int main(){
struct temp3 t1 = 0xc3;
uint8_t *ptr = (uint8_t *)&t1;
std::cout << sizeof(struct temp3) << std::endl; // still size of 1
std::cout << std::bitset<8>( *ptr ) << std::endl; // 1100-0011
return 0;}
And when we exceed the byte size:
#include <iostream>
#include <bitset>
struct temp4{
bool b0:1;
bool b1:1;
bool b2:1;
bool b3:1;
bool b4:1;
bool b5:1;
bool b6:1;
bool b7:1;
bool b8:1;
temp4( int a ){
this->b0 = ( a & 0x1 );
this->b1 = ( a & 0x2 );
this->b2 = ( a & 0x4 );
this->b3 = ( a & 0x8 );
this->b4 = ( a & 0x10 );
this->b5 = ( a & 0x20 );
this->b6 = ( a & 0x40 );
this->b7 = ( a & 0x80 );
this->b8 = ( a & 0x100 );
}
};
int main(){
struct temp4 t1 = 0x1c3;
uint16_t *ptr = (uint16_t *)&t1;
std::cout << sizeof(struct temp4) << std::endl; // size of 2
std::cout << std::bitset<16>( *ptr ) << std::endl; // 0000-0000 1100-0011
std::cout << t1.b8 << std::endl; // still returns 1
std::cout << "\n\n";
union t_as{
uint16_t space;
temp4 data;
uint8_t bytes[2];
};
union t_as t2 = {0x1c3};
//11000011-00000001
std::cout << std::bitset<8>( t2.bytes[0] ) << "-" << std::bitset<8>( t2.bytes[1] ) << std::endl;
return 0;}
What happened here? Since we added another bool bit field, our struct grew by 1 byte (bool has a 1-byte allocation unit), and our 16-bit pointer doesn't show the last bit b8 where we might expect it - but the union does. The compiler placed the ninth bit in a second byte behind the original one, and the platform's byte order then decides how the 16-bit view reads it; as you can see in the union, the byte is still there, just in a different position.
So when exceeding the allocation unit, the compiler's layout and the platform's byte order determine what you see.
CONCLUSION and ANSWER
struct half_opacity{
uint8_t alpha:4;
uint8_t red;
uint8_t green;
uint8_t blue;
half_opacity(int a){
this->alpha = ( a >> 24 )&0xf;
this->red = ( a >> 16 )&0xff;
this->green = ( a >> 8 )&0xff;
this->blue = ( a & 0xff );
}
operator uint32_t(){
return ( this->alpha << 24 )
| ( this->red << 16 )
| ( this->green << 8 )
| this->blue;
}
};
{
struct half_opacity c_space = 0xff00AABB;
std::cout << "size of : " << sizeof(struct half_opacity) << std::endl; //size of : 4
std::cout << std::hex << (uint32_t)c_space << std::endl; // 0x0f00AABB
}
So unless you plan to confine a channel to a smaller bit width, I would strongly suggest the union approach, since there isn't much benefit in splitting a 32-bit integer into individual bytes with bit fields. The main point about bit fields is that you still need to split the value and build it back up, just as with any other integer field; explicit bit shifts also sidestep the whole byte-order question.
The truncation warning you got was because your struct has multiple members, aggregate initialization assigns the value to the first one, and 0xFF000000 does not fit in an 8-bit field, so the compiler warned you that data would be lost.

Why does std::bitset only support integral data types? Why is float not supported?

On trying to generate the bit pattern of a float as follows:
std::cout << std::bitset<32>(32.5) << std::endl;
the compiler generates this warning:
warning: implicit conversion from 'double' to 'unsigned long long' changes value
from 32.5 to 32 [-Wliteral-conversion]
std::cout << std::bitset<32>(32.5) << std::endl;
Output on ignoring warning :) :
00000000000000000000000000100000
Why can't bitset detect floats and correctly output the bit sequence, when casting to char* and walking the memory does show the correct sequence?
This works, but is machine dependent on byte ordering and mostly unreadable:
template <typename T>
void printMemory(const T& data) {
const char* begin = reinterpret_cast<const char*>(&data);
const char* end = begin + sizeof(data);
while(begin != end)
std::cout << std::bitset<CHAR_BIT>(*begin++) << " ";
std::cout << std::endl;
}
Output:
00000000 00000000 00000010 01000010
Is there a reason not to support floats? Is there an alternative for floats?
What would you expect to appear in your bitset if you supplied a float? Presumably some sort of representation of an IEEE 754 binary32 floating point number in big-endian format? What about platforms that don't represent their floats in a way that's even remotely similar to that? Should the implementation bend over backwards to (probably lossily) convert the float supplied to what you want?
The reason it doesn't is that there is no standard defined format for floats. They don't even have to be 32 bits. They just usually are on most platforms.
C++ and C will run on very tiny and/or odd platforms. The standard can't count on what's 'usually the case'. There were/are C/C++ compilers for 8/16-bit 6502 systems whose sorry excuse for a native floating point format was (I think) a 6-byte entity that used packed BCD encoding.
This is the same reason that signed integers are also unsupported. Two's complement is not universal, just almost universal. :-)
With all the usual warnings about floating point formats not being standardised, endianness, etc etc
Here is code that will probably work, at least on x86 hardware.
#include <bitset>
#include <iostream>
#include <type_traits>
#include <cstring>
inline std::uint32_t float_to_bits(float in) // not constexpr: std::memcpy cannot appear in a constant expression
{
std::uint32_t result = 0;
static_assert(sizeof(float) == sizeof(result), "float is not 32 bits");
constexpr auto size = sizeof(float);
std::uint8_t buffer[size] = {};
// note - memcpy through a byte buffer to satisfy the
// strict aliasing rule.
// note that this has no detrimental effect on performance
// since memcpy is 'magic'
std::memcpy(buffer, std::addressof(in), size);
std::memcpy(std::addressof(result), buffer, size);
return result;
}
inline std::uint64_t float_to_bits(double in)
{
std::uint64_t result = 0;
static_assert(sizeof(double) == sizeof(result), "double is not 64 bits");
constexpr auto size = sizeof(double);
std::uint8_t buffer[size] = {};
std::memcpy(buffer, std::addressof(in), size);
std::memcpy(std::addressof(result), buffer, size);
return result;
}
int main()
{
std::cout << std::bitset<32>(float_to_bits(float(32.5))) << std::endl;
std::cout << std::bitset<64>(float_to_bits(32.5)) << std::endl;
}
example output:
01000010000000100000000000000000
0100000001000000010000000000000000000000000000000000000000000000
#include <iostream>
#include <bitset>
#include <climits>
#include <iomanip>
using namespace std;
template<class T>
auto toBitset(T x) -> bitset<sizeof(T) * CHAR_BIT>
{
// note: this reinterpret_cast is formally undefined behaviour (strict aliasing) and always
// reads sizeof(unsigned long long) bytes, regardless of sizeof(T)
return bitset<sizeof(T) * CHAR_BIT>{ *reinterpret_cast<unsigned long long int *>(&x) };
}
int main()
{
double x;
while (cin >> x) {
cout << setw(14) << x << " " << toBitset(x) << endl;
}
return 0;
}
https://wandbox.org/permlink/tCz5WwHqu2X4CV1E
Sadly this fails if the argument type is bigger than unsigned long long; for example, it will fail for long double. This is a limitation of the bitset constructor.

C/C++ efficient bit array

Can you recommend efficient/clean way to manipulate arbitrary length bit array?
Right now I am using regular int/char bitmask, but those are not very clean when array length is greater than datatype length.
std vector<bool> is not available for me.
Since you mention C as well as C++, I'll assume that a C++-oriented solution like boost::dynamic_bitset might not be applicable, and talk about a low-level C implementation instead. Note that if something like boost::dynamic_bitset works for you, or there's a pre-existing C library you can find, then using them can be better than rolling your own.
Warning: None of the following code has been tested or even compiled, but it should be very close to what you'd need.
To start, assume you have a fixed bitset size N. Then something like the following works:
typedef uint32_t word_t;
enum { WORD_SIZE = sizeof(word_t) * 8 };
word_t data[N / WORD_SIZE + 1];
inline int bindex(int b) { return b / WORD_SIZE; }
inline int boffset(int b) { return b % WORD_SIZE; }
void set_bit(int b) {
data[bindex(b)] |= 1 << (boffset(b));
}
void clear_bit(int b) {
data[bindex(b)] &= ~(1 << (boffset(b)));
}
int get_bit(int b) {
return data[bindex(b)] & (1 << boffset(b));
}
void clear_all() { /* set all elements of data to zero */ }
void set_all() { /* set all elements of data to one */ }
As written, this is a bit crude since it implements only a single global bitset with a fixed size. To address these problems, you want to start with a data struture something like the following:
struct bitset { word_t *words; int nwords; };
and then write functions to create and destroy these bitsets.
struct bitset *bitset_alloc(int nbits) {
struct bitset *bitset = malloc(sizeof(*bitset));
bitset->nwords = (nbits / WORD_SIZE + 1);
bitset->words = malloc(sizeof(*bitset->words) * bitset->nwords);
bitset_clear(bitset);
return bitset;
}
void bitset_free(struct bitset *bitset) {
free(bitset->words);
free(bitset);
}
Now, it's relatively straightforward to modify the previous functions to take a struct bitset * parameter. There's still no way to re-size a bitset during its lifetime, nor is there any bounds checking, but neither would be hard to add at this point.
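As a sketch of that last step (the function names here are illustrative, not from any library), the accessors reworked to take the bitset explicitly might look like this, still without bounds checking:
void bitset_set_bit(struct bitset *bs, int b) {
    bs->words[bindex(b)] |= (word_t)1 << boffset(b);
}

void bitset_clear_bit(struct bitset *bs, int b) {
    bs->words[bindex(b)] &= ~((word_t)1 << boffset(b));
}

int bitset_get_bit(const struct bitset *bs, int b) {
    return (bs->words[bindex(b)] >> boffset(b)) & 1;
}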
boost::dynamic_bitset if the length is only known at run time.
std::bitset if the length is known at compile time (although arbitrary).
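A minimal sketch of both, assuming Boost is available for the run-time-sized case:
#include <bitset>
#include <cstddef>
#include <boost/dynamic_bitset.hpp>

int main() {
    std::bitset<128> fixed;              // width fixed at compile time
    fixed.set(100);

    std::size_t n = 1000;                // width only known at run time
    boost::dynamic_bitset<> dynamic(n);  // all bits start cleared
    dynamic.set(999);
    dynamic.resize(2 * n);               // dynamic_bitset can also grow later

    return (fixed.test(100) && dynamic.test(999)) ? 0 : 1;
}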
I've written a working implementation based off Dale Hagglund's response to provide a bit array in C (BSD license).
https://github.com/noporpoise/BitArray/
Please let me know what you think / give suggestions. I hope people looking for a response to this question find it useful.
This posting is rather old, but there is an efficient bit array suite in C in my ALFLB library.
For many microcontrollers without a hardware-division opcode, this library is EFFICIENT because it doesn't use division: instead, masking and bit-shifting are used. (Yes, I know some compilers will convert division by 8 to a shift, but this varies from compiler to compiler.)
It has been tested on arrays up to 2^32-2 bits (about 4 billion bits stored in 536 MBytes), although the last 2 bits should be accessible if not used in a for-loop in your application.
See below for an extract from the documentation. The documentation is at http://alfredo4570.net/src/alflb_doco/alflb.pdf, and the library is at http://alfredo4570.net/src/alflb.zip
Enjoy,
Alf
//------------------------------------------------------------------
BM_DECLARE( arrayName, bitmax);
Macro to instantiate an array to hold bitmax bits.
//------------------------------------------------------------------
UCHAR *BM_ALLOC( BM_SIZE_T bitmax);
mallocs an array (of unsigned char) to hold bitmax bits.
Returns: NULL if memory could not be allocated.
//------------------------------------------------------------------
void BM_SET( UCHAR *bit_array, BM_SIZE_T bit_index);
Sets a bit to 1.
//------------------------------------------------------------------
void BM_CLR( UCHAR *bit_array, BM_SIZE_T bit_index);
Clears a bit to 0.
//------------------------------------------------------------------
int BM_TEST( UCHAR *bit_array, BM_SIZE_T bit_index);
Returns: TRUE (1) or FALSE (0) depending on a bit.
//------------------------------------------------------------------
int BM_ANY( UCHAR *bit_array, int value, BM_SIZE_T bitmax);
Returns: TRUE (1) if array contains the requested value (i.e. 0 or 1).
//------------------------------------------------------------------
UCHAR *BM_ALL( UCHAR *bit_array, int value, BM_SIZE_T bitmax);
Sets or clears all elements of a bit array to your value. Typically used after a BM_ALLOC.
Returns: Copy of address of bit array
//------------------------------------------------------------------
void BM_ASSIGN( UCHAR *bit_array, int value, BM_SIZE_T bit_index);
Sets or clears one element of your bit array to your value.
//------------------------------------------------------------------
BM_MAX_BYTES( int bit_max);
Utility macro to calculate the number of bytes to store bitmax bits.
Returns: A number specifying the number of bytes required to hold bitmax bits.
//------------------------------------------------------------------
You can use std::bitset
#include <bitset>
#include <iostream>
using namespace std;

int main() {
const bitset<12> mask(2730ul);
cout << "mask = " << mask << endl;
bitset<12> x;
cout << "Enter a 12-bit bitset in binary: " << flush;
if (cin >> x) {
cout << "x = " << x << endl;
cout << "As ulong: " << x.to_ulong() << endl;
cout << "And with mask: " << (x & mask) << endl;
cout << "Or with mask: " << (x | mask) << endl;
}
}
I know it's an old post but I came here to find a simple C bitset implementation and none of the answers quite matched what I was looking for, so I implemented my own based on Dale Hagglund's answer. Here it is :)
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
typedef uint32_t word_t;
enum { BITS_PER_WORD = 32 };
struct bitv { word_t *words; int nwords; int nbits; };
struct bitv* bitv_alloc(int bits) {
struct bitv *b = malloc(sizeof(struct bitv));
if (b == NULL) {
fprintf(stderr, "Failed to alloc bitv\n");
exit(1);
}
b->nwords = (bits >> 5) + 1;
b->nbits = bits;
b->words = malloc(sizeof(*b->words) * b->nwords);
if (b->words == NULL) {
fprintf(stderr, "Failed to alloc bitv->words\n");
exit(1);
}
memset(b->words, 0, sizeof(*b->words) * b->nwords);
return b;
}
static inline void check_bounds(struct bitv *b, int bit) {
if (b->nbits < bit) {
fprintf(stderr, "Attempted to access a bit out of range\n");
exit(1);
}
}
void bitv_set(struct bitv *b, int bit) {
check_bounds(b, bit);
b->words[bit >> 5] |= 1 << (bit % BITS_PER_WORD);
}
void bitv_clear(struct bitv *b, int bit) {
check_bounds(b, bit);
b->words[bit >> 5] &= ~(1 << (bit % BITS_PER_WORD));
}
int bitv_test(struct bitv *b, int bit) {
check_bounds(b, bit);
return b->words[bit >> 5] & (1 << (bit % BITS_PER_WORD));
}
void bitv_free(struct bitv *b) {
if (b != NULL) {
if (b->words != NULL) free(b->words);
free(b);
}
}
void bitv_dump(struct bitv *b) {
if (b == NULL) return;
for(int i = 0; i < b->nwords; i++) {
word_t w = b->words[i];
for (int j = 0; j < BITS_PER_WORD; j++) {
printf("%d", w & 1);
w >>= 1;
}
printf(" ");
}
printf("\n");
}
void test(struct bitv *b, int bit) {
if (bitv_test(b, bit)) printf("Bit %d is set!\n", bit);
else printf("Bit %d is not set!\n", bit);
}
int main(int argc, char *argv[]) {
struct bitv *b = bitv_alloc(32);
bitv_set(b, 1);
bitv_set(b, 3);
bitv_set(b, 5);
bitv_set(b, 7);
bitv_set(b, 9);
bitv_set(b, 32);
bitv_dump(b);
bitv_free(b);
return 0;
}
I use this one:
//#include <bitset>
#include <iostream>
//source http://stackoverflow.com/questions/47981/how-do-you-set-clear-and-toggle-a-single-bit-in-c
#define BIT_SET(a,b) ((a) |= (1<<(b)))
#define BIT_CLEAR(a,b) ((a) &= ~(1<<(b)))
#define BIT_FLIP(a,b) ((a) ^= (1<<(b)))
#define BIT_CHECK(a,b) ((a) & (1<<(b)))
/* x=target variable, y=mask */
#define BITMASK_SET(x,y) ((x) |= (y))
#define BITMASK_CLEAR(x,y) ((x) &= (~(y)))
#define BITMASK_FLIP(x,y) ((x) ^= (y))
#define BITMASK_CHECK(x,y) ((x) & (y))
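A quick sketch of how those macros read in use (only the ones exercised here are repeated):
#include <iostream>

#define BIT_SET(a,b)   ((a) |= (1<<(b)))
#define BIT_CLEAR(a,b) ((a) &= ~(1<<(b)))
#define BIT_CHECK(a,b) ((a) & (1<<(b)))

int main() {
    unsigned flags = 0;
    BIT_SET(flags, 3);                                 // flags == 0x08
    std::cout << std::hex << flags << '\n';
    std::cout << (BIT_CHECK(flags, 3) ? "bit 3 set" : "bit 3 clear") << '\n';
    BIT_CLEAR(flags, 3);                               // flags == 0x00
    std::cout << std::hex << flags << '\n';
    return 0;
}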
I have recently released BITSCAN, a C++ bit string library which is specifically oriented towards fast bit scanning operations. BITSCAN is available here. It is in alpha but still pretty well tested since I have used it in recent years for research in combinatorial optimization (e.g. in BBMC, a state of the art exact maximum clique algorithm). A comparison with other well known C++ implementations (STL or BOOST) may be found here.
I hope you find it useful. Any feedback is welcome.
In microcontroller development, we sometimes need a 2-dimensional array (matrix) whose elements hold only the values 0 or 1. Using a whole byte per element wastes a great deal of memory (and microcontroller memory is very limited), so the proposed solution is a 1-bit matrix (each element is a single bit).
http://htvdanh.blogspot.com/2016/09/one-bit-matrix-for-cc-programming.html
I recently implemented a small header-only library called BitContainer just for this purpose.
It focuses on expressiveness and compiletime abilities and can be found here:
https://github.com/EddyXorb/BitContainer
It is for sure not the classical way to look at bitarrays but can come in handy for strong-typing purposes and memory efficient representation of named properties.
Example:
constexpr Props props(Prop::isHigh(), Prop::isLow()); // initialize a BitContainer of type Props with the strong type Prop
constexpr bool result1 = props.contains(Prop::isTiny()); // false
constexpr bool result2 = props.contains(Prop::isLow());  // true

C/C++ check if one bit is set in, i.e. int variable

int temp = 0x5E; // in binary 0b1011110.
Is there a way to check whether bit 3 in temp is 1 or 0 without bit shifting and masking?
I just want to know whether there is some built-in function for this, or whether I am forced to write one myself.
In C, if you want to hide bit manipulation, you can write a macro:
#define CHECK_BIT(var,pos) ((var) & (1<<(pos)))
and use it this way to check the nth bit from the right end:
CHECK_BIT(temp, n - 1)
In C++, you can use std::bitset.
Check if bit N (starting from 0) is set:
temp & (1 << N)
There is no builtin function for this.
I would just use a std::bitset if it's C++. Simple. Straight-forward. No chance for stupid errors.
typedef std::bitset<sizeof(int) * CHAR_BIT> IntBits; // CHAR_BIT from <climits>; sizeof alone would give a 4-bit set
bool is_set = IntBits(value).test(position);
or how about this silliness
template<unsigned int Exp>
struct pow_2 {
static const unsigned int value = 2 * pow_2<Exp-1>::value;
};
template<>
struct pow_2<0> {
static const unsigned int value = 1;
};
template<unsigned int Pos>
bool is_bit_set(unsigned int value)
{
return (value & pow_2<Pos>::value) != 0;
}
bool result = is_bit_set<2>(value);
What the selected answer is doing is actually misleading. The macro below yields either 0 or the mask value (1 shifted into place), not a plain 0 or 1. This is not what the poster was asking for.
#define CHECK_BIT(var,pos) ((var) & (1<<(pos)))
Here is what the poster was originally looking for: a macro that yields 1 or 0 depending on whether the bit is set.
#define CHECK_BIT(var,pos) (((var)>>(pos)) & 1)
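To make the difference concrete (a small sketch; the macros are renamed here only so both can coexist): with var = 0x10 and pos = 4, the first form yields 16 while the second yields 1.
#include <cstdio>

#define CHECK_BIT_MASKED(var,pos) ((var) & (1u<<(pos)))   // yields 0 or (1<<pos)
#define CHECK_BIT_01(var,pos)     (((var)>>(pos)) & 1u)   // yields 0 or 1

int main() {
    unsigned var = 0x10;
    std::printf("%u\n", CHECK_BIT_MASKED(var, 4)); // prints 16
    std::printf("%u\n", CHECK_BIT_01(var, 4));     // prints 1
    // Both are non-zero, so either works inside an if(); they differ only when
    // the result is stored or compared against 1 directly.
    return 0;
}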
Yeah, I know I don't "have" to do it this way. But I usually write:
/* Return type (8/16/32/64 int size) is specified by argument size. */
template<class TYPE> inline TYPE BIT(const TYPE & x)
{ return TYPE(1) << x; }
template<class TYPE> inline bool IsBitSet(const TYPE & x, const TYPE & y)
{ return 0 != (x & y); }
E.g.:
IsBitSet( foo, BIT(3) | BIT(6) ); // Checks if Bit 3 OR 6 is set.
Amongst other things, this approach:
Accommodates 8/16/32/64 bit integers.
Detects IsBitSet(int32,int64) calls without my knowledge & consent.
Inlined Template, so no function calling overhead.
const& references, so nothing needs to be duplicated/copied. And we are guaranteed that the compiler will pick up any typo's that attempt to change the arguments.
0!= makes the code more clear & obvious. The primary point to writing code is always to communicate clearly and efficiently with other programmers, including those of lesser skill.
While not applicable to this particular case... In general, templated functions avoid the issue of evaluating arguments multiple times. A known problem with some #define macros. E.g.: #define ABS(X) (((X)<0) ? - (X) : (X)) ABS(i++);
According to this description of bit-fields, there is a method for defining and accessing fields directly. The example in this entry goes:
struct preferences {
unsigned int likes_ice_cream : 1;
unsigned int plays_golf : 1;
unsigned int watches_tv : 1;
unsigned int reads_books : 1;
};
struct preferences fred;
fred.likes_ice_cream = 1;
fred.plays_golf = 1;
fred.watches_tv = 1;
fred.reads_books = 0;
if (fred.likes_ice_cream == 1)
/* ... */
Also, there is a warning there:
However, bit members in structs have practical drawbacks. First, the ordering of bits in memory is architecture dependent and memory padding rules varies from compiler to compiler. In addition, many popular compilers generate inefficient code for reading and writing bit members, and there are potentially severe thread safety issues relating to bit fields (especially on multiprocessor systems) due to the fact that most machines cannot manipulate arbitrary sets of bits in memory, but must instead load and store whole words.
You can use a Bitset - http://www.cppreference.com/wiki/stl/bitset/start.
Use std::bitset
#include <bitset>
#include <iostream>
#include <climits>
int main()
{
int temp = 0x5E;
std::bitset<sizeof(int)*CHAR_BIT> bits(temp);
// 0 -> bit 1
// 2 -> bit 3
std::cout << bits[2] << std::endl;
}
I was trying to read a 32-bit integer that defined the flags of an object in PDFs, and this wasn't working for me.
What fixed it was changing the define:
#define CHECK_BIT(var,pos) ((var & (1 << pos)) == (1 << pos))
The & operator returns an integer whose set bits are those the two operands have in common, and it wasn't converting properly into a boolean in my case; this did the trick.
I use this:
#define CHECK_BIT(var,pos) ( (((var) & (pos)) > 0 ) ? (1) : (0) )
where "pos" is defined as 2^n (i.g. 1,2,4,8,16,32 ...)
Returns:
1 if true
0 if false
There is, namely the _bittest intrinsic instruction.
#define CHECK_BIT(var,pos) (((var)>>(pos)) & 1)
pos - bit position, starting from 0.
returns 0 or 1.
For the low-level x86 specific solution use the x86 TEST opcode.
Your compiler should turn _bittest into this though...
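A minimal MSVC-specific sketch using the _bittest intrinsic from <intrin.h> (this will not build with GCC/Clang, which typically emit the same test/bt instructions from a plain shift-and-mask expression anyway):
#include <intrin.h>
#include <cstdio>

int main() {
    long temp = 0x5E;                          // 0b1011110
    unsigned char bit3 = _bittest(&temp, 3);   // returns the bit at position 3 (0 or 1)
    std::printf("bit 3 is %s\n", bit3 ? "set" : "clear");
    return 0;
}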
The preceding answers show how to handle individual bit checks, but more often than not it is really about flags encoded in an integer, which none of the preceding cases defines cleanly.
In a typical scenario, the flags are defined as integers themselves, each with a single bit set at the position it refers to. In the example hereafter, you can check whether the integer has ANY flag from a list of flags (multiple error flags combined) or whether EVERY flag is present in the integer (multiple success flags combined).
Following an example of how to handle flags in an integer.
Live example available here:
https://rextester.com/XIKE82408
//g++ 7.4.0
#include <iostream>
#include <stdint.h>
inline bool any_flag_present(unsigned int value, unsigned int flags) {
return bool(value & flags);
}
inline bool all_flags_present(unsigned int value, unsigned int flags) {
return (value & flags) == flags;
}
enum: unsigned int {
ERROR_1 = 1U,
ERROR_2 = 2U, // or 0b10
ERROR_3 = 4U, // or 0b100
SUCCESS_1 = 8U,
SUCCESS_2 = 16U,
OTHER_FLAG = 32U,
};
int main(void)
{
unsigned int value = 0b101011; // ERROR_1, ERROR_2, SUCCESS_1, OTHER_FLAG
unsigned int all_error_flags = ERROR_1 | ERROR_2 | ERROR_3;
unsigned int all_success_flags = SUCCESS_1 | SUCCESS_2;
std::cout << "Was there at least one error: " << any_flag_present(value, all_error_flags) << std::endl;
std::cout << "Are all success flags enabled: " << all_flags_present(value, all_success_flags) << std::endl;
std::cout << "Is the other flag enabled with eror 1: " << all_flags_present(value, ERROR_1 | OTHER_FLAG) << std::endl;
return 0;
}
Why all these bit shifting operations and need for library functions? If you have the value the OP posted: 1011110 and you want to know if the bit in the 3rd position from the right is set, just do:
int temp = 0b1011110;
if( temp & 4 ) /* or (temp & 0b0100) if that's how you roll */
DoSomething();
Or something a bit prettier that may be more easily interpreted by future readers of the code:
#include <stdbool.h>
int temp = 0b1011110;
bool bThirdBitIsSet = (temp & 4) ? true : false;
if( bThirdBitIsSet )
DoSomething();
Or, with no #include needed:
int temp = 0b1011110;
_Bool bThirdBitIsSet = (temp & 4) ? 1 : 0;
if( bThirdBitIsSet )
DoSomething();
You could "simulate" shifting and masking: if((0x5e/(2*2*2))%2) ...
One approach is to check with the following condition:
if ( (mask >> bit ) & 1)
A demonstration program:
#include <stdio.h>
unsigned int bitCheck(unsigned int mask, int pin);
int main(void){
unsigned int mask = 6; // 6 = 0110
int pin0 = 0;
int pin1 = 1;
int pin2 = 2;
int pin3 = 3;
unsigned int bit0= bitCheck( mask, pin0);
unsigned int bit1= bitCheck( mask, pin1);
unsigned int bit2= bitCheck( mask, pin2);
unsigned int bit3= bitCheck( mask, pin3);
printf("Mask = %d ==>> 0110\n", mask);
if ( bit0 == 1 ){
printf("Pin %d is Set\n", pin0);
}else{
printf("Pin %d is not Set\n", pin0);
}
if ( bit1 == 1 ){
printf("Pin %d is Set\n", pin1);
}else{
printf("Pin %d is not Set\n", pin1);
}
if ( bit2 == 1 ){
printf("Pin %d is Set\n", pin2);
}else{
printf("Pin %d is not Set\n", pin2);
}
if ( bit3 == 1 ){
printf("Pin %d is Set\n", pin3);
}else{
printf("Pin %d is not Set\n", pin3);
}
}
unsigned int bitCheck(unsigned int mask, int bit){
if ( (mask >> bit ) & 1){
return 1;
}else{
return 0;
}
}
Output:
Mask = 6 ==>> 0110
Pin 0 is not Set
Pin 1 is Set
Pin 2 is Set
Pin 3 is not Set
If you just want a hard-coded way:
#define IS_BIT3_SET(var) ( ((var) & 0x04) == 0x04 )
Note that this is specific to bit 3 (mask 0x04), assumes the usual bit numbering 7654 3210, and assumes var is 8-bit.
#include "stdafx.h"
#define IS_BIT3_SET(var) ( ((var) & 0x04) == 0x04 )
int _tmain(int argc, _TCHAR* argv[])
{
int temp =0x5E;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0x00;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0x04;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0xfb;
printf(" %d \n", IS_BIT3_SET(temp));
scanf("waitng %d",&temp);
return 0;
}
Results in:
1
0
1
0
While it is quite late to answer now, there is a simple way to find out whether the Nth bit is set, using only the modulus operation and powers of two.
Say we want to know whether 'temp' has the Nth bit set. The following boolean expression is true if the bit is set, false otherwise:
( temp mod 2^(N+1) ) >= 2^N
Consider the following example:
int temp = 0x5E; // in binary 0b1011110 // BIT 0 is LSB
If I want to know whether the 3rd bit is set, I get
(94 mod 16) = 14, and 14 >= 2^3 = 8
so the expression is true, indicating the 3rd bit is set.
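Expressed in code (a small sketch; N is 0-based as in the example above, and the shift here is used only to build the power of two, not to test the bit):
#include <cstdio>

// true when bit N of value is set: value % 2^(N+1) keeps bits 0..N,
// and the remainder is >= 2^N exactly when bit N survived.
bool bit_set_by_modulus(unsigned value, unsigned n) {
    unsigned pow_n = 1u << n;                 // 2^N
    return (value % (pow_n * 2)) >= pow_n;
}

int main() {
    std::printf("%d\n", bit_set_by_modulus(0x5E, 3));  // 94 % 16 == 14 >= 8 -> prints 1
    return 0;
}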
Why not use something as simple as this?
uint8_t status = 255;
cout << "binary: ";
for (int i=((sizeof(status)*8)-1); i>-1; i--)
{
if ((status & (1 << i)))
{
cout << "1";
}
else
{
cout << "0";
}
}
OUTPUT: binary: 11111111
I do this:
LATGbits.LATG0 = ((m & 0x8) > 0); // check whether bit 3 (mask 0x8) of m is set
The fastest way seems to be a lookup table of masks.

Binary literals?

In code, I sometimes see people specify constants in hex format like this:
const int has_nukes = 0x0001;
const int has_bio_weapons = 0x0002;
const int has_chem_weapons = 0x0004;
// ...
int arsenal = has_nukes | has_bio_weapons | has_chem_weapons; // all of them
if(arsenal & has_bio_weapons){
std::cout << "BIO!!";
}
But it doesn't make sense to me to use the hex format here. Is there a way to do it directly in binary? Something like this:
const int has_nukes = 0b00000000000000000000000000000001;
const int has_bio_weapons = 0b00000000000000000000000000000010;
const int has_chem_weapons = 0b00000000000000000000000000000100;
// ...
I know the C/C++ compilers won't compile this, but there must be a workaround? Is it possible in other languages like Java?
In C++14 you will be able to use binary literals with the following syntax:
0b010101010 /* more zeros and ones */
This feature is already implemented in the latest clang and gcc. You can try it if you run those compilers with -std=c++1y option.
I'd use a bit shift operator:
const int has_nukes = 1<<0;
const int has_bio_weapons = 1<<1;
const int has_chem_weapons = 1<<2;
// ...
int dangerous_mask = has_nukes | has_bio_weapons | has_chem_weapons;
bool is_dangerous = (country->flags & dangerous_mask) == dangerous_mask;
It is even better than flood of 0's.
By the way, the next C++ version will support user defined literals. They are already included in the working draft. This allows that sort of stuff (let's hope I don't have too many errors in it):
template<char... digits>
constexpr int operator "" _b() {
return conv2bin<digits...>::value;
}
int main() {
int const v = 110110110_b;
}
conv2bin would be a template like this:
template<char... digits>
struct conv2bin;
template<char high, char... digits>
struct conv2bin<high, digits...> {
static_assert(high == '0' || high == '1', "no bin num!");
static int const value = (high - '0') * (1 << sizeof...(digits)) +
conv2bin<digits...>::value;
};
template<char high>
struct conv2bin<high> {
static_assert(high == '0' || high == '1', "no bin num!");
static int const value = (high - '0');
};
Well, what we get are binary literals that evaluate fully at compile time already, because of the "constexpr" above. The above uses a hard-coded int return type. I think one could even make it depend on the length of the binary string. It's using the following features, for anyone interested:
Generalized Constant Expressions.
Variadic Templates. A brief introduction can be found here
Static Assertions (static_assert)
User defined Literals
Actually, current GCC trunk already implements variadic templates and static assertions. Let's hope it will support the other two soon. I think C++1x will rock the house.
The C++ Standard Library is your friend:
#include <bitset>
const std::bitset <32> has_nukes( "00000000000000000000000000000001" );
GCC supports binary constants as an extension since 4.3. See the announcement (look at the section "New Languages and Language specific improvements").
You can use << if you like.
int hasNukes = 1;
int hasBioWeapons = 1 << 1;
int hasChemWeapons = 1 << 2;
This discussion may be interesting... Might have been, as the link is dead unfortunately. It described a template based approach similar to other answers here.
And also there is a thing called BOOST_BINARY.
The term you want is binary literals
Ruby has them with the syntax you give.
One alternative is to define helper macros to convert for you. I found the following code at http://bytes.com/groups/c/219656-literal-binary
/* Binary constant generator macro
* By Tom Torfs - donated to the public domain
*/
/* All macro's evaluate to compile-time constants */
/* *** helper macros *** */
/* turn a numeric literal into a hex constant
* (avoids problems with leading zeroes)
* 8-bit constants max value 0x11111111, always fits in unsigned long
*/
#define HEX_(n) 0x##n##LU
/* 8-bit conversion function */
#define B8_(x) ((x & 0x0000000FLU) ? 1:0) \
| ((x & 0x000000F0LU) ? 2:0) \
| ((x & 0x00000F00LU) ? 4:0) \
| ((x & 0x0000F000LU) ? 8:0) \
| ((x & 0x000F0000LU) ? 16:0) \
| ((x & 0x00F00000LU) ? 32:0) \
| ((x & 0x0F000000LU) ? 64:0) \
| ((x & 0xF0000000LU) ? 128:0)
/* *** user macros *** */
/* for upto 8-bit binary constants */
#define B8(d) ((unsigned char) B8_(HEX_(d)))
/* for upto 16-bit binary constants, MSB first */
#define B16(dmsb, dlsb) (((unsigned short) B8(dmsb) << 8) \
| B8(dlsb))
/* for upto 32-bit binary constants, MSB first */
#define B32(dmsb, db2, db3, dlsb) (((unsigned long) B8(dmsb) << 24) \
| ((unsigned long) B8( db2) << 16) \
| ((unsigned long) B8( db3) << 8) \
| B8(dlsb))
/* Sample usage:
* B8(01010101) = 85
* B16(10101010,01010101) = 43605
* B32(10000000,11111111,10101010,01010101) = 2164238933
*/
The next version of C++, C++0x, will introduce user defined literals. I'm not sure if binary numbers will be part of the standard but at the worst you'll be able to enable it yourself:
int operator "" _B(int i);
assert( 1010_B == 10);
I write binary literals like this:
const int has_nukes = 0x0001;
const int has_bio_weapons = 0x0002;
const int has_chem_weapons = 0x0004;
It's more compact than your suggested notation, and easier to read. For example:
const int upper_bit = 0b0001000000000000000;
versus:
const int upper_bit = 0x04000;
Did you notice that the binary version wasn't an even multiple of 4 bits? Did you think it was 0x10000?
With a little practice hex or octal are easier for a human than binary. And, in my opinion, easier to read that using shift operators. But I'll concede that my years of assembly language work may bias me on that point.
If you want to use bitset, auto, variadic templates, user-defined literals, static_assert, constexpr, and noexcept try this:
template<char... Bits>
struct __checkbits
{
static const bool valid = false;
};
template<char High, char... Bits>
struct __checkbits<High, Bits...>
{
static const bool valid = (High == '0' || High == '1')
&& __checkbits<Bits...>::valid;
};
template<char High>
struct __checkbits<High>
{
static const bool valid = (High == '0' || High == '1');
};
template<char... Bits>
inline constexpr std::bitset<sizeof...(Bits)>
operator"" bits() noexcept
{
static_assert(__checkbits<Bits...>::valid, "invalid digit in binary string");
return std::bitset<sizeof...(Bits)>((char []){Bits..., '\0'});
}
Use it like this:
int
main()
{
auto bits = 0101010101010101010101010101010101010101010101010101010101010101bits;
std::cout << bits << std::endl;
std::cout << "size = " << bits.size() << std::endl;
std::cout << "count = " << bits.count() << std::endl;
std::cout << "value = " << bits.to_ullong() << std::endl;
// This triggers the static_assert at compile-time.
auto badbits = 2101010101010101010101010101010101010101010101010101010101010101bits;
// This throws at run-time.
std::bitset<64> badbits2("2101010101010101010101010101010101010101010101010101010101010101bits");
}
Thanks to #johannes-schaub-litb
Java did not support binary literals either at the time this was written (they were added in Java 7 with the 0b prefix). However, it has enums which can be used with an EnumSet. An EnumSet represents enum values internally with bit fields, and presents a Set interface for manipulating these flags.
Alternatively, you could use bit offsets (in decimal) when defining your values:
const int HAS_NUKES = 0x1 << 0;
const int HAS_BIO_WEAPONS = 0x1 << 1;
const int HAS_CHEM_WEAPONS = 0x1 << 2;
There's no syntax for literal binary constants in C++ the way there is for hexadecimal and octal. The closest thing for what it looks like you're trying to do would probably be to learn and use bitset.
As an aside:
Especially if you're dealing with a large set, instead of going through the [minor] mental effort of writing a sequence of shift amounts, you can make each constant depend on the previously defined constant:
const int has_nukes = 1;
const int has_bio_weapons = has_nukes << 1;
const int has_chem_weapons = has_bio_weapons << 1;
const int has_nunchuks = has_chem_weapons << 1;
// ...
Looks a bit redundant, but it's less typo-prone. Also, you can simply insert a new constant in the middle without having to touch any other line except the one immediately following it:
const int has_nukes = 1;
const int has_gravity_gun = has_nukes << 1; // added
const int has_bio_weapons = has_gravity_gun << 1; // changed
const int has_chem_weapons = has_bio_weapons << 1; // unaffected from here on
const int has_nunchuks = has_chem_weapons << 1;
// ...
Compare to:
const int has_nukes = 1 << 0;
const int has_bio_weapons = 1 << 1;
const int has_chem_weapons = 1 << 2;
const int has_nunchuks = 1 << 3;
// ...
const int has_scimatar = 1 << 28;
const int has_rapier = 1 << 28; // good luck spotting this typo!
const int has_katana = 1 << 30;
And:
const int has_nukes = 1 << 0;
const int has_gravity_gun = 1 << 1; // added
const int has_bio_weapons = 1 << 2; // changed
const int has_chem_weapons = 1 << 3; // changed
const int has_nunchuks = 1 << 4; // changed
// ... // changed all the way
const int has_scimatar = 1 << 29; // changed *sigh*
const int has_rapier = 1 << 30; // changed *sigh*
const int has_katana = 1 << 31; // changed *sigh*
As an aside to my aside, it's probably equally hard to spot a typo like this:
const int has_nukes = 1;
const int has_gravity_gun = has_nukes << 1;
const int has_bio_weapons = has_gravity_gun << 1;
const int has_chem_weapons = has_gravity_gun << 1; // oops!
const int has_nunchuks = has_chem_weapons << 1;
So, I think the main advantage of this cascading syntax is when dealing with insertions and deletions of constants.
Another method:
template<unsigned int N>
class b
{
public:
static unsigned int const x = N;
typedef b_<0> _0000;
typedef b_<1> _0001;
typedef b_<2> _0010;
typedef b_<3> _0011;
typedef b_<4> _0100;
typedef b_<5> _0101;
typedef b_<6> _0110;
typedef b_<7> _0111;
typedef b_<8> _1000;
typedef b_<9> _1001;
typedef b_<10> _1010;
typedef b_<11> _1011;
typedef b_<12> _1100;
typedef b_<13> _1101;
typedef b_<14> _1110;
typedef b_<15> _1111;
private:
template<unsigned int N2>
struct b_: public b<N << 4 | N2> {};
};
typedef b<0> _0000;
typedef b<1> _0001;
typedef b<2> _0010;
typedef b<3> _0011;
typedef b<4> _0100;
typedef b<5> _0101;
typedef b<6> _0110;
typedef b<7> _0111;
typedef b<8> _1000;
typedef b<9> _1001;
typedef b<10> _1010;
typedef b<11> _1011;
typedef b<12> _1100;
typedef b<13> _1101;
typedef b<14> _1110;
typedef b<15> _1111;
Usage:
std::cout << _1101::_1001::_1101::_1101::x;
Implemented in CityLizard++ (citylizard/binary/b.hpp).
I agree that it's useful to have an option for binary literals, and they are present in many programming languages. In C, I've decided to use a macro like this:
#define bitseq(a00,a01,a02,a03,a04,a05,a06,a07,a08,a09,a10,a11,a12,a13,a14,a15, \
a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31) \
(a31|a30<< 1|a29<< 2|a28<< 3|a27<< 4|a26<< 5|a25<< 6|a24<< 7| \
a23<< 8|a22<< 9|a21<<10|a20<<11|a19<<12|a18<<13|a17<<14|a16<<15| \
a15<<16|a14<<17|a13<<18|a12<<19|a11<<20|a10<<21|a09<<22|a08<<23| \
a07<<24|a06<<25|a05<<26|a04<<27|a03<<28|a02<<29|a01<<30|(unsigned)a00<<31)
The usage is pretty much straightforward =)
One, slightly horrible way you could do it is by generating a .h file with lots of #defines...
#define b00000000 0
#define b00000001 1
#define b00000010 2
#define b00000011 3
#define b00000100 4
etc.
This might make sense for 8-bit numbers, but probably not for 16-bit or larger.
Alternatively, do this (similar to Zach Scrivena's answer):
#define bit(x) (1<<(x))
int HAS_NUKES = bit(HAS_NUKES_OFFSET);
int HAS_BIO_WEAPONS = bit(HAS_BIO_WEAPONS_OFFSET);
Binary literals have been part of the C++ language since C++14. They are literals that start with 0b or 0B.
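So with C++14 or later the constants from the question can be written exactly as wished, for example:
// Requires C++14 (e.g. -std=c++14).
const int has_nukes        = 0b0001;
const int has_bio_weapons  = 0b0010;
const int has_chem_weapons = 0b0100;
// C++14 digit separators help keep longer masks readable:
const unsigned upper_byte  = 0b1111'1111'0000'0000;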
Maybe less relevant to binary literals, but this just looks as if it can be solved better with a bit field.
struct DangerCollection {
uint32_t has_nukes : 1;
uint32_t has_bio_weapons : 1;
uint32_t has_chem_weapons : 1;
// .....
};
DangerCollection arsenal{ // note: designated initializers require C++20
.has_nukes = true,
.has_bio_weapons = true,
.has_chem_weapons = true,
// ...
};
if(arsenal.has_bio_weapons){
std::cout << "BIO!!"
}
You would still be able to fill it with binary data, since its binary footprint is just a uint32. This is often used in combination with a union, for compact binary serialisation:
union DangerCollectionUnion {
DangerCollection collection;
uint8_t data[sizeof(DangerCollection)];
};
DangerCollectionUnion dc;
std::memcpy(dc.data, bitsIGotFromSomewhere, sizeof(DangerCollection));
if (dc.collection.has_bio_weapons) {
// ....
In my experience this is less error-prone, and it is easier to understand what's going on.