Merge bits, then determine how many 0s the result has - C++

I am trying to write a function that takes three bit vectors representing the digits in use in the row, column, and block of a Sudoku puzzle at positions 1-9. A cell can only use digits that are unused, and the function is supposed to return whether the digits in all the vectors force one possibility, or whether there is more than one. I took this to mean that I would have to merge all three vectors and then determine where there were "unset" bits in the resulting pattern.
However, in gdb my function does not seem to return the correct mask, even though it was inspired by this derivation: http://graphics.stanford.edu/~seander/bithacks.html#MaskedMerge
The plan is to merge one set with a second, merge the third set into that result, count the number of 1s in the final merge, and subtract that count to find how many 0s there are.
I wrote the following function:
bool possibilities(unsigned short row, unsigned short col, unsigned short bloc)
{
    unsigned int mask = (1 << col) - 1;
    unsigned int inbw = (row & ~mask) | (col & mask);
    unsigned int mask2 = (1 << bloc) - 1;
    unsigned int result = (inbw & ~mask2) | (bloc & mask2);
    int num_1s = 0;
    while (result != 0) {
        result &= (result - 1);
        num_1s++;
    }
    int zeroes = 32 - num_1s; // 32 being the presumed number of bits in a short.
    if (zeroes == 1) return true;
    return false;
}

According to this document:
http://www.cplusplus.com/doc/tutorial/variables/
a short is not smaller than a char and is at least 16 bits, so your calculation of the zeroes as 32 - num_1s could be wrong.
Instead of doing so, you can take an unsigned short and fill it with 1s, setting 0s in the first 9 bits:
var = ~0x1FF; // 1s everywhere except the first 9 bits
That way the solution does not depend strongly on the size of the variable you use.
A solution to that problem could be this (assuming row, col and bloc as above):
unsigned short possibilities = row | col | bloc;
int num_0s = 0;
while (possibilities != 0) {
    num_0s += (~possibilities) & 1; // the lowest bit is a 0
    possibilities >>= 1;            // note: only counts 0s below the highest set bit
}

If I understood correctly that each of row, col, and bloc (sic) is a bit mask in which individual bits (presumably bits 0-8) represent the presence of digits 1-9, then your masks are wrong (and indeed quite pointless). For example, if col has bit 8 set, then mask = (1 << col) - 1 shifts 1 to the left by at least 256; since it is extremely unlikely that unsigned short is over 256 bits wide, this results in 0 after the shift and then in a mask with all bits set after you subtract the 1. After this, (row & ~mask) | (col & mask) will be just col, since ~mask is 0.
A couple of simple options come to mind:
1) Don't merge at all; simply do the popcount on each of the three variables individually. Some modern processors have an instruction for popcount, so if you manage to use that, e.g., through a compiler's built-in function (e.g., __builtin_popcount), it will even be faster.
2) Mask the bits on each variable individually and shift them to position, e.g.:
const unsigned int mask = 0x1FF;
unsigned int final = (col & mask) | ((row & mask) << 9) | ((bloc & mask) << 18);
Also, don't subtract the number of 1's from 32 but from 27 (= 3×9) - that's the maximum number of 1 bits if each of the three variables can have at most 9 bits set.
Edit: Could be that I've misunderstood what you are trying to do by merging. If you mean a simple union of all 1 bits in the three variables, then it would be just unsigned int final = col | row | bloc with no need to mask. Then you would subtract the popcount (number of 1 bits) from 9.
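For reference, a minimal sketch of that union-plus-popcount version (assuming bits 0-8 mark digits 1-9 as used; __builtin_popcount is a GCC/Clang builtin):
bool possibilities(unsigned short row, unsigned short col, unsigned short bloc)
{
    const unsigned int used = (row | col | bloc) & 0x1FF; // union of used digits
    const int zeroes = 9 - __builtin_popcount(used);      // digits still available
    return zeroes == 1; // exactly one possibility left
}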

C/C++ bit array resolution transform algorithms

Anyone aware of any algorithms to up/down convert bit arrays?
i.e. when the resolution is 1/16:
every 1 bit = 16 bits. (low resolution to high resolution)
1010 -> 1111111111111111000000000000000011111111111111110000000000000000
and reverse, 16 bits = 1 bit (high resolution to low resolution)
1111111111111111000000000000000011111111111111110000000000000000 -> 1010
Right now I am looping bit by bit, which is not efficient. Using a whole 64-bit word would be better, but it runs into issues when the resolution doesn't evenly divide the word (some bits may spill over into the next word).
C++:
std::vector<uint64_t> bitset;
C:
uint64_t *bitset = calloc((total_bits + 63) >> 6, sizeof(uint64_t)); // round up; free() when done
which is accessed using:
const uint64_t idx = bit >> 6;
const uint64_t pos = bit % 64;
const bool value = (bitset[idx] >> pos) & 1U;
and set/clear:
bitset[idx] |= (1ULL << pos);  // 1ULL so the shift is done in 64 bits
bitset[idx] &= ~(1ULL << pos);
and the OR (or AND/XOR/NOT) of two bitsets of the same resolution is done using the full 64-bit word:
bitset[idx] |= source.bitset[idx];
I am dealing with large enough bitsets (2+ billion bits) that I'm looking for any efficiency in the loops. One way I found to optimize the loop is to check each word using __builtin_popcountll, and skip ahead in the loop:
for (uint64_t bit = 0; bit < total_bits; bit++)
{
    const uint64_t idx = bit >> 6;
    const uint64_t pos = bit % 64;
    const uint64_t bits = __builtin_popcountll(bitset[idx]);
    if (!bits)
    {
        bit += 63; // skip the remaining bits of this all-zero word
        continue;
    }
    // process
}
I'm looking for algorithms/techniques more than code examples. But if you have code to share, I won't say no. Any academic research papers would be appreciated too.
Thanks in advance!
Is the resolution always between 1/2 and 1/64? Or even 1/32? If you need very long runs, you might need more loop nesting, which could cause some slowdown.
Are your sequences always very long (millions of bits), or is that a maximum while they are usually shorter? When converting from high to low resolution, can you assume the data is valid or not?
Here are some tricks:
uint64_t one = 1;
uint64_t n_one_bits = (one << n) - 1u; // valid for n = 0 to 63; a shift by 64 is undefined behavior
If your sequences are that long, you might want to check whether n is a power of 2 and provide more optimized code for those cases.
You might find some other useful tricks here:
https://graphics.stanford.edu/~seander/bithacks.html
So if your resolution is 1/16, you don't need to loop over individual bits: you can handle all 16 bits at once, then repeat for the next group, again and again.
If the resolution is not a divisor of 64, you can shift bits as appropriate each time you cross a 64-bit boundary. Say your resolution is 1/5: you could process 60 bits, then shift the 4 remaining bits and combine them with the following 60 bits.
If you can assume the data is valid, then you don't even need to shift the original number, since you can pick the value of the appropriate bit each time.
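For the 1/16 case, here is a hedged sketch of that idea: a 16-entry lookup table expands one nibble of low-resolution input into a full 64-bit output word. It assumes input bit i maps to output bits 16*i .. 16*i+15; adjust for your bit order.
#include <stdint.h>

static uint64_t expand_lut[16];

static void init_expand_lut(void)
{
    for (unsigned v = 0; v < 16; v++) {
        uint64_t word = 0;
        for (unsigned i = 0; i < 4; i++)
            if (v & (1u << i))
                word |= 0xFFFFULL << (16 * i); // replicate bit i 16 times
        expand_lut[v] = word;
    }
}

// Expand one 64-bit input word into 16 output words (one per nibble).
static void expand_word(uint64_t in, uint64_t out[16])
{
    for (unsigned n = 0; n < 16; n++)
        out[n] = expand_lut[(in >> (4 * n)) & 0xF];
}
A byte-indexed 256-entry table would halve the loop count at the cost of a larger table; for billions of bits that trade-off may be worth measuring.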

How to build N bits variables in C++?

I am dealing with a very large list of booleans in C++, around 2^N items of N booleans each. Because memory is critical in such a situation, i.e. with exponential growth, I would like to build an N-bit-long variable to store each element.
For small N, for example 24, I just use unsigned long int. It takes 64MB ((2^24)*32/8/1024/1024). But I need to go up to 36. The only option with a built-in variable is unsigned long long int, but it takes 512GB ((2^36)*64/8/1024/1024/1024), which is a bit too much.
With a 36-bit variable it would work for me, because the size drops to 288GB ((2^36)*36/8/1024/1024/1024), which fits on a node of my supercomputer.
I tried std::bitset, but std::bitset<N> creates an element of at least 8B.
So a list of std::bitset<1> is much larger than a list of unsigned long int.
It is because std::bitset just changes the representation, not the container.
I also tried boost::dynamic_bitset<> from Boost, but the result is even worse (at least 32B!), for the same reason.
I know one option is to write all elements as one chain of booleans, 2473901162496 bits (2^36*36), and then store them in 38654705664 (2473901162496/64) unsigned long long ints, which gives 288GB (38654705664*64/8/1024/1024/1024). Accessing an element is then just a game of finding which words the 36 bits are stored in (it can be either one or two). But it would mean a lot of rewriting of the existing code (3000 lines), because mapping becomes impossible, and because adding and deleting items during execution in some functions would surely be complicated, confusing and challenging, and the result would most likely not be efficient.
How can I build an N-bit variable in C++?
How about a struct with 5 chars (and perhaps some fancy operator overloading as needed to keep it compatible with the existing code)? A struct with a long and a char probably won't work because of padding / alignment...
Basically your own mini BitSet optimized for size:
struct Bitset40 {
    unsigned char data[5];

    bool getBit(int index) const {
        return (data[index / 8] & (1 << (index % 8))) != 0;
    }

    void setBit(int index, bool newVal) {
        if (newVal) {
            data[index / 8] |= (1 << (index % 8));
        } else {
            data[index / 8] &= ~(1 << (index % 8));
        }
    }
};
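On typical compilers sizeof(Bitset40) is exactly 5, since a char array needs no padding. A quick usage sketch:
Bitset40 b = {};        // all 40 bits start at zero
b.setBit(36, true);
bool on = b.getBit(36); // true
static_assert(sizeof(Bitset40) == 5, "expected no padding");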
Edit: As geza has also pointed out in the comments, the "trick" here is to get as close as possible to the minimum number of bytes needed (without wasting memory by triggering alignment losses, padding or pointer indirection; see http://www.catb.org/esr/structure-packing/).
Edit 2: If you feel adventurous, you could also try a bit field (and please let us know how much space it actually consumes):
struct Bitset36 {
    unsigned long long data : 36;
};
I'm not an expert, but this is what I would "try": find the size of the smallest type your compiler supports (it should be char). You can check with sizeof; you should get 1, meaning 1 byte, so 8 bits.
So if you wanted a 24-bit type, you would need 3 chars. For 36 you would need a 5-char array, with 4 bits of wasted padding on the end. This can easily be accounted for.
i.e.
char typeSize[3] = {0}; // should hold 24 bits
Now make a bit mask to access each position of typeSize.
const unsigned char one = 0b0000'0001;
const unsigned char two = 0b0000'0010;
const unsigned char three = 0b0000'0100;
const unsigned char four = 0b0000'1000;
const unsigned char five = 0b0001'0000;
const unsigned char six = 0b0010'0000;
const unsigned char seven = 0b0100'0000;
const unsigned char eight = 0b1000'0000;
Now you can use bitwise OR to set bits to 1 where needed:
typeSize[1] |= four;
typeSize[0] |= (four | five);
To turn off bits, use the & operator with an inverted mask:
typeSize[0] &= ~four;
typeSize[2] &= ~(four | five);
You can read the position of each bit with the & operator.
typeSize[0] & four
Bear in mind, I don't have a compiler handy to try this out so hopefully this is a useful approach to your problem.
Good luck ;-)
You can use an array of unsigned long int and store and retrieve the needed bit chains with bitwise operations. This approach avoids space overhead.
A simplified example with an unsigned byte array B[] and 12-bit values V (represented as unsigned short):
Set V[0]:
B[0] = V & 0xFF;        // low byte
B[1] = B[1] & 0xF0;     // clear the low nibble
B[1] = B[1] | (V >> 8); // fill the low nibble of the second byte with the high nibble of V
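Generalizing the example, a hypothetical pair of helpers for the k-th packed 12-bit value: an element starts either on a byte boundary (k even) or exactly mid-byte (k odd), so two cases suffice.
#include <stdint.h>
#include <stddef.h>

void set12(uint8_t *B, size_t k, uint16_t v) // v uses its low 12 bits
{
    size_t idx = (k * 12) / 8;
    if (k % 2 == 0) {                                // starts on a byte boundary
        B[idx] = v & 0xFF;                           // low byte
        B[idx + 1] = (B[idx + 1] & 0xF0) | (v >> 8); // high nibble of v
    } else {                                         // starts mid-byte
        B[idx] = (B[idx] & 0x0F) | ((v & 0xF) << 4); // low nibble of v
        B[idx + 1] = v >> 4;                         // top 8 bits of v
    }
}

uint16_t get12(const uint8_t *B, size_t k)
{
    size_t idx = (k * 12) / 8;
    if (k % 2 == 0)
        return B[idx] | ((uint16_t)(B[idx + 1] & 0x0F) << 8);
    return (B[idx] >> 4) | ((uint16_t)B[idx + 1] << 4);
}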

Grabbing n bits from a byte

I'm having a little trouble grabbing n bits from a byte.
I have an unsigned integer. Let's say our number in hex is 0x2A, which is 42 in decimal. In binary it looks like this: 0010 1010. How would I grab the first 5 bits which are 00101 and the next 3 bits which are 010, and place them into separate integers?
I know how to extract a whole byte, which is simply
int x = (number >> (8*n)) & 0xff; // n being the byte index
which I saw in another post on Stack Overflow, but I wasn't sure how to get separate bits out of the byte. If anyone could help me out, that'd be great! Thanks!
Integers are represented inside a machine as a sequence of bits; fortunately for us humans, programming languages provide a mechanism to show us these numbers in decimal (or hexadecimal), but that does not alter their internal representation.
You should review the bitwise operators &, |, ^ and ~ as well as the shift operators << and >>, which will help you understand how to solve problems like this.
The last 3 bits of the integer are:
x & 0x7
The five bits above them, starting at bit 3, are:
(x >> 3) & 0x1F // drop the last three bits, keep the next five
"grabbing" parts of an integer type in C works like this:
You shift the bits you want to the lowest position.
You use & to mask the bits you want - ones means "copy this bit", zeros mean "ignore"
So, in you example. Let's say we have a number int x = 42;
first 5 bits:
(x >> 3) & ((1 << 5)-1);
or
(x >> 3) & 31;
To fetch the lower three bits:
(x >> 0) & ((1 << 3)-1)
or:
x & 7;
Say you want hi bits from the top and lo bits from the bottom (5 and 3 in your example):
top = (n >> lo) & ((1 << hi) - 1)
bottom = n & ((1 << lo) - 1)
Explanation:
For the top, first get rid of the lower bits (shift right), then mask the remainder with an "all ones" mask (if you have a binary number like 0010000, subtracting one results in 0001111 - the same number of 1s as there were 0s in the original number).
For the bottom it's the same, except you don't need the initial shift.
top = (42 >> 3) & ((1 << 5) - 1) = 5 & (32 - 1) = 5 = 00101b
bottom = 42 & ((1 << 3) - 1) = 42 & (8 - 1) = 2 = 010b
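Wrapped up as a function (a trivial helper named here for illustration):
unsigned extract_bits(unsigned n, unsigned lo, unsigned count) // count < 32
{
    return (n >> lo) & ((1u << count) - 1);
}
// extract_bits(42, 3, 5) == 5 (00101b); extract_bits(42, 0, 3) == 2 (010b)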
You could use bitfields for this. Bitfields are special structs where you can specify variables in bits.
typedef struct {
unsigned char a:5;
unsigned char b:3;
} my_bit_t;
unsigned char c = 0x2A; // 42
my_bit_t *n = (my_bit_t *)&c;
int first = n->a;
int sec = n->b;
Bit fields are described in more detail at http://www.cs.cf.ac.uk/Dave/C/node13.html#SECTION001320000000000000000
The charm of bit fields is that you do not have to deal with shift operators etc. The notation is quite easy. As always when manipulating bits, there is a portability issue.
int x = (number >> 3) & 0x1f;
will give you an integer where the last 5 bits are bits 3-7 of number and the other bits are zero.
Similarly,
int y = number & 0x7;
will give you an integer where the last 3 bits are the last 3 bits of number and the rest are zero.
just get rid of the 8* in your code.
int input = 42;
int high3 = input >> 5;
int low5 = input & (32 - 1); // 32 = 2^5
bool isBit3On = input & 4; // 4 = 2^(3-1)

How to read/write arbitrary bits in C/C++

Assuming I have a byte b with the binary value of 11111111
How do I, for example, read a 3-bit integer value starting at the second bit, or write a four-bit integer value starting at the fifth bit?
Some 2+ years after I asked this question, I'd like to explain it the way I'd have wanted it explained back when I was still a complete newbie, in the way most beneficial to people who want to understand the process.
First of all, forget the "11111111" example value, which is not really all that suited to a visual explanation of the process. So let the initial value be 10111011 (187 decimal), which will be a little more illustrative.
1 - how to read a 3 bit value starting from the second bit:
    ___      <- those 3 bits
10111011
The value is 101, or 5 in decimal; there are 2 possible ways to get it:
mask and shift
In this approach, the needed bits are first masked with the value 00001110 (14 decimal), after which the result is shifted into place:
    ___
10111011 AND
00001110 =
00001010 >> 1 =
     ___
00000101
The expression for this would be: (value & 14) >> 1
shift and mask
This approach is similar, but the order of operations is reversed, meaning the original value is shifted and then masked with 00000111 (7) to only leave the last 3 bits:
    ___
10111011 >> 1 =
     ___
01011101 AND
00000111 =
00000101
The expression for this would be: (value >> 1) & 7
Both approaches involve the same amount of complexity, and therefore will not differ in performance.
2 - how to write a 3 bit value starting from the second bit:
In this case the initial value is known, and when that is the case in code, you may be able to come up with a way to set the known value to another known value using fewer operations; in reality, however, this is rarely the case. Most of the time the code will know neither the initial value nor the one to be written.
This means that in order for the new value to be successfully "spliced" into the byte, the target bits must be set to zero, after which the shifted value is "spliced" in place. That is the first step:
    ___
10111011 AND
11110001 (241) =
10110001 (masked original value)
The second step is to shift the value we want to write into the 3-bit position. Say we want to change it from 101 (5) to 110 (6):
     ___
00000110 << 1 =
    ___
00001100 (shifted "splice" value)
The third and final step is to splice the masked original value with the shifted "splice" value:
10110001 OR
00001100 =
    ___
10111101
The expression for the whole process would be: (value & 241) | (6 << 1)
Bonus - how to generate the read and write masks:
Naturally, using a binary-to-decimal converter is far from elegant, especially in the case of 32- and 64-bit containers, where the decimal values get crazy big. It is possible to generate the masks with expressions, which the compiler can resolve during compilation:
read mask for "mask and shift": ((1 << fieldLength) - 1) << (fieldIndex - 1), assuming that the index of the first bit is 1 (not zero)
read mask for "shift and mask": (1 << fieldLength) - 1 (the index does not play a role here, since the value is always shifted down to the first bit)
write mask: just invert the "mask and shift" mask expression with the ~ operator
How does it work (with the 3-bit field beginning at the second bit from the examples above)?
00000001 << 3 =
00001000 - 1 =
00000111 << 1 =
00001110     (read mask); applying ~ gives
11110001     (write mask)
The same examples apply to wider integers and arbitrary bit width and position of the fields, with the shift and mask values varying accordingly.
Also note that the examples assume an unsigned integer, which is what you want to use as a portable bit-field alternative (regular bit-fields are in no way guaranteed by the standard to be portable): with unsigned integers, both left and right shifts insert a padding 0, which is not the case when right-shifting a signed integer.
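A small demonstration of that last point; note that the result of right-shifting a negative signed value is implementation-defined before C++20, though in practice it is an arithmetic shift:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t u = 0xBB;                   // 10111011
    int8_t  s = static_cast<int8_t>(u); // same bits, value -69
    std::cout << (u >> 1) << '\n';      // 93  = 01011101, a 0 was shifted in
    std::cout << (s >> 1) << '\n';      // -35 = ...11011101, the sign bit was shifted in
}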
Even easier:
Using this set of macros (but only in C++ since it relies on the generation of member functions):
#define GETMASK(index, size) ((((size_t)1 << (size)) - 1) << (index))
#define READFROM(data, index, size) (((data) & GETMASK((index), (size))) >> (index))
#define WRITETO(data, index, size, value) ((data) = (((data) & (~GETMASK((index), (size)))) | (((value) << (index)) & (GETMASK((index), (size))))))
#define FIELD(data, name, index, size) \
inline decltype(data) name() const { return READFROM(data, index, size); } \
inline void set_##name(decltype(data) value) { WRITETO(data, index, size, value); }
You could go for something as simple as:
struct A {
unsigned int bitData;
FIELD(bitData, one, 0, 1)
FIELD(bitData, two, 1, 2)
};
And have the bit fields implemented as properties you can easily access:
A a;
a.set_two(3);
cout << a.two();
Replace decltype with gcc's typeof pre-C++11.
You need to shift and mask the value, so for example...
If you want to read the first two bits, you just need to mask them off like so:
int value = input & 0x3;
If you want to offset it you need to shift right N bits and then mask off the bits you want:
int value = (input >> 1) & 0x3;
To read three bits like you asked in your question.
int value = (input >> 1) & 0x7;
Just use these and feel free:
#define BitVal(data,y)    ((data >> y) & 1)   /** Return Data.Y value    **/
#define SetBit(data,y)    (data |= (1 << y))  /** Set Data.Y to 1        **/
#define ClearBit(data,y)  (data &= ~(1 << y)) /** Clear Data.Y to 0      **/
#define ToggleBit(data,y) (data ^= (1 << y))  /** Toggle Data.Y value    **/
#define Toggle(data)      (data = ~data)      /** Toggle Data value      **/
For example:
uint8_t number = 0x05;            // 0b00000101
uint8_t bit_2 = BitVal(number,2); // bit_2 = 1
uint8_t bit_1 = BitVal(number,1); // bit_1 = 0
SetBit(number,1);                 // number = 0x07 => 0b00000111
ClearBit(number,2);               // number = 0x03 => 0b00000011
You have to do a shift and mask (AND) operation.
Let b be any byte and p be the index (>= 0) of the bit from which you want to take n bits (>= 1).
First you have to shift b right by p bits:
x = b >> p;
Second you have to mask the result with n ones:
mask = (1 << n) - 1;
y = x & mask;
You can put everything in a macro:
#define TAKE_N_BITS_FROM(b, p, n) (((b) >> (p)) & ((1 << (n)) - 1))
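For example, with the value 10111011 from the answer above:
unsigned char b = 0xBB;            // 10111011
int v = TAKE_N_BITS_FROM(b, 1, 3); // (0xBB >> 1) & 0x7 == 5, i.e. the 101 field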
"How do I for example read a 3 bit integer value starting at the second bit?"
int number = // whatever;
uint8_t val; // uint8_t is the smallest data type capable of holding 3 bits
val = (number & (1 << 2 | 1 << 3 | 1 << 4)) >> 2;
(I assumed that "second bit" is bit #2, i. e. the third bit really.)
To read bits, use std::bitset:
const int bits_in_byte = 8;
char myChar = 's';
cout << bitset<sizeof(myChar) * bits_in_byte>(myChar);
To write you need to use the bit-wise operators & ^ | << >>; make sure to learn what they do.
For example, to produce 00100100 you set single 1 bits and shift them into position with <<, a bit like an old typewriter: you write, and shift the paper. Here that means OR-ing together a 1 shifted left 5 times and a 1 shifted left 2 times:
const int bits_in_byte = 8;
char myChar = 0;
myChar = myChar | (0x1 << 5 | 0x1 << 2);
cout << bitset<sizeof(myChar) * bits_in_byte>(myChar);
int x = 0xFF; //your number - 11111111
How do I for example read a 3 bit integer value starting at the second bit
int y = (x & (0x7 << 2)) >> 2; // 0x7 is 111; shift it left 2 to select bits 2-4,
                               // then shift the result back down
If you keep grabbing bits from your data, you might want to use a bitfield. You'll just have to set up a struct and load it with only ones and zeroes:
struct bitfield {
    unsigned int bit : 1;
};
struct bitfield *bitstream;
then later on load it like this (replacing char with int or whatever data you are loading):
long int i;
int j, k;
unsigned char c, d;

bitstream = malloc(sizeof(struct bitfield) * charstreamlength * sizeof(char));
for (i = 0; i < charstreamlength; i++) {
    c = charstream[i];
    for (j = 0; j < sizeof(char) * 8; j++) {
        d = c;
        d = d >> (sizeof(char) * 8 - j - 1); // bring bit j (from the left) to the bottom
        d = d << (sizeof(char) * 8 - 1);     // keep only that bit, in the top position
        k = d;
        if (k == 0) {
            bitstream[sizeof(char) * 8 * i + j].bit = 0;
        } else {
            bitstream[sizeof(char) * 8 * i + j].bit = 1;
        }
    }
}
Then access elements:
bitstream[bitpointer].bit=...
or
...=bitstream[bitpointer].bit
All of this assumes you are working on x86/64, not ARM, since ARM can be big- or little-endian.

Find "edges" in 32 bits word bitpattern

I'm trying to find the most efficient algorithm to count "edges" in a bit pattern, an edge meaning a change from 0 to 1 or from 1 to 0. I sample each bit every 250 us and shift it into a 32-bit unsigned variable.
This is my algorithm so far
void CountEdges(void)
{
    uint_least32_t feedback_samples_copy = feedback_samples;

    signal_edges = 0;
    while (feedback_samples_copy > 0)
    {
        uint_least8_t flank_information = (feedback_samples_copy & 0x03);
        if (flank_information == 0x01 || flank_information == 0x02)
        {
            signal_edges++;
        }
        feedback_samples_copy >>= 1;
    }
}
It needs to be at least 2 or 3 times as fast.
You should be able to bitwise XOR the value with a copy of itself shifted by one bit to get a bit pattern representing the flipped bits. Then use one of the bit counting tricks on this page: http://graphics.stanford.edu/~seander/bithacks.html to count how many 1s there are in the result.
One thing that may help is to precompute the edge count for every possible 8-bit value (a 512-entry lookup table, since you have to include the bit that precedes each value) and then sum the counts one byte at a time.
// prevBit is the last bit of the previous 32-bit word
// edgeLut is a 512 entry precomputed edge count table
// Some of the shifts and &s are extraneous, but are left in for clarity
edgeCount =
edgeLut[(prevBit << 8) | (feedback_samples >> 24) & 0xFF] +
edgeLut[(feedback_samples >> 16) & 0x1FF] +
edgeLut[(feedback_samples >> 8) & 0x1FF] +
edgeLut[(feedback_samples >> 0) & 0x1FF];
prevBit = feedback_samples & 0x1;
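The table construction is left out above; a possible way to precompute it (each 9-bit index represents 9 consecutive samples, and the entry counts the changes between adjacent bits):
uint8_t edgeLut[512];

void initEdgeLut(void)
{
    for (int v = 0; v < 512; v++) {
        int count = 0;
        for (int i = 0; i < 8; i++) // 8 adjacent pairs in 9 bits
            count += ((v >> i) & 1) ^ ((v >> (i + 1)) & 1);
        edgeLut[v] = (uint8_t)count;
    }
}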
My suggestion:
copy your input value to a temp variable, left-shifted by one
copy the LSB of your input into your temp variable
XOR the two values; every bit set in the result represents one edge
use this algorithm to count the number of bits set
This might be the code for the first 3 steps:
uint32_t input; // some value
uint32_t temp = (input << 1) | (input & 0x00000001);
uint32_t result = input ^ temp;
// continue by counting the bits set in result
// ...
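For the final step, one option (assuming a GCC/Clang-style builtin is acceptable; the bithacks page linked above has portable alternatives):
signal_edges = __builtin_popcount(result); // number of set bits = number of edges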
Create a look-up table so you can get the transitions within a byte or 16-bit value in one shot; then all you need to do is look at the differences in the 'edge' bits between bytes (or 16-bit values).
You are looking at only 2 bits per iteration.
The fastest algorithm would probably be to build a hash table for all possible values, but since there are 2^32 values that is not the best idea.
Why not look at 3, 4, 5, ... bits in one step instead? You can, for instance, precalculate the edge count for all 4-bit combinations. Just take care of the possible edges between the pieces.
You could always use a lookup table for, say, 8 bits at a time; this way you get a speed improvement of around 8 times.
Don't forget the edges between those 8-bit chunks, though; those have to be checked 'manually'.