Does a 64 bit packed structure contain a field set to a specified value - bit-manipulation

I have an odd structure with 5 fields of bit length 12 and 4 boolean flags stored in the high bits. This all fits nicely into a 64 bit long, and as such they are stored as a 64 bit word array. What I want to do is search the array and find if any of the 12 bit fields are set to a given value.
I have tried the obvious solution of using bit shifts and masks, however this is a very hot function and needs to be optimized for speed. This led me to this page, which contains a way to check for a byte in a word in very few operations. This makes me think it is possible to do something similar with the 12 bit fields, however I am struggling to find what constants I would replace the ones given on that page with.

I'm not very versed in low level languages, but I'm in the mood to fiddle with some bits so I thought I'd give it a try.
POC: JS can't do 64-bit longs, but we can check whether the algorithm can be adapted to deal with 2x12-bit fields + 8 boolean flags (noise) in a 32-bit (u)int.
The noise is there because the original algorithm dealt with exactly 4 bytes and no further bits, but neither 32 nor 64 is divisible by 12, so we need to ensure that these additional bits don't interfere, or worse, get matched.
function hasValue(x, n) { return hasZero(x ^ (0x001001 * n)); }
function hasZero(v) { return ((v - 0x001001) & ~(v) & 0x800800); }
function hex(v) { return "0x" + v.toString(16) }
// create a random value, 2x12bit fields plus 8 random flags.
var v = Math.floor(Math.random() * 0x100000000);
console.log("value", hex(v));
// get the two fields
var a = v & 0xFFF;
console.log("check", hex(a), !!hasValue(v, a));
var b = (v >> 12) & 0xFFF;
console.log("check", hex(b), !!hasValue(v, b));
// brute force.
// check if any other value is matched.
// these should only return the 2 values from above.
for (var i = 0; i < 0x1000; ++i) {
  if (hasValue(v, i)) {
    console.log("matched", hex(i));
  }
}
Extrapolating from this, your solution should be
#define hasValue(x,n) hasZero(x ^ (0x001001001001001 * n))
#define hasZero(v) ((v - 0x001001001001001) & ~(v) & 0x800800800800800)
where all values are unsigned 64-bit integers. (Sorry, I don't know whether you have to annotate any of these numbers somehow.)
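In C you would annotate the constants with ULL (or UINT64_C) so the arithmetic stays 64-bit. A minimal sketch of how the scan might look, assuming the five fields sit in bits 0-59 and the four flags in bits 60-63 (the function name and layout assumption are mine, not from the question):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define hasZero(v)    (((v) - 0x001001001001001ULL) & ~(v) & 0x800800800800800ULL)
#define hasValue(x,n) hasZero((x) ^ (0x001001001001001ULL * (n)))

/* Returns true if any of the five 12-bit fields of any word equals n (n < 0x1000).
 * The flag bits live above bit 59, so they never reach the tested high bits
 * of the fields and cannot cause a false match. */
bool ArrayHasField(const uint64_t *words, size_t count, uint64_t n)
{
    for (size_t i = 0; i < count; ++i) {
        if (hasValue(words[i], n))
            return true;
    }
    return false;
}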

Related

C/C++ bit array resolution transform algorithms

Anyone aware of any algorithms to up/down convert bit arrays?
ie: when the resolution is 1/16:
every 1 bit = 16 bits. (low resolution to high resolution)
1010 -> 1111111111111111000000000000000011111111111111110000000000000000
and reverse, 16 bits = 1 bit (high resolution to low resolution)
1111111111111111000000000000000011111111111111110000000000000000 -> 1010
Right now I am looping bit by bit, which is not efficient. Using a whole 64-bit word would be better, but I run into issues when 64 isn't evenly divisible by the resolution (some bits may spill over into the next word).
C++:
std::vector<uint64_t> bitset;
C:
uint64_t *bitset = calloc(total_bits >> 6, sizeof(uint64_t)); // free() when done
which is accessed using:
const uint64_t idx = bit >> 6;
const uint64_t pos = bit % 64;
const bool value = (bitset[idx] >> pos) & 1U;
and set/clear:
bitset[idx] |= (1ULL << pos);
bitset[idx] &= ~(1ULL << pos);
and the OR (or AND/XOR/NOT) of two bitsets of the same resolution is done using the full 64-bit word:
bitset[idx] |= source.bitset[idx];
I am dealing with large enough bitsets (2+ billion bits) that I'm looking for any efficiency in the loops. One way I found to optimize the loop is to check each word using __builtin_popcountll, and skip ahead in the loop:
for (uint64_t bit = 0; bit < total_bits; bit++)
{
    const uint64_t idx = bit >> 6;
    const uint64_t pos = bit % 64;
    const uint64_t bits = __builtin_popcountll(bitset[idx]);
    if (!bits)
    {
        bit += 63;
        continue;
    }
    // process
}
I'm looking for algorithms/techniques more than code examples. But if you have code to share, I won't say no. Any academic research papers would be appreciated too.
Thanks in advance!
Is the resolution always between 1/2 and 1/64? Or even 1/32? Because if you need very long groups, you might need more loop nesting, which might cause some slowdown.
Are your sequences always very long (millions of bits), or is that a maximum while they are usually shorter? When doing high to low resolution, can you assume that the data is valid or not?
Here are some tricks:
uint64_t one = 1;
uint64_t n_one_bits = (one << n) - 1u; // valid for 0 to 63; not sure for 64
If your sequences are so long, you might want to check whether n is a power of 2 and have more optimized code for those cases.
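(For reference, the usual power-of-two test is a one-liner; a sketch assuming n > 0:)

#include <stdbool.h>
#include <stdint.h>

static bool is_power_of_two(uint64_t n)
{
    return (n & (n - 1)) == 0;  /* clears the lowest set bit; zero left means n had exactly one bit set */
}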
You might find some other useful tricks here:
https://graphics.stanford.edu/~seander/bithacks.html
So if your resolution is 1/16, you don't need to loop over the individual 16 bits; you can check all 16 bits at once. Then you can repeat for the next group, again and again.
If the group size is not a divisor of 64, you can shift bits as appropriate each time you would cross the 64-bit boundary. Say your resolution is 1/5: then you could process 60 bits, then shift the 4 remaining bits and combine them with the following 60 bits.
If you can assume that the data is valid, then you don't even need to shift the original number, as you can pick the value of the appropriate bit each time.
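To make the 1/16 case concrete, here is a rough sketch of the high-to-low direction under the "data is valid" assumption (each 16-bit group is all ones or all zeros); the names and layout assumptions are mine:

#include <stddef.h>
#include <stdint.h>

/* Compact a 16x-resolution bitset down to 1x: each 16-bit group becomes one bit.
 * 16 divides 64, so a group never straddles a word boundary here. */
void CompactFrom16(const uint64_t *src, uint64_t *dst, size_t dst_bits)
{
    for (size_t bit = 0; bit < dst_bits; ++bit) {
        const size_t src_bit = bit * 16;
        const uint64_t group = (src[src_bit >> 6] >> (src_bit & 63)) & 0xFFFF;

        if (group)  /* valid data: all ones or all zeros */
            dst[bit >> 6] |= 1ULL << (bit & 63);
        else
            dst[bit >> 6] &= ~(1ULL << (bit & 63));
    }
}

For a resolution like 1/5 the group can straddle a word boundary; in that case you read it in two pieces and combine them, which is exactly the "shift and combine" handling described above.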

Use bit manipulation to convert a bit from each byte in an 8-byte number to a single byte

I have a 64-bit unsigned integer. I want to check the 6th bit of each byte and return a single byte representing those 6th bits.
The obvious, "brute force" solution is:
inline const unsigned char Get6thBits(unsigned long long num) {
    unsigned char byte(0);
    for (int i = 7; i >= 0; --i) {
        byte <<= 1;
        byte |= bool((0x20ULL << 8 * i) & num);
    }
    return byte;
}
I could unroll the loop into a bunch of concatenated | statements to avoid the int allocation, but that's still pretty ugly.
Is there a faster, more clever way to do it? Maybe use a bitmask to get the 6th bits, 0x2020202020202020 and then somehow convert that to a byte?
If _pext_u64 is a possibility (this will work on Haswell and newer, it's very slow on Ryzen though), you could write this:
int extracted = _pext_u64(num, 0x2020202020202020);
This is a really literal way to implement it. pext takes a value (first argument) and a mask (second argument), at every position that the mask has a set bit it takes the corresponding bit from the value, and all bits are concatenated.
_mm_movemask_epi8 is more widely usable, you could use it like this:
__m128i n = _mm_set_epi64x(0, num);
int extracted = _mm_movemask_epi8(_mm_slli_epi64(n, 2));
pmovmskb takes the high bit of every byte in its input vector and concatenates them. The bits we want are not the high bit of every byte, so I move them up two positions with psllq (of course you could shift num directly). The _mm_set_epi64x is just some way to get num into a vector.
Don't forget to #include <intrin.h>, and none of this was tested.
Codegen seems reasonable enough
A weirder option is gathering the bits with a multiplication: (only slightly tested)
int extracted = (num & 0x2020202020202020) * 0x08102040810204 >> 56;
The idea here is that num & 0x2020202020202020 only has very few bits set, so we can arrange a product that never carries into bits that we need (or indeed at all). The multiplier is constructed to do this:
a0000000b0000000c0000000d0000000e0000000f0000000g0000000h0000000 +
0b0000000c0000000d0000000e0000000f0000000g0000000h00000000000000 +
00c0000000d0000000e0000000f0000000g0000000h000000000000000000000 etc..
Then the top byte will have all the bits "compacted" together. The lower bytes actually have something like that too, but they're missing bits that would have to come from "higher" (bits can only move to the left in a multiplication).
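Wrapping that up as a function (my wrapping, with the ULL suffixes spelled out; the same "only slightly tested" caveat applies):

#include <stdint.h>

static inline unsigned char Get6thBits_Mul(uint64_t num)
{
    const uint64_t masked = num & 0x2020202020202020ULL;  /* keep bit 5 of every byte */
    return (unsigned char)((masked * 0x0008102040810204ULL) >> 56);
}

The bit order comes out the same as the loop in the question: bit i of the result is the 6th bit of byte i of num.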

How to implement bit vectors with bitwise operations?

I am studying a question in the book Programming Pearls, and they recommended this function to set a bit in a bit vector. I'm a bit confused as to what it does.
#define BITSPERWORD 32
#define MASK 0x1F
#define SHIFT 5
#define N 1000000
int a[1 + N/BITSPERWORD];
void set(int i){
    a[i >> SHIFT] |= (1 << (i & MASK));
}
Here is my (probably wrong) interpretation of this code.
if i = 64,
1) first, it takes i and shifts it to the right by SHIFT (which is 5) bits. This is equivalent to DIVIDING (not multiplying, as I first thought) i by 2^5. So if i is 64, the index of a is 2 (64 / 2^5)
2) a[2] |= (1 << (64 & MASK))
64 & 1 = 1000000 & 01 = 1000001.
So 1 gets left shifted how many bits????
It seems the way this method works (even though I feel like there are better ways to set a bit) is that, to find the index of the ith bit, it essentially divides by 32, because that is the number of bits per word.
Since the operator used here is |, the function is setting the bit to one, not toggling it.
0x1F is actually 31, and when ANDed with i you get the remainder of dividing by 32 (not sure why they just didn't use %).
And lastly the shift takes the 1 to the proper location and ORs it into the right slot in the vector.
If you are planning to use this code,
you could write it a lot clearer without defines and using more obvious ways of doing it; I doubt it would make a difference in speed.
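For illustration, such a clearer version might look like this (a sketch with made-up names; / 32 and % 32 compile to the same shift and mask, and an unsigned array avoids shifting into the sign bit):

static unsigned a[1 + 1000000 / 32];

void set_bit(int i)   { a[i / 32] |=  (1u << (i % 32)); }
void clear_bit(int i) { a[i / 32] &= ~(1u << (i % 32)); }
int  test_bit(int i)  { return (a[i / 32] >> (i % 32)) & 1; }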
Also you should probably just use std::bitset
The use of the mask to get the remainder particularly annoyed me, because it only works when the divisor is a power of two; 31 happens to work because it's all 1's.

C/C++ Bit Array or Bit Vector

I am learning C/C++ programming and have encountered the usage of 'Bit arrays' or 'Bit Vectors'. I am not able to understand their purpose. Here are my doubts -
Are they used as boolean flags?
Can one use int arrays instead? (more memory of course, but..)
What's this concept of Bit-Masking?
If bit-masking is simple bit operations to get an appropriate flag, how does one program them? Is it not difficult to do this operation in your head to see what the flag would be, as opposed to decimal numbers?
I am looking for applications, so that I can understand better. For example -
Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers.
For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
I think you've got yourself confused between arrays and numbers, specifically what it means to manipulate binary numbers.
I'll go about this by example. Say you have a number of error messages and you want to return them in a return value from a function. Now, you might label your errors 1,2,3,4... which makes sense to your mind, but then how do you, given just one number, work out which errors have occurred?
Now, try labelling the errors 1,2,4,8,16... increasing powers of two, basically. Why does this work? Well, when you work in base 2 you are manipulating a number like 00000000 where each digit corresponds to a power of 2 determined by its position from the right. So let's say errors 1, 4 and 8 occur. Well, then that could be represented as 00001101. In reverse, the first digit = 1*2^0, the third digit = 1*2^2 and the fourth digit = 1*2^3. Adding them all up gives you 13.
Now, we are able to test whether such an error has occurred by applying a bitmask. For example, if you wanted to work out if error 8 has occurred, use the bit representation of 8 = 00001000. Now, in order to extract whether or not that error has occurred, use a binary AND like so:
00001101
& 00001000
= 00001000
I'm sure you know how an AND works or can deduce it from the above - working digit-wise, if the two digits are both 1, the result is 1, else it is 0.
Now, in C:
int func(...)
{
    int retval = 0;
    if ( some_test_that_means_an_error )
    {
        retval += 1;
    }
    if ( some_other_test_that_means_an_error )
    {
        retval += 2;
    }
    return retval;
}

int anotherfunc(...)
{
    uint8_t x = func(...);
    /* binary AND with 8 and shift 3 places to the right
     * so that the resultant expression is either 1 or 0 */
    if ( ( ( x & 0x08 ) >> 3 ) == 1 )
    {
        /* that error occurred */
    }
}
Now, to practicalities. When memory was scarce and protocols didn't have the luxury of verbose XML etc., it was common to define a field as being so many bits wide. In that field, you assign various bits (flags, powers of 2) a certain meaning and apply binary operations to deduce whether they are set, then operate on these.
I should also add that binary operations are close in idea to the underlying electronics of a computer. Imagine if the bit fields corresponded to the output of various circuits (carrying current or not). By using enough combinations of said circuits, you make... a computer.
Regarding the usage of the bit array:
If you know there are "only" 1 million numbers, you use an array of 1 million bits. In the beginning all bits will be zero, and every time you read a number you use this number as an index and change the bit at this index to one (if it's not one already).
After reading all the numbers, the missing numbers are the indices of the zeros in the array.
For example, if we had only numbers between 0 - 4, the array would look like this in the beginning: 0 0 0 0 0.
If we read the numbers: 3, 2, 2
the array would look like this: read 3 --> 0 0 0 1 0. read 2 --> 0 0 1 1 0. read 2 (again) --> 0 0 1 1 0. Check the indices of the zeroes: 0, 1, 4 - those are the missing numbers.
BTW, of course you can use integers instead of bits, but that may take (depending on the system) 32 times the memory.
Sivan
Bit Arrays or Bit Vectors can be thought of as an array of boolean values. Normally a boolean variable needs at least one byte of storage, but in a bit array/vector only one bit is needed.
This comes in handy if you have lots of such data, because the memory savings are large.
Another usage is if you have numbers which do not exactly fit into the standard variable sizes of 8, 16, 32 or 64 bits. This way you could store into a 16-bit vector one number that is 4 bits, one that is 2 bits and one that is 10 bits in size. Normally you would have to use 3 variables of 8, 8 and 16 bits, so 50% of the storage would be wasted.
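For illustration, C's bit-fields can express that packing directly (whether the struct really occupies only 16 bits is compiler-dependent, so treat this as a sketch):

#include <stdint.h>

struct packed_value {
    uint16_t four_bit : 4;
    uint16_t two_bit  : 2;
    uint16_t ten_bit  : 10;  /* 4 + 2 + 10 = 16 bits total */
};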
But all of this is very rarely used in business applications; it comes up more often when interfacing with drivers through pinvoke/interop functions and when doing low-level programming.
Bit Arrays or Bit Vectors are used as a mapping from position to some bit value. Yes, it's basically the same thing as an array of bool, but a typical bool implementation is one to four bytes long and uses too much space.
We can store the same amount of data much more efficiently by using arrays of words and binary masking operations and shifts to store and retrieve it (less overall memory used, fewer memory accesses, fewer cache misses, fewer memory page swaps). The code to access individual bits is still quite straightforward.
There is also some built-in bit field support in the C language (you write things like int i:1; to say "only consume one bit"), but it is not available for arrays and you have less control over the overall result (details of the implementation depend on the compiler and alignment issues).
Below is a possible way to answer your "search missing numbers" question. I fixed the int size to 32 bits to keep things simple, but it could be written using sizeof(int) to make it portable. And (depending on the compiler and target processor) the code could be made faster by using >> 5 instead of / 32 and & 31 instead of % 32, but that is just to give the idea.
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
int main(){
    /* put all numbers from 0 to 999999 in a file, except 765, 777760 and 777791 */
    {
        printf("writing test file\n");
        int x = 0;
        FILE * f = fopen("testfile.txt", "w");
        for (x = 0; x < 1000000; ++x){
            if (x == 765 || x == 777760 || x == 777791){
                continue;
            }
            fprintf(f, "%d\n", x);
        }
        fprintf(f, "%d\n", 57768); /* this one is a duplicate */
        fclose(f);
    }
    uint32_t bitarray[1000000 / 32] = {0}; /* must start all-zero: a zero bit means "not seen" */
    /* read file containing integers in the range [0,1000000) */
    /* any non number is considered as separator */
    /* the goal is to find missing numbers */
    printf("Reading test file\n");
    {
        unsigned int x = 0;
        FILE * f = fopen("testfile.txt", "r");
        while (1 == fscanf(f, " %u", &x)){
            bitarray[x / 32] |= 1u << (x % 32);
        }
        fclose(f);
    }
    /* find missing numbers in bitarray */
    {
        int x = 0;
        for (x = 0; x < (1000000 / 32); ++x){
            uint32_t n = bitarray[x];
            if (n != (uint32_t)-1){
                printf("Missing number(s) between %d and %d [%x]\n",
                       x * 32, (x+1) * 32, bitarray[x]);
                int b;
                for (b = 0; b < 32; ++b){
                    if (0 == (n & (1u << b))){
                        printf("missing number is %d\n", x*32+b);
                    }
                }
            }
        }
    }
    return 0;
}
That is used for bit flag storage, as well as for parsing the fields of various binary protocols, where 1 byte is divided into a number of bit-fields. This is widely used in protocols like TCP/IP, up to ASN.1 encodings, OpenPGP packets, and so on.
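As a small illustration of such a split, the first byte of an IPv4 header packs two 4-bit fields (the version and the header length in 32-bit words); a sketch with a made-up helper name:

#include <stdint.h>

static void split_ipv4_first_byte(uint8_t b, unsigned *version, unsigned *ihl)
{
    *version = (b >> 4) & 0x0F;  /* high nibble: IP version */
    *ihl     =  b       & 0x0F;  /* low nibble: header length in 32-bit words */
}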

Find "edges" in 32 bits word bitpattern

I'm trying to find the most efficient algorithm to count "edges" in a bit-pattern. An edge means a change from 0 to 1 or 1 to 0. I am sampling each bit every 250 us and shifting it into a 32 bit unsigned variable.
This is my algorithm so far
void CountEdges(void)
{
    uint_least32_t feedback_samples_copy = feedback_samples;
    signal_edges = 0;
    while (feedback_samples_copy > 0)
    {
        uint_least8_t flank_information = (feedback_samples_copy & 0x03);
        if (flank_information == 0x01 || flank_information == 0x02)
        {
            signal_edges++;
        }
        feedback_samples_copy >>= 1;
    }
}
It needs to be at least 2 or 3 times as fast.
You should be able to bitwise XOR them together to get a bit pattern representing the flipped bits. Then use one of the bit counting tricks on this page: http://graphics.stanford.edu/~seander/bithacks.html to count how many 1's there are in the result.
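A minimal sketch of that combination (assuming GCC/Clang's __builtin_popcount is acceptable; any of the counting tricks from that page would do instead):

#include <stdint.h>

unsigned CountEdges32(uint32_t samples)
{
    /* Bit i of 'changes' is set exactly where the samples at positions i and i+1
     * differ; the top bit is compared against an implicit 0, just like the
     * original loop once everything above it has been shifted out. */
    uint32_t changes = samples ^ (samples >> 1);
    return (unsigned)__builtin_popcount(changes);
}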
One thing that may help is to precompute the edge count for all possible 8-bit values (a 512-entry lookup table, since you have to include the bit that precedes each value) and then sum up the count 1 byte at a time.
// prevBit is the last bit of the previous 32-bit word
// edgeLut is a 512 entry precomputed edge count table
// Some of the shifts and & are extraneous, but there for clarity
edgeCount =
edgeLut[(prevBit << 8) | (feedback_samples >> 24) & 0xFF] +
edgeLut[(feedback_samples >> 16) & 0x1FF] +
edgeLut[(feedback_samples >> 8) & 0x1FF] +
edgeLut[(feedback_samples >> 0) & 0x1FF];
prevBit = feedback_samples & 0x1;
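The answer doesn't show how the table is filled; one way to precompute it (my guess at the intended construction, where each 9-bit index is a byte plus the one adjacent bit above it):

#include <stdint.h>

static uint8_t edgeLut[512];

/* Each entry is the number of transitions among the 8 adjacent bit pairs
 * of its 9-bit index. */
static void InitEdgeLut(void)
{
    for (unsigned v = 0; v < 512; ++v)
    {
        unsigned transitions = (v ^ (v >> 1)) & 0xFF;  /* the 8 pair-differences */
        edgeLut[v] = (uint8_t)__builtin_popcount(transitions);
    }
}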
My suggestion:
copy your input value to a temp variable, left shifted by one
copy the LSB of your input to your temp variable
XOR the two values. Every bit set in the result value represents one edge.
use this algorithm to count the number of bits set.
This might be the code for the first 3 steps:
uint32_t input; // some value
uint32_t temp = (input << 1) | (input & 0x00000001);
uint32_t result = input ^ temp;
//continue to count the bits set in result
//...
Create a look-up table so you can get the transitions within a byte or 16-bit value in one shot - then all you need to do is look at the differences in the 'edge' bits between bytes (or 16-bit values).
You are looking at only 2 bits during every iteration.
The fastest algorithm would probably be to build a hash table for all possible values. Since there are 2^32 values, that is not the best idea.
But why don't you look at 3, 4, 5 ... bits in one step? You can, for instance, precalculate the edge count for all 4-bit combinations. Just take care of possible edges between the pieces.
You could always use a lookup table for, say, 8 bits at a time.
This way you get a speed improvement of around 8 times.
Don't forget to check for edges between those 8-bit chunks though; these then have to be checked 'manually'.