How to ignore specific bits in a 32-bit integer - c++

I am polling a 32-bit register in a motor driver for a value.
Only bits 0-9 are required, the rest need to be ignored.
How do I ignore bits 10-31?
[image of register bits]
In order to poll the motor driver for a value, I send the location of the register, which sends back the entire 32-bit number. But I only need bits 0-9 to display.
Serial.println(sendData(0x35, 0));

If you want to extract such bits then you must mask the whole integer with a value that keeps just the bits you are interested in.
This can be done with the bitwise AND (&) operator, e.g.:
uint32_t value = reg & 0x3ff;
uint32_t value = reg & 0b1111111111; // if you have C++14 binary literals
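Applied to the question's polling call (sendData and the register address come straight from the question), the whole fix can be a one-liner:
Serial.println(sendData(0x35, 0) & 0x3FF); // prints only bits 0-9, in decimal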

Rather than Serial.println() I'd go with Serial.print().
You can then just print out the specific bits that you're interested in with a for loop.
auto data = sendData(0x35, 0);
for (int i = 9; i >= 0; --i)
    Serial.print((data >> i) & 1);
This prints the ten bits most significant first. Printing the value any other way will include extra bits, since there's no built-in type that holds exactly 10 bits.

You do a bitwise AND with a number whose last 10 bits are set to 1. This clears all the other bits. For example:
value = value & ((1<<10) - 1);
Or
value = value & 0x3FF;
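The same trick generalizes to any width. A minimal sketch (the helper name is mine), valid for 0 < n < 32 since shifting a 32-bit 1 by 32 or more is undefined behavior:
uint32_t low_bits_mask(unsigned n) {
    return (uint32_t(1) << n) - 1; // n low bits set; n = 10 gives 0x3FF
}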

Related

Do I have to set most significant bits to zero if I shift right?

Let's say I have a 64-bit number with some bits set that hold a value, say three bits. I have a mask to get that value. To get the value of those three bits, I bitwise AND the number with the mask, which sets all other bits to zero. I then need to shift right so the least significant bit of the three-bit value lands in the least significant bit position of the 64-bit number. After I shift right, do I need to mask again to ensure all the bits to the left of those three bits are zero?
You can do shift first then the mask and accomplish what you want:
int value = 0xdeadbeef;
value >>= 15;
value &= 0x7;
In prior versions of the C++ standard, right-shifts of negative values were implementation-defined because signed integers could be one's-complement, two's-complement or sign-and-magnitude.
But all implementations of (modern) C++ target CPUs using two's-complement, and a lot of existing code relies on that implementation detail. In C++20 this was finally acknowledged, and signed integers are now defined to be two's-complement.
The way shift right works depends on the type of the argument:
int value = -1;
value >>= 10;
Assuming two's-complement, which is now required, this will use an arithmetic shift and preserve the sign bit. So after the shift the value will still be -1 and have all bits set. If you mask before the shift, then after the shift you get more bits than you bargained for.
unsigned int value = 0xFFFFFFFF;
value >>= 10;
This will use a logical shift and add zeroes to the left. So if you mask before the shift then you still get the right bits after the shift.
But why mask before the shift? If you mask after the shift then you always get the right bits regardless of the type.
Do I have to set most significant bits to zero if I shift right?
After I shift right, do I need to mask again to ensure only all the bits to the left of those three bits are zero?
Yes, if the masked value has a signed type: a second mask is needed to cope with the copies of the sign bit shifted in.
No, if the masked value has an unsigned type:
uint64_t mask = ...;
uint64_t masked_value = mask & value;
uint64_t final = masked_value >> shift_amount;
If code did:
int64_t mask = 7LL << shift_amount;
int64_t masked_value = mask & value;
int64_t almost_final = masked_value >> shift_amount;
int final = (int) (almost_final & 7);
A smart compiler may emit code as efficient as the unsigned approach above.

General algorithm for reading n bits and padding with zeros

I need a function to read n bits starting from bit x (bit indices start from zero) and, if the result is not byte-aligned, pad it with zeros. The function will receive a uint8_t array as input, and should return a uint8_t array as well. For example, I have a file with the following contents:
1011 0011 0110 0000
Read three bits starting from the third bit (x=2, n=3); result:
1100 0000
There's no (theoretical) limit on input and bit pattern lengths
Implementing such a bitfield extraction efficiently, beyond the direct bit-serial algorithm, isn't precisely hard but is a tad cumbersome.
Effectively it boils down to an inner loop reading a pair of bytes from the input for each output byte, shifting the resulting word into place based on the source bit offset, and writing back the upper or lower byte. In addition the final output byte is masked based on the length.
Below is my (poorly-tested) attempt at an implementation:
#include <limits.h> // CHAR_BIT, UCHAR_MAX
#include <stddef.h> // size_t

void extract_bitfield(unsigned char *dstptr, const unsigned char *srcptr, size_t bitpos, size_t bitlen) {
    // Skip to the source byte covering the first bit of the range
    srcptr += bitpos / CHAR_BIT;
    // Similarly work out the expected, inclusive, final output byte
    unsigned char *endptr = &dstptr[bitlen / CHAR_BIT];
    // Truncate the bit-positions to offsets within a byte
    bitpos %= CHAR_BIT;
    bitlen %= CHAR_BIT;
    // Scan through and write out a correctly shifted version of every destination byte
    // via an intermediate shifter register
    unsigned long accum = *srcptr++;
    while (dstptr <= endptr) {
        accum = accum << CHAR_BIT | *srcptr++;
        *dstptr++ = accum << bitpos >> CHAR_BIT;
    }
    // Mask out the unwanted LSB bits not covered by the length
    *endptr &= ~(UCHAR_MAX >> bitlen);
}
Beware that the code above may read past the end of the source buffer and somewhat messy special handling is required if you can't set up the overhead to allow this. It also assumes sizeof(long) != 1.
Of course, to get efficiency out of this you will want to use as wide a native word as possible. However, if the target buffer isn't necessarily word-aligned then things get even messier. Furthermore, little-endian systems will need byte-swizzling fix-ups.
Another subtlety to take heed of is the potential inability to shift by a whole word at once; that is, shift counts are frequently interpreted modulo the word length.
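As a minimal illustration of that caveat (the guard function is my own): x86 masks 32-bit shift counts to 5 bits, and in C++ shifting by the full width is outright undefined, so a portable full-word shift needs a guard:
#include <cstdint>

uint32_t shl_safe(uint32_t x, unsigned n) {
    return n >= 32 ? 0 : x << n; // x << 32 would be undefined behavior
}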
Anyway, happy bit-hacking!
Basically it's still a bunch of shift and addition operations.
I'll use a slightly larger example to demonstrate this.
Suppose we are given an input of 4 characters, and x = 10, n = 18.
00101011 10001001 10101110 01011100
First we need to locate the character that contains our first bit, by x / 8, which gives us 1 (the second character) in this case. We also need the offset within that character, by x % 8, which equals 2.
Now we can get the first character of the solution in three operations.
Left-shift the second character 10001001 by 2 bits, which gives us 00100100.
Right-shift the third character 10101110 by 6 bits (from 8 - 2), which gives us 00000010.
Adding (or OR-ing) these two characters gives us the first character of the result: 00100110.
Loop this routine for n / 8 rounds. Then, if n % 8 is not 0, extract that many bits from the next character; this can be done in several ways (see the sketch below).
So in this example, our second round gives us 10111001, and the last step gives us 01, then we pad the remaining bits with 0s.
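Here is a sketch of that routine (the function name is mine and, like the answer above, it may read one byte past the end of the source buffer when the range ends near a byte boundary):
#include <cstdint>
#include <cstddef>
#include <vector>

std::vector<uint8_t> read_bits(const uint8_t* src, size_t x, size_t n) {
    size_t byte = x / 8;   // index of the character holding the first bit
    unsigned off = x % 8;  // bit offset within that character
    size_t full = n / 8, rest = n % 8;
    std::vector<uint8_t> out;
    for (size_t i = 0; i < full; ++i, ++byte)
        // combine two adjacent source bytes, shifted into alignment
        out.push_back(uint8_t(src[byte] << off | src[byte + 1] >> (8 - off)));
    if (rest) {
        uint8_t last = uint8_t(src[byte] << off | src[byte + 1] >> (8 - off));
        out.push_back(last & uint8_t(0xFF << (8 - rest))); // keep the top `rest` bits
    }
    return out;
}
For the earlier example (x=2, n=3 on 1011 0011 0110 0000) this returns the single byte 1100 0000, and for x=10, n=18 it produces 00100110 10111001 01000000 as walked through above.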

Shift left/right adding zeroes/ones and dropping first bits

I've got to program a function that receives
a binary number like 10001, and
a decimal number that indicates how many shifts I should perform.
The problem is that if I use the C++ operator <<, the zeroes are pushed in from the right but the leading digits aren't dropped... For example,
shifLeftAddingZeroes(10001, 1)
returns 100010 instead of 00010, which is what I want.
I hope I've made myself clear =P
I assume you are storing that information in an int. Take into consideration that this number actually has more leading zeroes than what you see; ergo your number is most likely 16 bits, meaning 00000000 00010001. Maybe try AND-ing it with a number having as many 1s as the number of bits you want to keep after shifting? (Assuming you want to stick to bitwise operations.)
What you want is to bit-shift and then limit the number of output bits which can be active (hold a value of 1). One way to do this is to create a mask for the number of bits you want, then AND the bit-shifted value with that mask. Below is a code sample for doing that; just replace int_type with the type of value you're using -- or make it a template type.
int_type shiftLeftLimitingBitSize(int_type value, int numshift, int_type numbits = some_default) {
    // build a mask with `numbits` low bits set
    int_type mask = 0;
    for (unsigned int bit = 0; bit < numbits; bit++) {
        mask |= int_type(1) << bit;
    }
    return (value << numshift) & mask;
}
Your output for 10001,1 would now be shiftLeftLimitingBitSize(0b10001, 1, 5) == 0b00010.
Realize that unless your numbits is exactly the length of your integer type, you will always have excess 0 bits on the 'front' of your number.

C++: Bitwise AND

I am trying to understand how to use Bitwise AND to extract the values of individual bytes.
What I have is a 4-byte array and am casting the last 2 bytes into a single 2-byte value. Then I am trying to extract the original single-byte values from that 2-byte value. See the attachment for a screen shot of my code and values.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
How would I go about doing this with Bitwise AND?
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Your 2-byte integer is formed from the values 3 and 4 (since your pointer is to a[1]). As you have already seen in your tests, you can get the 3 by applying the mask 0xFF. Now, to get the 4 you need to remove the lower bits and shift the value down. In your example, by using the mask 0xFF00 you effectively remove the 3 from the 16-bit number, but you leave the 4 in the high byte of your 2-byte number, which is the value 1024 == 2^10: the 11th bit set, which is the third bit in the second byte (counting from the least significant).
You can shift that result 8 bits to the right to get your 4, or else you can ignore the mask altogether, since by just shifting to the right the lowest bits will disappear:
4 == ( x>>8 )
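A self-contained reconstruction of that scenario (the exact array contents and index are assumed, since the original code is only in a screenshot; values chosen so the two bytes are 3 and 4 as described):
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    unsigned char a[4] = {1, 2, 3, 4};
    uint16_t x;
    std::memcpy(&x, &a[2], 2);         // 2-byte value built from the bytes 3 and 4
    unsigned low  = x & 0xFF;          // one original byte
    unsigned high = (x >> 8) & 0xFF;   // the other: shift, then (optionally) mask
    std::printf("%u %u\n", low, high); // prints "3 4" on a little-endian machine
}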
More interesting results to test bitwise AND can be obtained by working with a single number:
int x = 7; // or char, for that matter:
(x & 0x1) == 1;
(x & (0x1<<1) ) == 2; // (x & 0x2)
(x & ~(0x2)) == 5;
You need to add some bit-shifting to convert the masked value from the upper byte to the lower byte.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Not sure where that "watch" table comes from or if there is more code involved, but it looks to me like the result is correct. Remember, one of them is a high byte and so the value is shifted << 8 places. On a little endian machine, the high byte would be the second one.

Efficient way of determining minimum field size required to store variances in user input

Sorry about the clumsy title; I couldn't find a better way of expressing what I'm trying to do.
I am getting an input from the user of multiple 32-bit integers. For example, the user may enter the following values (showing in hex for ease of explanation):
0x00001234
0x00005678
0x0000abcd
In this particular case, the first 2 bytes of each input are constant, and the last 2 bytes are variable. For efficiency purposes, I could store 0x0000 as a single constant and create a vector of uint16_t values to store the variable portion of the input (0x1234, 0x5678, 0xabcd).
Now let's say the user enters the following:
0x00000234
0x56780000
0x00001000
In this case I would need a vector of uint32_t values to store the variable portion of the input as each value affects different bytes.
My current thought is to do the following:
uint32_t myVal = 0;
myVal |= input1;
myVal |= input2;
// ...
And then at the end, find the distance between the first and last "toggled" (i.e. 1) bits in myVal. That distance will give me the required field size for the variable portion of all of the inputs.
However, this doesn't sound like it would scale well for a large number of user inputs. Any recommendations about an elegant and efficient way of determining this?
Update:
I simplified the problem in my above explanation.
Just to be clear, I am not doing this to save memory (I have better things to do than to try and conserve a few bytes and this isn't for optimization purposes).
In summary, component A provides component B in my system with values. Sometimes these values are 128-bit, but component B only supports 32-bit values.
If the variable portion of the 128-bit value can be expressed with a 32-bit value, I can accept it. Otherwise I will need to reject it with an error.
I'm not in a position to modify component B to allow 128-bit values, or modify component A to prevent its use of 128-bit values (there are hardware limitations here too).
Though I can't see a reason for all that... why not just compare the input with std::numeric_limits<uint16_t>::max()? If the input is larger then you need to use uint32_t.
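A minimal sketch of that comparison (the function name is mine):
#include <cstdint>
#include <limits>

bool fits_in_uint16(uint32_t input) {
    return input <= std::numeric_limits<uint16_t>::max();
}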
Answering your edit:
I suppose for better performance you should use hardware-specific low-level instructions. You could iterate over the 32-bit parts of the 128-bit input value, checking at each step whether subtracting the next part changes the running sum (it does exactly when that part is non-zero). If it does, you should reject this 128-bit value; otherwise you'll get the necessary result in the end. The sample follows:
uint32_t get_value(uint32_t v1, uint32_t v2, uint32_t v3, uint32_t v4)
{
    // temp - vN != temp exactly when vN != 0, i.e. when an upper word is in use
    uint32_t temp = v1;
    if ( temp - v2 != temp ) throw std::range_error("does not fit in 32 bits");
    temp += v2;
    if ( temp - v3 != temp ) throw std::range_error("does not fit in 32 bits");
    temp += v3;
    if ( temp - v4 != temp ) throw std::range_error("does not fit in 32 bits");
    temp += v4;
    return temp;
}
In this C++ example it may look silly, but I believe that in assembly this should process the input stream efficiently.
Store the first full 128-bit number you encounter, then push the low-order 32 bits of it onto a vector, and set bool reject_all = false. For each remaining number, if its high-order (128-32=96) bits differ from the first number's, set reject_all = true; otherwise push its low-order bits onto the vector. At the end of the loop, use reject_all to decide whether to use the vector of values; a sketch follows.
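A sketch of that loop, assuming each 128-bit value arrives as four 32-bit words (the struct and names are mine):
#include <cstdint>
#include <vector>

struct Value128 { uint32_t hi[3]; uint32_t lo; }; // upper 96 bits + lower 32 bits

// Returns false (reject all) if any value's high-order bits differ from the first's.
bool collect_low_words(const std::vector<Value128>& in, std::vector<uint32_t>& lows) {
    for (const Value128& v : in) {
        if (v.hi[0] != in[0].hi[0] || v.hi[1] != in[0].hi[1] || v.hi[2] != in[0].hi[2])
            return false; // reject_all
        lows.push_back(v.lo);
    }
    return true;
}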
The most efficient way to store a series of unsigned integers in the range [0, (2^32)-1] is by just using uint32_t. Jumping through hoops to save 2 bytes from user input is not worth your time--the user cannot possibly, in his lifetime, enter enough integers that your code would have to start compressing them. He or she would die of old age long before memory constraints became apparent on any modern system.
It looks like you have to come up with a cumulative bitmask -- which you can then look at to see whether you have trailing or leading constant bits. An algorithm that operates on each input will be required (making it an O(n) algorithm, where n is the number of values to inspect).
The algorithm would be similar to something like what you've already done:
unsigned long long bitmask = 0ull;
std::size_t count = val.size();
for (std::size_t i = 0; i < count; ++i)
    bitmask |= val[i];
You can then check to see how many bits/bytes leading/trailing can be made constant, and whether you're going to use the full 32 bits. If you have access to SSE instructions, you can vectorize this using OpenMP.
There's also a possible optimization by short-circuiting: if the distance between the first 1 bit and the last 1 bit is already greater than 32, you can stop.
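That span check could look like the following sketch (assuming C++20's <bit>; on older compilers __builtin_clzll and __builtin_ctzll play the same role):
#include <bit>
#include <cstdint>

bool span_exceeds_32(uint64_t bitmask) {
    if (bitmask == 0) return false;
    // distance from the highest set bit to the lowest set bit, inclusive
    int span = 64 - std::countl_zero(bitmask) - std::countr_zero(bitmask);
    return span > 32;
}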
For this algorithm to scale better, you're going to have to do it in parallel. Your friend would be vector processing (maybe using CUDA for Nvidia GPUs, or OpenCL if you're on the Mac or on platforms that already support OpenCL, or just OpenMP annotations).
Use
uint32_t ORVal = 0;
uint32_t ANDVal = 0xFFFFFFFF;
ORVal |= input1;
ANDVal &= input1;
ORVal |= input2;
ANDVal &= input2;
ORVal |= input3;
ANDVal &= input3; // etc.
// At end of input...
uint32_t mask = ORVal ^ ANDVal;
// bit positions set to 0 were constant, bit positions set to 1 changed
A bit position in ORVal will be 1 if at least one input had 1 in that position and 0 if ALL inputs had 0 in that position. A bit position in ANDVal will be 0 if at least one input had 0 in that bit position and 1 if ALL inputs had 1 in that position.
If a bit position was always 1 in the inputs, then that bit will be 1 in both ORVal and ANDVal.
If a bit position was always 0 in the inputs, then that bit will be 0 in both ORVal and ANDVal.
If there was a mix of 0 and 1 in a bit position then ORVal will be set to 1 and ANDVal set to 0, hence the XOR at the end gives the mask for bit positions that changed.
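Rolled into a loop, the whole computation is a few lines (a sketch; the vector-of-inputs interface is my own):
#include <cstdint>
#include <vector>

uint32_t changed_bits_mask(const std::vector<uint32_t>& inputs) {
    uint32_t ORVal  = 0;
    uint32_t ANDVal = 0xFFFFFFFFu;
    for (uint32_t v : inputs) {
        ORVal  |= v;
        ANDVal &= v;
    }
    return ORVal ^ ANDVal; // 1 = bit changed across inputs, 0 = bit constant
}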