Get signed integer from 2 16-bit signed bytes? - c++

So this sensor I have returns a signed value between -500 and 500 by returning two (high and low) signed bytes. How can I use these to figure out what the actual value is? I know I need to do 2's complement, but I'm not sure how. This is what I have now -
real_velocity = temp.values[0];
if (temp.values[1] != -1)
    real_velocity += temp.values[1];
// if high byte > 1, negative number - take 2's complement
if (temp.values[1] > 1) {
    real_velocity = ~real_velocity;
    real_velocity += 1;
}
But it just returns the negative of what would be a positive value. For instance, -200 comes back as bytes 255 (high) and 56 (low). Added together these are 311, but when I run the above code it tells me -311. Thank you for any help.

-200 in hex is 0xFF38,
you're getting two bytes 0xFF and 0x38,
converting these back to decimal you might recognise them
0xFF = 255,
0x38 = 56
Your sensor is not returning two signed bytes, but simply the high and low bytes of a signed 16-bit number.
So your result is
value = (highbyte << 8) + lowbyte
with value being a 16-bit signed variable.
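A minimal sketch of that reassembly (assumes 8-bit bytes and a fixed-width int16_t; the variable names are made up):
#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t high = 0xFF, low = 0x38;                  // the two sensor bytes
    std::int16_t value = std::int16_t((high << 8) | low); // reassemble the 16-bit number
    std::cout << value << std::endl;                       // prints -200 on a two's-complement machine
}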

Based on the example you gave, it appears that the value is already 2's complement. You just need to shift the high byte left 8 bits and OR the values together.
real_velocity = (short) ((temp.values[0] & 0xFF) | (temp.values[1] << 8)); // mask the low byte so a signed char can't sign-extend over the high byte

You can shift the bits and mask the values.
#include <iostream>

int main()
{
    unsigned char data[2]; // unsigned so 0xFF stores cleanly
    data[0] = 0xFF; // high
    data[1] = 56;   // low
    int value = 0;
    if (data[0] & 0x80)     // sign bit set
        value = 0xFFFF8000; // pre-fill the upper bits
    value |= ((data[0] & 0x7F) << 8) | data[1];
    std::cout << std::hex << value << std::endl;
    std::cout << std::dec << value << std::endl;
    std::cin.get();
}
Output:
ffffff38
-200

real_velocity = temp.values[0];         // high byte (note this answer's byte order)
real_velocity = real_velocity << 8;
real_velocity |= temp.values[1] & 0xFF; // mask so a signed low byte can't sign-extend
// And, assuming 32-bit integers
real_velocity <<= 16;
real_velocity >>= 16;                   // shifting back down sign-extends
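(Beyond the 32-bit assumption, this also relies on right-shifting a negative value being an arithmetic shift, which is implementation-defined, though near-universal on modern compilers.)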

For 8-bit bytes, first just convert to unsigned:
typedef unsigned char Byte;
unsigned const u = (Byte( temp.values[1] ) << 8) | Byte( temp.values[0] );
Then if that is greater than the upper range for 16-bit two's complement, subtract 2^16:
int const i = int(u >= (1u << 15)? u - (1u << 16) : u);
You could do tricks at the bit level, but I don't think there's any point in that.
The above assumes that CHAR_BIT == 8, that unsigned has more than 16 bits, and that the machine and the desired result are two's complement.
#include <iostream>
using namespace std;
int main()
{
    typedef unsigned char Byte;
    struct { char values[2]; } const temp = { 56, (char)255 }; // cast avoids a narrowing error
    unsigned const u = (Byte( temp.values[1] ) << 8) | Byte( temp.values[0] );
    int const i = int(u >= (1u << 15) ? u - (1u << 16) : u);
    cout << i << endl;
}

Related

C++ Bitshift 4 int8_t into a normal integer (32 bit)

I had already asked a question about how to get 4 int8_t values into a 32-bit int, and I was told that I have to cast the int8_t to uint8_t first to pack them into a 32-bit integer with bitshifting.
int8_t offsetX = -10;
int8_t offsetY = 120;
int8_t offsetZ = -60;
using U = std::uint8_t;
int toShader = (U(offsetX) << 24) | (U(offsetY) << 16) | (U(offsetZ) << 8) | (0 << 0);
std::cout << (int)(toShader >> 24) << " "<< (int)(toShader >> 16) << " " << (int)(toShader >> 8) << std::endl;
My Output is
-10 -2440 -624444
It's not what I expected, of course. Does anyone have a solution?
In the shader I want to unpack the values later, and that is only possible with a 32-bit integer because GLSL does not have smaller data types.
int offsetX = data[gl_InstanceID * 3 + 2] >> 24;
int offsetY = data[gl_InstanceID * 3 + 2] >> 16 ;
int offsetZ = data[gl_InstanceID * 3 + 2] >> 8 ;
What is written in the square brackets does not matter; it is about the correct shifting of the bits, or the casting after the bracket.
If any of the offsets is negative, then the shift results in undefined behaviour.
Solution: Convert the offsets to an unsigned type first.
However, this brings another potential problem: if you convert to unsigned, negative numbers will have very large values with set bits in the most significant bytes, and an OR with those bits will always produce 1s there, regardless of offsetX and offsetY. One solution is to convert into a small unsigned type (std::uint8_t); another is to mask the unused bytes. The former is probably simpler:
using U = std::uint8_t;
int third = U(offsetX) << 24u
| U(offsetY) << 16u
| U(offsetZ) << 8u
| 0u << 0u;
I think you're forgetting to mask the bits that you care about before shifting them.
Perhaps this is what you're looking for:
int32 offsetX = (data[gl_InstanceID * 3 + 2] & 0xFF000000) >> 24;
int32 offsetY = (data[gl_InstanceID * 3 + 2] & 0x00FF0000) >> 16 ;
int32 offsetZ = (data[gl_InstanceID * 3 + 2] & 0x0000FF00) >> 8 ;
if (offsetX & 0x80) offsetX |= 0xFFFFFF00;
if (offsetY & 0x80) offsetY |= 0xFFFFFF00;
if (offsetZ & 0x80) offsetZ |= 0xFFFFFF00;
Without the bit mask, the X part will end up in offsetY, and the X and Y part in offsetZ.
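A quick host-side check of that unpacking idea, mirrored in C++ (a sketch; packing is done in unsigned arithmetic to sidestep the shift problem discussed earlier, and a subtraction stands in for the |= sign-extension mask):
#include <cstdint>
#include <iostream>

int main()
{
    using U = std::uint32_t;
    std::int8_t x = -10, y = 120, z = -60;
    U packed = (U(std::uint8_t(x)) << 24) | (U(std::uint8_t(y)) << 16) | (U(std::uint8_t(z)) << 8);
    for (int shift : {24, 16, 8}) {
        std::int32_t b = (packed >> shift) & 0xFF;
        if (b & 0x80) b -= 256;   // sign-extend the byte
        std::cout << b << " ";    // prints -10 120 -60
    }
    std::cout << std::endl;
}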
On the CPU side you can use a union to avoid bit shifts, bit masking, and branches ...
int8_t x, y, z, w; // your 8-bit ints
int32_t i;         // your 32-bit int
union my_union     // just a helper union for the casting
{
    int8_t i8[4];
    int32_t i32;
} a;
// 4x8bit -> 32bit
a.i8[0] = x;
a.i8[1] = y;
a.i8[2] = z;
a.i8[3] = w;
i = a.i32;
// 32bit -> 4x8bit
a.i32 = i;
x = a.i8[0];
y = a.i8[1];
z = a.i8[2];
w = a.i8[3];
If you do not like unions, the same can be done with pointers...
Beware: on the GLSL side this is not possible (neither unions nor pointers), and you have to use bit shifts and masks as in the other answer...
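Strictly speaking, reading a union member other than the one last written is undefined behaviour in C++ (it is allowed in C); if that matters to you, std::memcpy expresses the same thing without the language-lawyer risk and typically compiles to the same code. A minimal sketch (the function name is made up):
#include <cstdint>
#include <cstring>

std::int32_t pack(std::int8_t x, std::int8_t y, std::int8_t z, std::int8_t w)
{
    const std::int8_t bytes[4] = { x, y, z, w };
    std::int32_t packed;
    std::memcpy(&packed, bytes, sizeof packed); // well-defined type punning
    return packed;                              // byte order matches the union version
}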

Difference between bitshifting mask vs unsigned int

For a project, I had to extract the individual 8-bit bytes of an unsigned int. I first tried bit-shifting the mask to find the numbers, but that didn't work, so I tried bit-shifting the value instead, and that worked.
What's the difference between these two? Why didn't the first one work?
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk(value & (0x00FF << (i * 8)));
    }
}
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk((value >> (i * 8)) & 0x00FF);
    }
}
Take the value 0xAABBCCDD as an example.
The expression value & (0xFF << (i * 8)) takes on the values:
0xAABBCCDD & 0x000000FF = 0x000000DD
0xAABBCCDD & 0x0000FF00 = 0x0000CC00
0xAABBCCDD & 0x00FF0000 = 0x00BB0000
0xAABBCCDD & 0xFF000000 = 0xAA000000
While the expression (value >> (i * 8)) & 0xFF takes on the values:
0xAABBCCDD & 0x000000FF = 0x000000DD
0x00AABBCC & 0x000000FF = 0x000000CC
0x0000AABB & 0x000000FF = 0x000000BB
0x000000AA & 0x000000FF = 0x000000AA
As you can see, the results are quite different after i = 0, because the first expression only "selects" 8 bits from value, while the second expression first shifts them down to the least significant byte.
Note that in the first case, the expression (0xFF << (i * 8)) shifts an int literal (0xFF) left. For i == 3 this shifts into the sign bit, which is undefined behavior for a signed type (before C++20). You should cast the literal to unsigned int to avoid that:
value & ((unsigned int)0xFF << (i * 8))
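A runnable sketch of the working version (the byte values are hypothetical):
#include <cstdio>

int main(void)
{
    unsigned int value = 0xAABBCCDDu;
    for (int i = 0; i < 4; i++)
        printf("%#x\n", (value >> (i * 8)) & 0xFFu); // prints 0xdd, 0xcc, 0xbb, 0xaa
}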
In this code:
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk(value & (0x00FF << (i * 8)));
    }
}
You are shifting the bits of 0x00FF itself, producing the new masks 0x00FF, 0xFF00, 0xFF0000, and 0xFF000000, and then masking value with each of those. The result contains only the 8 bits of value that you are interested in, but those 8 bits do not move position at all.
In this code:
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk((value >> (i * 8)) & 0x00FF);
    }
}
You are shifting the bits of value, thus moving those 8 bits that you want, and then you are masking the result with 0x00FF to extract those 8 bits.

how to create a mask for a given data type?

I have an integer whose size will be determined at run time. Now, I want to use this for masking based on its size.
For example, if the size of the int is 2 bytes, then the mask value is 0xFF; if the size is 4 bytes, then the mask value is 0xFFFF. How can I do so? Also, finally, I want to extract the most significant nibble from a number. How can I do that in a smart way?
// assuming you take a signed int
0xFFFFFFFF == -1 (on a 4-byte int machine)
0xFFFF == -1 (on a 2-byte int machine)
0xFF == -1 (on a 1-byte int machine, which doesn't exist anymore)
So at runtime, even if the size increases, so does your variable assigned to -1.
To test the MSB, with x being your number (CHAR_BIT comes from <limits.h>):
if (x & (1u << (sizeof(int) * CHAR_BIT - 1)))
    // MSB is 1
else
    // MSB is 0
#include <stdio.h>
#include <limits.h>
unsigned Most_Significant_nibble(int number){
    int numOfHalfBit = sizeof(number) * CHAR_BIT / 2;
    unsigned num = number, mask = (1u << numOfHalfBit) - 1;
    return (num >> numOfHalfBit) & mask;
}
int main(void){
    int n = 0x87654321; // size of int is 4
    unsigned msn = Most_Significant_nibble(n);
    printf("%#x\n", msn); // 0x8765
    return 0;
}

Extract n most significant non-zero bits from int in C++ without loops

I want to extract the n most significant bits from an integer in C++ and convert those n bits to an integer.
For example
int a=1200;
// its binary representation within 32 bit word-size is
// 00000000000000000000010010110000
Now I want to extract the 4 most significant non-zero bits from that representation, i.e. 1001
00000000000000000000010010110000
                     ^^^^
and convert them again to an integer (binary 1001 = 9 in decimal).
How is this possible with a simple C++ function, without loops?
Some processors have an instruction to count the leading binary zeros of an integer, and some compilers have intrinsics to let you use that instruction. For example, using GCC:
#include <stdint.h>

uint32_t significant_bits(uint32_t value, unsigned bits) {
    unsigned leading_zeros = __builtin_clz(value);
    unsigned highest_bit = 32 - leading_zeros;
    unsigned lowest_bit = highest_bit - bits;
    return value >> lowest_bit;
}
For simplicity, I left out checks that the requested number of bits are available. For Microsoft's compiler, the intrinsic is called __lzcnt.
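For the question's example, a quick use of that sketch (assuming it is compiled together with significant_bits above):
#include <iostream>

int main()
{
    std::cout << significant_bits(1200, 4) << std::endl; // 1200 >> 7 == 9
}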
If your compiler doesn't provide that intrinsic, and your processor doesn't have a suitable instruction, then one way to count the zeros quickly is with a binary search:
unsigned leading_zeros(uint32_t value) { // note: assumes value != 0
    unsigned count = 0;
    if ((value & 0xffff0000u) == 0) {
        count += 16;
        value <<= 16;
    }
    if ((value & 0xff000000u) == 0) {
        count += 8;
        value <<= 8;
    }
    if ((value & 0xf0000000u) == 0) {
        count += 4;
        value <<= 4;
    }
    if ((value & 0xc0000000u) == 0) {
        count += 2;
        value <<= 2;
    }
    if ((value & 0x80000000u) == 0) {
        count += 1;
    }
    return count;
}
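For example, leading_zeros(1200) returns 21: 1200 needs 11 bits, and 32 - 11 = 21.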
It's not fast, but (int)(log(x) / log(2)) + 1 will tell you the position of the most significant non-zero bit (beware floating-point rounding near exact powers of two). Finishing the algorithm from there is fairly straightforward.
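A sketch of finishing it that way (slow, and only safe where the rounding caveat above doesn't bite):
#include <cmath>
#include <iostream>

int main()
{
    unsigned int input = 1200;
    int highest_bit = (int)std::log2((double)input) + 1; // 1-indexed MSB position: 11
    std::cout << (input >> (highest_bit - 4)) << std::endl; // top 4 bits: prints 9
}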
This seems to work (done in C# with UInt32 and then ported, so apologies to Bjarne):
unsigned int input = 1200;
unsigned int most_significant_bits_to_get = 4;
// shift + or the msb over all the lower bits
unsigned int m1 = input | input >> 8 | input >> 16 | input >> 24;
unsigned int m2 = m1 | m1 >> 2 | m1 >> 4 | m1 >> 6;
unsigned int m3 = m2 | m2 >> 1;
unsigned int nbitsmask = m3 ^ m3 >> most_significant_bits_to_get;
unsigned int v = nbitsmask;
unsigned int c = 32; // c will be the number of zero bits on the right
v &= -v; // isolate the lowest set bit; unary minus on unsigned is well-defined
if (v>0) c--;
if ((v & 0x0000FFFF) >0) c -= 16;
if ((v & 0x00FF00FF) >0) c -= 8;
if ((v & 0x0F0F0F0F) >0 ) c -= 4;
if ((v & 0x33333333) >0) c -= 2;
if ((v & 0x55555555) >0) c -= 1;
unsigned int result = (input & nbitsmask) >> c;
I assumed you meant using only integer math.
I used some code from @OliCharlesworth's link; you could remove the conditionals too by using the look-up-table code for trailing zeroes there.

Check value of least significant bit (LSB) and most significant bit (MSB) in C/C++

I need to check the value of the least significant bit (LSB) and most significant bit (MSB) of an integer in C/C++. How would I do this?
//int value;
int LSB = value & 1;
Alternatively (which is not theoretically portable, but practically it is - see Steve's comment):
//int value;
int LSB = value % 2;
Details:
The second formula is simpler. The % operator is the remainder operator. A number's LSB is 1 iff it is odd, and 0 otherwise, so we check the remainder of dividing by 2. The logic of the first formula is this: the number 1 in binary looks like:
0000...0001
If you binary-AND this with an arbitrary number, all the bits of the result will be 0 except the last one, because 0 AND anything is 0. The last bit of the result will be 1 iff the last bit of your number was 1, because 1 & 1 == 1 and 1 & 0 == 0.
HTH.
You can do something like this:
#include <iostream>
int main(int argc, char **argv)
{
int a = 3;
std::cout << (a & 1) << std::endl;
return 0;
}
This way you AND your variable with the LSB, because
3: 011
1: 001
in 3-bit representation. So, applying the AND truth table:
AND
-----
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
You will be able to know if the LSB is 1 or not.
Edit: finding the MSB.
First of all, read about endianness to agree on what MSB means. In the following lines we assume big-endian notation.
To find the MSB, in the following snippet we apply a right shift repeatedly until the MSB ends up in the LSB position.
Consider the following code:
#include <iostream>
#include <limits.h>
int main(int argc, char **argv)
{
    unsigned int a = 128; // we want to find the MSB of this 32-bit unsigned int
    int MSB = 0;          // this variable will hold the MSB we're looking for
    // sizeof(unsigned int) = 4 (in bytes)
    // 1 byte = 8 bits
    // So 4 bytes are 4 * 8 = 32 bits
    // We have to perform a right shift 32 times to get the
    // MSB into the LSB position.
    for (int i = sizeof(unsigned int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this holds the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }
    // this prints out '0', because the 32-bit representation of
    // unsigned int 128 is:
    // 00000000000000000000000010000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
Printing MSB after the loop therefore gives 0.
If you change the value of a:
unsigned int a = UINT_MAX; // found in <limits.h>
MSB will be 1, because its 32-bit representation is:
UINT_MAX: 11111111111111111111111111111111
However, if you do the same thing with a signed integer, things will be different.
#include <iostream>
#include <limits.h>
int main(int argc, char **argv)
{
    int a = -128; // we want to find the MSB of this 32-bit signed int
    int MSB = 0;  // this variable will hold the MSB we're looking for
    // sizeof(int) = 4 (in bytes)
    // 1 byte = 8 bits
    // So 4 bytes are 4 * 8 = 32 bits
    // We have to perform a right shift 32 times to get the
    // MSB into the LSB position.
    for (int i = sizeof(int) * 8; i > 0; i--) {
        MSB = (a & 1); // in the last iteration this holds the MSB value
        a >>= 1;       // perform the 1-bit right shift
    }
    // this prints out '1', because the 32-bit two's-complement
    // representation of int -128 is:
    // 11111111111111111111111110000000
    std::cout << "MSB: " << MSB << std::endl;
    return 0;
}
Note that the MSB of a positive integer is always 0, while the MSB of a negative integer is always 1.
You can check INT_MAX 32-bit representation:
INT_MAX: 01111111111111111111111111111111
Now, why does the loop use sizeof()?
If you simply write the loop like this:
for (; a != 0; a >>= 1)
    MSB = a & 1;
you will always get 1, because the loop exits at the highest set bit (the condition is a != 0) and never considers the leading zero bits above it. For example, for 32-bit integers we have:
int 7 : 00000000000000000000000000000111
                                     ^ this would be your fake MSB,
                                       without considering the full size
                                       of the variable.
int 16: 00000000000000000000000000010000
                                   ^ fake MSB
int LSB = value & 1;
int MSB = (value >> (sizeof(value) * 8 - 1)) & 1;
Others have already mentioned:
int LSB = value & 1;
for getting the least significant bit. But there is a cheatier way to get the MSB than has been mentioned. If the value is a signed type already, just do:
int MSB = value < 0;
If it's an unsigned quantity, cast it to the signed type of the same size, e.g. if value was declared as unsigned, do:
int MSB = (int)value < 0;
Yes, officially, not portable, undefined behavior, whatever. But on every two's complement system and every compiler for them that I'm aware of, it happens to work; after all, the high bit is the sign bit, so if the signed form is negative, then the MSB is 1, if it's non-negative, the MSB is 0. So conveniently, a signed test for negative numbers is equivalent to retrieving the MSB.
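For example (a quick sketch; the out-of-range cast is implementation-defined, but gives the expected bit pattern on two's-complement machines):
#include <iostream>

int main()
{
    unsigned int value = 0x80000000u; // only the MSB is set
    std::cout << ((int)value < 0) << std::endl; // prints 1
}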
LSB is easy. Just x & 1.
The MSB is a bit trickier, as bytes may not be 8 bits, sizeof(int) may not be 4, and there might be padding bits to the right.
Also, with a signed integer, do you mean the sign bit or the most significant value bit?
If you mean the sign bit, life is easy. It's just x < 0.
If you mean the most significant value bit, then to be completely portable:
int rack = 1;
int mask = 1;
while (rack < INT_MAX)
{
    rack <<= 1;
    mask <<= 1;
    rack |= 1;
}
return x & mask;
That's a long-winded way of doing it. In reality
x & (1 << (sizeof(int) * CHAR_BIT - 2));
will be quite portable enough, and your ints won't have padding bits.
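A quick check of that last expression (a sketch; assumes a 32-bit two's-complement int with no padding):
#include <climits>
#include <cstdio>

int main(void)
{
    int x = INT_MAX; // 0111...1: every value bit set, including the most significant one
    printf("%d\n", (x & (1 << (sizeof(int) * CHAR_BIT - 2))) != 0); // prints 1
}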