I have an unsigned char, let's say 0x5E, which I want to split into two equal parts.
What are the required bit shifts to get this done? I did the following to split a hex value in an unsigned long into two parts:
unsigned int first_half = (my_long & 0xffffffff00000000) >> 32;
unsigned int second_half = my_long & 0x00000000ffffffff;
How do I go about doing it with the unsigned char? Does the 32 get replaced by 8 because it's a single byte?
The original code does >> 32 because it shifts half of the bits down. my_long was an unsigned long int, which has 64 bits, so half of that is 32.
A char is one byte, which is 8 bits, so half of that is 4: you would shift by 4 bits.
How do I merge two unsigned chars into a single unsigned short in C++?
The Most Significant Byte in the array is contained in array[0] and the Least Significant Byte is located at array[1] (big-endian).
(array[0] << 8) | array[1]
Note that an unsigned char is implicitly converted ("promoted") to int when you use it for any calculations. Therefore array[0] << 8 doesn't overflow. Also the result of this calculation is an int, so you may cast it back to unsigned short if your compiler issues a warning.
Set the short to the least significant byte, shift the most significant byte up by 8, and bitwise-OR it in to keep the lower half:
unsigned short s;
s = array[1];
s |= (unsigned short) array[0] << 8;
I originally had 2 WORDs (that's 4 bytes total), which I have stored in an unsigned int. How can I split this such that I have the 2 (left-most) bytes in one unsigned short variable and the other 2 bytes in another unsigned short variable?
I hope my question is clear, otherwise please tell me and I will add more details! :)
Example: I have this hexadecimal stored in unsigned int: 4f07aabb
How can I turn this into two unsigned shorts so one of them holds 4f07 and the other holds aabb?
If you are sure that unsigned int has at least 4 bytes on your target system (this is not guaranteed!), you can do:
unsigned short one = static_cast<unsigned short>(original >> (2 * 8));
unsigned short two = static_cast<unsigned short>(original % (1 << (2 * 8)));
This is only guaranteed to work if the original value indeed only contains a 4-byte value (possibly with padding zeroes in front). If you're not fond of bitshifting, you could also do
#include <cstdint> // uint32_t, uint16_t
#include <cstring> // std::memcpy

uint32_t original = 0x4f07aabb; // guarantee 32 bits
uint16_t parts[2];
std::memcpy(&parts[0], &original, sizeof(uint32_t));
unsigned short one = static_cast<unsigned short>(parts[0]);
unsigned short two = static_cast<unsigned short>(parts[1]);
This will yield the two values depending on the target system's endianness; on a little-endian architecture, the results are reversed. You can check endianness with C++20's std::endian::native.
I have two bytes read from a sensor over I2C, these are stored as unsigned char. I do know which is the most significant byte, and which is the least significant byte.
unsigned char yawMSB;
unsigned char yawLSB;
How would I go about converting these two bytes of data, into a single int?
I've had this implemented properly in C# using
BitConverter.ToInt16(new byte[] { yawLSB, yawMSB }, 0)
In a 16-bit integer, the top 8 bits are the most significant byte, and the low 8 bits are the least significant byte.
To make the most significant byte occupy the top bits, you need to shift its value up (by 8 bits), which is done with the left-shift bitwise operator <<.
Then to get the least significant byte you just add the low 8 bits using bitwise or |.
Put together it will be something like yawMSB << 8 | yawLSB.
I do know which is the most significant byte, and which is the least significant byte.
MSB means Most Significant byte.
LSB means Least Significant byte.
How would I go about converting these two bytes of data into a single float?
You can then build a float whose value is:
const float yaw = (yawMSB << 8) + yawLSB;
Note the parentheses: + binds tighter than <<, so the unparenthesized yawMSB << 8 + yawLSB would actually shift by 8 + yawLSB.
Actually, the value (yawMSB << 8) + yawLSB is probably on a scale defined by your implementation. If it is, and if it is a linear scale from 0 to MAX_YAW, you should define your value as:
const float yaw = float((yawMSB << 8) + yawLSB) / MAX_YAW; // gives yaw in [0.f, 1.f].
I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask; in this case, it clears the 6 highest bits of the byte.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, keeping just the low 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] combines the two bytes (personally I'd have used |, not +).
So; xAccl is now the first 10 bits, using big-endian ordering.
The > 511 is a sign check; essentially, it says "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit two's complement rules".
I came from this question where I wanted to write 2 integers to a single byte that were guaranteed to be between 0-15 (4 bits each).
Now if I close the file, and run a different program that reads....
for (int i = 0; i < 2; ++i)
{
    char byteToRead;
    file.seekg(i, std::ios::beg);
    file.read(&byteToRead, sizeof(char));
    bool correct = file.bad();
    unsigned int num1 = (byteToRead >> 4);
    unsigned int num2 = (byteToRead & 0x0F);
}
The issue is, sometimes this works, but other times the first number comes out negative and the second number is something like 10 or 9 all the time, and they are most certainly not the numbers I wrote!
So here, for example, the first two numbers work, but the next number does not. For example, the output of the read above would be:
At byte 0, num1 = 5 and num2 = 6
At byte 1, num1 = 4294967289 and num2 = 12
At byte 1, num1 should be 9. It seems the 12 writes fine but the 9 << 4 isn't working. The byteToWrite on my end is -100 (displayed as 'œ').
I checked out this question, which has a similar problem I think, but I feel like my endianness is right here.
For a signed type, the right-shift operator commonly preserves the value of the left-most bit: if the left-most bit is 0 before the shift, it is still 0 after the shift; if it is 1, it is still 1 after the shift. This preserves the value's sign.
In your case, you combine 9 (0b1001) with 12 (0b1100), so you write 0b10011100 (0x9C). Bit #7 is 1.
When byteToRead is right-shifted, you get 0b11111001 (0xF9), but it is implicitly converted to an int. The conversion from char to int also preserves the value's sign, so it produces 0xFFFFFFF9. Then that int is implicitly converted to an unsigned int, so num1 contains 0xFFFFFFF9, which is 4294967289.
There are two solutions:
cast byteToRead to an unsigned char when doing the right shift;
apply a mask to the shift's result to keep only the 4 bits you want.
The problem originates with byteToRead >> 4. In C and C++, arithmetic operations are performed in at least int precision. So the first thing that happens is that byteToRead is promoted to int.
These promotions are value-preserving. Your system has plain char as signed, i.e. having range -128 through to 127. Your char might have been initially -112 (bit pattern 10010000), and then after promotion to int it retains its value of -112 (bit pattern 11111...1110010000).
The right-shift of a negative value is implementation-defined but a common implementation is to do an "arithmetic shift", i.e. perform division by two; so you end up with the result of byteToRead >> 4 being -7 (bit pattern 11111....111001).
Converting -7 to unsigned int results in UINT_MAX - 6, which is 4294967289, because unsigned arithmetic is defined as wrapping around mod UINT_MAX+1.
To fix this you need to convert to unsigned before performing the arithmetic. You could cast (or alias) byteToRead to unsigned char, e.g.:
unsigned char byteToRead;
file.read( (char *)&byteToRead, 1 );