How to use high and low bytes? - c++

I am trying to represent 32768 using 2 bytes. For the high byte, do I use the same kind of values as the low byte and they get interpreted differently, or do I put the actual values? So would I put something like 32768 0, or 256 0? Or neither of those? Any help is appreciated.

In hexadecimal, your number is 0x8000, which splits into 0x80 (high byte) and 0x00 (low byte).
To get the low byte from the input, use low = input & 0xff, and to get the high byte, use high = (input >> 8) & 0xff.
Get the input back from the low and high bytes like so: input = low | (high << 8).
Make sure the integer types you use are big enough to store these numbers. On 16-bit systems, unsigned int/short or signed/unsigned long should be large enough.
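A minimal sketch of the round trip described above (the value and variable names are only illustrative):

#include <cstdio>

int main()
{
    unsigned int input = 32768;                  // 0x8000
    unsigned int low   = input & 0xff;           // 0x00 = 0
    unsigned int high  = (input >> 8) & 0xff;    // 0x80 = 128
    unsigned int back  = low | (high << 8);      // 0x8000 = 32768 again
    printf("high=%u low=%u back=%u\n", high, low, back);
}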

Bytes can only contain values from 0 to 255, inclusive. 32768 is 0x8000, so the high byte is 128 and the low byte is 0.

Try this function.
Pass your Hi_Byte and Lo_Byte to the function; it returns the value as a WORD.
WORD MAKE_WORD( const BYTE Byte_hi, const BYTE Byte_lo )
{
    return (( Byte_hi << 8 ) | ( Byte_lo & 0x00FF ));
}
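For the original question, a call such as MAKE_WORD( 0x80, 0x00 ) would return 0x8000, i.e. 32768 (assuming the Windows-style WORD/BYTE typedefs for 16-bit and 8-bit unsigned integers).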

Pointers can do this easily, are MUCH FASTER than shifts, and require no processor math.
Check this answer.
BUT:
If I understood your problem, you need values up to 32768 stored in 2 bytes, so you need 2 unsigned ints, or 1 unsigned long.
Just change int for long and char for int, and you're good to go.

32768 is 0x8000, so you would put 0x80 (128) in your high byte and 0 in your low byte.
That's assuming unsigned values, of course. 32768 isn't actually representable as a signed 16-bit value.

32768 is 0x8000; on a little-endian platform it is stored in memory as the bytes 00 80. The "high" (second in memory, in this case) byte contains 128, and the "low" one 0.

Related

c++ parse two bytes to int

I have two bytes read from a sensor over I2C; these are stored as unsigned char. I do know which is the most significant byte and which is the least significant byte.
unsigned char yawMSB;
unsigned char yawLSB;
How would I go about converting these two bytes of data into a single int?
I've had this implemented properly in C# using
BitConverter.ToInt16(new byte[] { yawLSB, yawMSB }, 0)
In a 16-bit integer, the top 8 bits are the most significant byte, and the low 8 bits are the least significant byte.
To make the most significant byte occupy the top bits, you need to shift its value up (by 8 bits), which is done with the left-shift bitwise operator <<.
Then to get the least significant byte you just add the low 8 bits using bitwise or |.
Put together it will be something like yawMSB << 8 | yawLSB.
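A minimal sketch of that expression, assuming the reading is a signed 16-bit value as the C# BitConverter.ToInt16 call implies (the variable names follow the question):

#include <cstdint>

int16_t combine(unsigned char yawMSB, unsigned char yawLSB)
{
    // Shift the most significant byte into the top 8 bits,
    // then merge in the least significant byte.
    return static_cast<int16_t>((yawMSB << 8) | yawLSB);
}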
I do know which is the most significant byte, and which is the least significant byte.
MSB means Most Significant byte.
LSB means Least Significant byte.
How would I go about converting these two bytes of data into a single float?
You can then build a float whose value is:
const float yaw = (yawMSB << 8) + yawLSB;
Actually, the value (yawMSB << 8) + yawLSB is probably on a scale defined by your implementation. If that is the case, and if it is a linear scale from 0 to MAX_YAW, you should define your value as:
const float yaw = float((yawMSB << 8) + yawLSB) / MAX_YAW; // gives yaw in [0.f, 1.f].

16-bit to 10-bit conversion code explanation

I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask; in this case it clears the 6 highest bits of the byte.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11 - so: takes 2 bits; * 256 is the same as << 8 - i.e. pushes those 2 bits into the 9th and 10th positions; adding data[0] to that combines the two bytes (personally I'd have used |, not +).
So: xAccl now holds the full 10-bit value, with data[0] as the low byte (little-endian byte order).
The > 511 seems to be a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit two's complement rules".
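A small self-contained sketch of that decode with a made-up raw reading, so the sign handling is visible (the byte values are hypothetical, not taken from the ADXL345 datasheet):

#include <cstdio>

int main()
{
    unsigned char data[2] = { 0x34, 0x03 };   // hypothetical raw bytes: 10-bit value 0x334 = 820
    int xAccl = ((data[1] & 0x03) * 256) + data[0];
    if (xAccl > 511) {                        // bit 9 set: negative in 10-bit two's complement
        xAccl -= 1024;
    }
    printf("%d\n", xAccl);                    // prints -204
}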

Printing integers as a set of 4 bytes arranged in little endian?

I have an array of 256 unsigned integers called frequencies[256] (one integer for each ASCII value). My goal is to read through an input and, for each character, increment the integer in the array that corresponds to it (for example the character 'A' will cause the frequencies[65] integer to increase by one); when the input is over I must output each integer as 4 characters in little-endian form.
So far I have made a loop that goes through the input and increments each corresponding integer in the array. But I am very confused about how to output each integer in little-endian form. I understand that each of the four bytes of each integer should be output as a character (for instance the unsigned integer 1 in little endian is "00000001 00000000 00000000 00000000", which I would want to output as the 4 ASCII characters that correspond to those bytes).
But how do I get at the binary representation of an unsigned integer in my code, and how would I go about chopping it up and rearranging it?
Thanks for the help.
For hardware portability, please use the following solution:
int freqs[256];

for (int i = 0; i < 256; ++i)
    printf("%02x %02x %02x %02x\n", (freqs[i] >>  0) & 0xFF,
                                    (freqs[i] >>  8) & 0xFF,
                                    (freqs[i] >> 16) & 0xFF,
                                    (freqs[i] >> 24) & 0xFF);
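If the goal is raw characters rather than hex text, the same shift-and-mask idea works with fputc; a minimal sketch (the function name is just for illustration):

#include <cstdio>

// Emit one counter as 4 raw bytes, least significant byte first,
// independent of the host's endianness.
void write_little_endian(unsigned int value, FILE *out)
{
    for (int shift = 0; shift < 32; shift += 8)
        fputc((value >> shift) & 0xFF, out);
}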
You can use memcpy, which copies a block of memory.
char tab[4];
memcpy(tab, frequencies+i, sizeof(int));
now, tab[0], tab[1], etc. will be your characters.
A program to swap from big to little endian: Little Endian - Big Endian Problem.
To understand if your system is little or big endian: https://stackoverflow.com/a/1024954/2436175.
Transform your chars/integers into a set of printable bits: https://stackoverflow.com/a/7349767/2436175
It's not really clear what you mean by "little endian" here. Integers don't have endianness per se; endianness only comes into play when you cut them up into smaller pieces. So which smaller pieces do you mean: bytes or characters? If characters, just convert in the normal way, and reverse the generated string. If bytes (or any other smaller piece), each individual byte can be represented as a function of the int: i & 0xFF calculates the low-order byte, (i >> 8) & 0xFF the next lowest, and so forth. (If the bytes aren't 8 bits, then change the shift value and the mask correspondingly.)
And with regards to your second paragraph: a single byte of an int doesn't necessarily correspond to a character, regardless of the encoding. For the four bytes you show, for example, none of them corresponds to a character in any of the usual encodings.
With regards to the last paragraph: to get the binary representation of an unsigned integer, use the same algorithm that you would use for any representation:
#include <algorithm>
#include <cassert>
#include <string>

std::string
asText( unsigned int value, int base, int minDigits = 1 )
{
    static std::string digits( "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" );
    assert( base >= 2 && base <= static_cast<int>( digits.size() ) );
    std::string results;
    while ( value != 0 || minDigits > 0 ) {
        results += digits[ value % base ];
        value /= base;
        -- minDigits;
    }
    // results is now little endian; reverse it to get the normal
    // big-endian (most significant digit first) presentation.
    std::reverse( results.begin(), results.end() );
    return results;
}
Called with base equal to 2, this will give you your binary representation.
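A quick usage sketch of the function above (with <iostream> included, the return values print as shown in the comments):

std::cout << asText( 1, 2, 8 ) << '\n';      // 00000001
std::cout << asText( 32768, 16 ) << '\n';    // 8000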

Shifting syntax error

I have a byte array:
byte data[2];
I want to keep the 7 least significant bits from the first and the 3 most significant bits from the second.
I do this:
unsigned int the=((data[0]<<8 | data[1])<<1)>>6;
Can you give me a hint why this does not work?
If I do it in different lines it works fine.
Can you give me a hint why this does not work?
Hint:
You have two bytes and want to preserve the 7 least significant bits from the first and the 3 most significant bits from the second:
data[0]: -xxxxxxx data[1]: xxx-----
-'s represent bits to remove, x's represent bits to preserve.
After this
(data[0]<<8 | data[1])<<1
you have:
the: 00000000 0000000- xxxxxxxx xx-----0
Then you make >>6 and result is:
the: 00000000 00000000 00000-xx xxxxxxxx
See, you did not remove the high bit from data[0].
Keep the 7 least significant bits from the first and the 3 most significant bits from the second.
Assuming the 10 bits to be preserved should be the LSB of the unsigned int value, and should be contiguous, and that the 3 bits should be the LSB of the result, this should do the job:
unsigned int value = ((data[0] & 0x7F) << 3) | ((data[1] & 0xE0) >> 5);
You might not need all the masking operands; it depends in part on the definition of byte (probably unsigned char, or perhaps plain char on a machine where char is unsigned), but what's written should work anywhere (16-bit, 32-bit or 64-bit int; signed or unsigned 8-bit (or 16-bit, or 32-bit, or 64-bit) values for byte).
Your code does not remove the high bit from data[0] at any point — unless, perhaps, you're on a platform where unsigned int is a 16-bit value, but if that's the case, it is unusual enough these days to warrant a comment.
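A quick check of that expression with made-up byte values (0xAB and 0xCD are hypothetical, chosen only so both fields are visible):

#include <cstdio>

int main()
{
    unsigned char data[2] = { 0xAB, 0xCD };
    // Low 7 bits of 0xAB are 0101011; top 3 bits of 0xCD are 110,
    // so the packed 10-bit result should be 0101011110 = 0x15E.
    unsigned int value = ((data[0] & 0x7F) << 3) | ((data[1] & 0xE0) >> 5);
    printf("0x%X\n", value);    // prints 0x15E
}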

How is a pipe reading with a size of 4 bytes into a 4 byte int returning more data?

Reading from a pipe:
unsigned int sample_in = 0; // 4 bytes - 32 bits, right?
unsigned int len = sizeof(sample_in); // = 4 in debugger
while (len > 0)
{
    if (0 == ReadFile(hRead,
                      &sample_in,
                      sizeof(sample_in),
                      &bytesRead,
                      0))
    {
        printf("ReadFile failed\n");
    }
    len -= bytesRead; // bytesRead always = 4, so far
}
In the debugger, first iteration through:
sample_in = 536739282 //36 bits?
How is this possible if sample_in is an unsigned int? I think I'm missing something very basic; go easy on me!
Thanks
Judging from your comment that says //36 bits?, I suspect that you're expecting the data to be sent in a BCD-style format: in other words, each decimal digit stored in four bits, two digits per byte. That would waste space, however: each digit would use four bits, but the values 10 to 15 would never occur.
In fact, integers are represented in binary internally, so a 32-bit number can hold 2^32 different values. For an unsigned int that means a maximum of 4,294,967,295, which is rather larger than the number you saw in sample_in.
536739282 is well within the range of an unsigned 4-byte integer, whose maximum is upwards of 4 billion.
536,739,282 will easily fit in an unsigned int and in 32 bits. The cap on an unsigned int is 4,294,967,295.
unsigned int, your 4 byte unsigned integer, allows for values from 0 to 4,294,967,295. This will easily fit your value of 536,739,282. (This would, in fact, even fit in a standard signed int.)
For details on allowable ranges, see MSDN's Data Type Ranges page for C++.
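A minimal sketch confirming the value fits, and showing its four bytes (assuming the usual 32-bit unsigned int; the hex breakdown is only illustrative):

#include <cstdio>
#include <climits>

int main()
{
    unsigned int sample_in = 536739282u;                 // 0x1FFDFDD2
    printf("%u of max %u\n", sample_in, UINT_MAX);       // 536739282 of max 4294967295
    printf("%02x %02x %02x %02x\n",                      // low byte first: d2 fd fd 1f
           sample_in & 0xFF, (sample_in >> 8) & 0xFF,
           (sample_in >> 16) & 0xFF, (sample_in >> 24) & 0xFF);
}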