I'm trying to convert a byte array to an integer:
QByteArray b = QByteArray::fromHex("00008000");
quint32 result = b[3];
result += b[2] << 8;
result += b[1] << 16;
result += b[0] << 24;
but I'm getting 4294934528 instead of 32768. What is the problem here?
QByteArray is an array of chars. Apparently, char on your platform is signed and 8-bit wide. Thus, your problem can be distilled to:
char c = 0x80;
quint32 result = c << 8;
The standard mandates that:
N4606 § 4.8 [conv.integral] / 3
If the destination type is signed, the value is unchanged if it can be
represented in the destination type; otherwise, the value is
implementation-defined.
In this case (as usual on 2's complement systems), 0x80 is mapped to std::numeric_limits<char>::min() == -128, which is only logical because they share the same underlying bit pattern.
Now, -128 << 8 is defined as -128 * 2^8, which is -32768.
Finally, the conversion from -32768 to a 32-bit unsigned integer is well defined and yields 4294934528.
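A minimal sketch of a fix, assuming the big-endian layout from the question (toUint32BE is a hypothetical helper name): converting each byte through the unsigned quint8 first makes sign extension impossible.

#include <QByteArray>

quint32 toUint32BE(const QByteArray &b)
{
    // quint8 forces each char into an unsigned byte before widening.
    return (quint32(quint8(b[0])) << 24)
         | (quint32(quint8(b[1])) << 16)
         | (quint32(quint8(b[2])) <<  8)
         |  quint32(quint8(b[3]));
}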
My actual concern is about this:
The left-shift bit operation is used to multiply values of integer variables quickly.
But an integer variable has a defined range of values it can store, which follows from the number of bytes reserved for it.
Depending on whether the system is 16-bit or 32-bit, it reserves either 2 or 4 bytes, which limits the available integers to the range
-32,768 to 32,767 [for signed int] (2 bytes), or
0 to 65,535 [for unsigned int] (2 bytes) on 16-bit
OR
-2,147,483,648 to 2,147,483,647 [for signed int] (4 bytes), or
0 to 4,294,967,295 [for unsigned int] (4 bytes) on 32-bit
My thought is that it shouldn't be possible to multiply a value beyond exactly half of the maximum integer of the corresponding range.
But what happens to the value if you keep applying the bitwise operation after it has passed half of the maximum int value?
Is there an arithmetic pattern which will be applied to it?
One example (in case of 32-bit system):
unsigned int redfox_1 = 2147483647;
unsigned int redfox_2;
redfox_2 = redfox_1 << 1;
/* Which value has redfox_2 now? */
redfox_2 = redfox_1 << 2;
/* Which value has redfox_2 now? */
redfox_2 = redfox_1 << 3;
/* Which value has redfox_2 now? */
/* And so on and on */
/* Is there an arithmetic pattern that will be applied to the value of redfox_2 now? */
The value stored inside redfox_2 shouldn't be able to go over 2,147,483,647, because its datatype is unsigned int, which can only handle integers up to 4,294,967,295.
What will happen now with the value of redfox_2?
And is there an arithmetic pattern in what will happen to the value of redfox_2?
I hope you can understand what I mean.
Thank you very much for any answers.
Per the C 2018 standard, 6.5.7 4:
The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
So, for unsigned integer types, the bits are merely shifted left, and vacated bit positions are filled with zeroes. For signed integer types, the consequences of overflow are not defined by the C standard.
Many C implementations will, in signed shifts, slavishly shift the bits, including shifting value bits into the sign bit, resulting in various positive or negative values that a naïve programmer might not expect. However, since the behavior is not defined by the C standard, a C implementation could also:
Clamp the result at INT_MAX or INT_MIN (for int, or the corresponding maxima for the particular type).
Shift the value bits without affecting the sign bit.
Generate a trap.
Transform the program, when the undefined shift is recognized during compilation and optimization, in arbitrary ways, such as removing the entire code path that performs the shift.
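As a quick sanity check of the unsigned rule quoted above, one shift step can be verified by hand; this is a minimal sketch assuming a platform where unsigned int is 32 bits wide:

#include <cassert>
#include <cstdint>

int main()
{
    // 2147483647 * 4 = 8589934588, which reduced modulo 2^32
    // (= 4294967296) gives 8589934588 - 4294967296 = 4294967292.
    std::uint32_t redfox = 2147483647u;
    assert((redfox << 2) == 4294967292u);
}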
If you really want to see the pattern, then just write a program that prints it:
#include <iostream>
#include <ios>
#include <bitset>

int main()
{
    unsigned int redfox = 2147483647;
    std::bitset<32> b;
    for (int i = 0; i < 32; ++i)
    {
        redfox = redfox << 1;
        b = redfox;
        std::cout << std::dec << redfox << ", " << std::hex << redfox << ", " << b << std::endl;
    }
}
This produces:
4294967294, fffffffe, 11111111111111111111111111111110
4294967292, fffffffc, 11111111111111111111111111111100
4294967288, fffffff8, 11111111111111111111111111111000
4294967280, fffffff0, 11111111111111111111111111110000
4294967264, ffffffe0, 11111111111111111111111111100000
4294967232, ffffffc0, 11111111111111111111111111000000
4294967168, ffffff80, 11111111111111111111111110000000
4294967040, ffffff00, 11111111111111111111111100000000
4294966784, fffffe00, 11111111111111111111111000000000
4294966272, fffffc00, 11111111111111111111110000000000
4294965248, fffff800, 11111111111111111111100000000000
4294963200, fffff000, 11111111111111111111000000000000
4294959104, ffffe000, 11111111111111111110000000000000
4294950912, ffffc000, 11111111111111111100000000000000
4294934528, ffff8000, 11111111111111111000000000000000
4294901760, ffff0000, 11111111111111110000000000000000
4294836224, fffe0000, 11111111111111100000000000000000
4294705152, fffc0000, 11111111111111000000000000000000
4294443008, fff80000, 11111111111110000000000000000000
4293918720, fff00000, 11111111111100000000000000000000
4292870144, ffe00000, 11111111111000000000000000000000
4290772992, ffc00000, 11111111110000000000000000000000
4286578688, ff800000, 11111111100000000000000000000000
4278190080, ff000000, 11111111000000000000000000000000
4261412864, fe000000, 11111110000000000000000000000000
4227858432, fc000000, 11111100000000000000000000000000
4160749568, f8000000, 11111000000000000000000000000000
4026531840, f0000000, 11110000000000000000000000000000
3758096384, e0000000, 11100000000000000000000000000000
3221225472, c0000000, 11000000000000000000000000000000
2147483648, 80000000, 10000000000000000000000000000000
0, 0, 00000000000000000000000000000000
I am developing C++ libraries for the Arduino 2560 Mega and I have come across an interesting bug.
uint8_t resolution = 15;
uint32_t numDiscreteLevels = (1 << resolution); //yields a value of 0xFFFF8000
uint32_t numDiscreteLevels = ((uint32_t)1 << resolution); //yields 0x8000 (correct value)
It seems that in the first line, sign bits are padded onto the value before it is assigned to the variable. According to promotion rules I believe the 1 should be cast to an unsigned integer. But even then, I thought sign padding only occurs when you shift right.
On the AVR architecture, an int is 16 bits -- not 32! This means that all numbers, including integer constants, are treated as int16_t unless otherwise specified.
This means that 1 << 15 is (int16_t) 0x8000, not (int32_t) 0x00008000 as it would be on a 32-bit platform. Since this is a signed value and it has its high bit set, it's negative (specifically, -32768), and sign-extending it to a uint32_t gives 0xffff8000.
You could provide the shifted value as unsigned directly, which should then behave as expected:
uint8_t resolution = 15;
uint32_t numDiscreteLevels = 1u << resolution;
1u << 15 is 0x8000u, whereas 1 << 15 as a 16-bit value is -32768.
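For completeness, a variant not shown above: a UL suffix makes the literal itself 32 bits wide, which also avoids the 16-bit shift entirely (a sketch assuming the AVR target from the question, where long is 32 bits):

uint8_t resolution = 15;
uint32_t numDiscreteLevels = 1UL << resolution; // 1 is now unsigned long, so this yields 0x8000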
I'm trying to merge a char* array into a uint16_t. This is my code so far:
char* arr = new char[2]{0};
arr[0] = 0x0; // Binary: 00000000
arr[1] = 0xBE; // Binary: 10111110
uint16_t merged = (arr[0] << 8) + arr[1];
cout << merged << " As Bitset: " << bitset<16>(merged) << endl;
I was expecting merged to be 0xBE, or in binary 00000000 10111110.
But the output of the application is:
65470 As Bitset: 1111111110111110
In the following descriptions I'm reading bits from left to right.
So arr[1] is at the right position, which is the last 8 bits.
The first 8 bits however are set to 1, which they should not be.
Also, if I change the values to
arr[0] = 0x1; // Binary: 00000001
arr[1] = 0xBE; // Binary: 10111110
The output is:
0000000010111110
Again, arr[1] is at the right position. But now the first 8 bits are 0, whereas the last one of the first 8 should be 1.
Basically, what I want to do is append arr[1] to arr[0] and interpret the new number as a whole.
Perhaps char is a signed type in your case, and 0xBE is interpreted as a negative signed value (-66 in the likely case of two's complement).
When promoted to int in the arithmetic, it is sign-extended, hence the leading ones. (Left-shifting a negative char would, in addition, be undefined behavior according to the standard.)
3.9.1 Fundamental types
It is implementation-defined whether a char object can hold negative values.
5.8 Shift operators
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the corresponding unsigned type of the result type, then that value, converted to the result type, is the resulting value; otherwise, the behavior is undefined.
You need to assign to the wider type before shifting, otherwise you're shifting away† your high bits before they ever even hit the only variable here that's big enough to hold them.
uint16_t merged = arr[0];
merged <<= 8;
merged += arr[1];
Or, arguably:
const uint16_t merged = ((uint16_t)arr[0] << 8) + arr[1];
You may also want to consider converting through unsigned char first, to avoid oddities with the high bit set; see the sketch after the footnote. Try out a few different values in your unit test suite and see what happens.
† Well, your program has undefined behaviour from this out-of-range shift, so who knows what might happen!
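For instance, a minimal sketch of the unsigned-char route, assuming the two-byte arr from the question (merge_be is a hypothetical helper name):

#include <cstdint>

std::uint16_t merge_be(const char* arr)
{
    // Converting through unsigned char first means the high bit of each
    // byte cannot sign-extend during the promotion to int.
    const unsigned char hi = static_cast<unsigned char>(arr[0]);
    const unsigned char lo = static_cast<unsigned char>(arr[1]);
    return static_cast<std::uint16_t>((hi << 8) | lo);
}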
uint32_t a = 0xFF << 8;
uint32_t b = 0xFF;
uint32_t c = b << 8;
I'm compiling for the Uno (1.0.x and 1.5) and it would seem obvious that a and c should be the same value, but they are not... at least not when running on the target. I compile the same code on the host and have no issues.
Right shift works fine; left shift only works when I'm shifting a variable rather than a constant.
Can anyone confirm this?
I'm using Visual Micro with VS2013. Compiling with either 1.0.x or 1.5 Arduino results in the same failure.
EDIT:
On the target:
A = 0xFFFFFF00
C = 0x0000FF00
The problem is related to the signed/unsigned implicit cast.
With uint32_t a = 0xFF << 8; the compiler does the following:
0xFF is declared; it is a signed char;
there is a << operation, so that variable is converted to int. Since it was a signed char (and so its value was -1), it is padded with 1s to preserve the sign, so the variable is 0xFFFFFFFF;
it is shifted, so a = 0xFFFFFF00.
NOTE: this is slightly wrong, see below for the "more correct" version
If you want to reproduce the same behaviour, try this code:
uint32_t a = 0xFF << 8;
uint32_t b = (signed char)0xFF;
uint32_t c = b << 8;
Serial.println(a, HEX);
Serial.println(b, HEX);
Serial.println(c, HEX);
The result is
FFFFFF00
FFFFFFFF
FFFFFF00
Or, in the other way, if you write
uint32_t a = (unsigned)0xFF << 8;
you get that a = 0x0000FF00.
There are just two weird things with the compiler:
uint32_t a = (unsigned char)0xFF << 8; returns a = 0xFFFFFF00
uint32_t a = 0x000000FF << 8; returns a = 0xFFFFFF00 too.
Maybe it's a wrong cast in the compiler....
EDIT:
As phuclv pointed out, the above explanation is slightly wrong. The correct explanation is that, with uint32_t a = 0xFF << 8;, the compiler does these operations:
0xFF is declared; it is an int;
there is a << operation, and thus this becomes 0xFF00; since int is 16 bits wide here, this value is negative;
it is then promoted to uint32_t. Since it was negative, 1s are prepended, resulting in 0xFFFFFF00.
The difference from the above explanation is that if you write uint32_t a = 0xFF << 7; you get 0x7F80 rather than 0xFFFFFF80.
This also explains the two "weird" things I wrote in the end of the previous answer.
For reference, the thread linked in the comment has some more explanations of how the compiler interprets literals. In particular, this answer there includes a table of the types the compiler assigns to literals. In this case (no suffix, hexadecimal value) the compiler assigns the first of the following types that fits the value:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
This leads to some more considerations:
uint32_t a = 0x7FFF << 8; this means that the literal is interpreted as a signed integer; the promotion to the bigger integer extends the sign, and so the result is 0xFFFFFF00
uint32_t b = 0xFFFF << 8; the literal in this case is interpreted as an unsigned integer. The result of the promotion to the 32-bit integer is therefore 0x0000FF00
The most important thing here is that in Arduino int is a 16-bit type. That'll explain everything
For uint32_t a = 0xFF << 8: 0xFF is of type int¹. 0xFF << 8 results in 0xFF00, which is a signed negative value in a 16-bit int². When assigning that int value to a uint32_t variable it'll be sign-extended³ when upcasting, thus the result becomes 0xFFFFFF00U.
For the following lines
uint32_t b = 0xFF;
uint32_t c = b << 8;
0xFF is positive in 16-bit int, therefore b also contains 0xFF. Then shifting it left 8 bits results in 0x0000FF00, because b << 8 is an uint32_t expression. It's wider than int so there's no promotion to int happening here
Similarly, with uint32_t a = (unsigned)0xFF << 8 the output is 0x0000FF00, because the positive 0xFF when converted to unsigned int is still positive. Upcasting unsigned int to uint32_t does a zero extension, but the sign bit is already zero, so even if you do int32_t b = 0xFF; uint32_t c = b << 8 the high bits are still zero. The same goes for the "weird" uint32_t a = 0x000000FF << 8. Instead of (unsigned)0xFF you can just use the exact equivalent (but shorter) 0xFFU.
OTOH if you declare b as uint8_t b = 0xFF or int8_t b = 0xFF then things will be different: integer promotion occurs and the result will be similar to the first line (0xFFFFFF00U). And if you cast 0xFF to signed char like this
uint32_t b = (signed char)0xFF;
uint32_t c = b << 8;
then upon promoting to int it'll be sign-extended to 0xFFFF. Similarly casting it to int32_t or uint32_t will result in a sign-extension from signed char to the 32-bit wide value 0xFFFFFFFF
If you cast to unsigned char, like in uint32_t a = (unsigned char)0xFF << 8;, instead, then the (unsigned char)0xFF will be promoted to int using zero extension⁴; therefore the result will be exactly the same as uint32_t a = 0xFF << 8;
In summary: when in doubt, consult the standard. The compiler rarely lies to you.
¹ Type of integer literals not int by default?
The type of an integer constant is the first of the corresponding list in which its value can be represented.
Suffix   Decimal Constant          Octal or Hexadecimal Constant
-----------------------------------------------------------------
none     int                       int
         long int                  unsigned int
         long long int             long int
                                   unsigned long int
                                   long long int
                                   unsigned long long int
² Strictly speaking, shifting into the sign bit like that is undefined behavior:
1 << 31 produces the error, "The result of the '<<' expression is undefined"
Defining (1 << 31) or using 0x80000000? Result is different
³ The rule is to add UINT_MAX + 1:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Signed to unsigned conversion in C - is it always safe?
⁴ A cast will always preserve the input value if the value fits in the target type, so casting a signed type to a wider signed type will be done by sign-extension, and casting an unsigned type to a wider type will be done by zero-extension.
[Credit goes to Mats Petersson]
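To make footnote 4 concrete, here is a small illustration (my example, not from the answer above):

#include <cstdint>

int8_t   s  = -1;   // bit pattern 0xFF
uint8_t  u  = 0xFF; // same bit pattern 0xFF
int32_t  ws = s;    // value preserved: -1, i.e. sign-extended to 0xFFFFFFFF
uint32_t wu = u;    // value preserved: 255, i.e. zero-extended to 0x000000FF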
Using a cast operator to force the compiler to treat the 0xFF as a uint32_t addresses the issue. It seems the Arduino cross-compiler treats constants a little differently, since I've never had to cast before a shift.
Thanks!
I have an std::string that contains the response of a server. After parsing the string a bit, I come across a short. The short is big-endian and is stored in the string accordingly:
raw[0] == 0xa5;
raw[1] == 0x69;
I know this because
file << raw[0] << std::endl << raw[1];
when viewed as hex, results in "0xa5 0x0a 0x69".
I combine them into a short like this (inspired by https://stackoverflow.com/a/300837/1318909) and then write it to a file:
short x = (raw[1] << 8) | raw[0];
file << std::to_string(x);
expected result? 27045 (0x69a5).
actual result? -91 (0xffa5) <-- overflow!
Why is this? I tested and it worked fine with a value of 2402. I also did some additional testing and it works with
short x = (raw[1] << 8) | 0xa5;
but not with
short x = (0x69 << 8) | raw[0];
char may (or may not) be signed, and it looks like it is on your platform. That means that 0xa5 is sign-extended to 0xffa5 before you OR in the upper byte.
Try casting each byte to unsigned char before doing your bit manipulation.
raw[0] is an object of type char, which happens to be an 8-bit signed integer type on your platform. When you stuff the value 0xA5 into such a signed char object, it actually acquires the value -91. When raw[0] is used as an operand of the | operator, it is subjected to the usual arithmetic conversions, which convert it to type int with value -91 (0xFFFFFFA5). Those FF bits in the higher bytes are what cause the observed result (after truncation to short, 0xFFA5).
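A minimal sketch of a fix, assuming raw still holds the two bytes from the question (merge is a hypothetical helper name):

#include <cstdint>
#include <string>

short merge(const std::string& raw)
{
    // Cast each byte to unsigned char before combining, so 0xa5 stays
    // 0xa5 instead of sign-extending to 0xffffffa5 during promotion.
    const unsigned char lo = static_cast<unsigned char>(raw[0]); // 0xa5
    const unsigned char hi = static_cast<unsigned char>(raw[1]); // 0x69
    return static_cast<short>((hi << 8) | lo); // 0x69a5 == 27045
}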