I have a question about this snippet of code:
uint32_t c = 1 << 31;
uint64_t d = 1 << 31;
cout << "c: " << std::bitset<64>(c) << endl;
cout << "d: " << std::bitset<64>(d) << endl;
cout << (c == d ? "equal" : "not equal") << endl;
The result is:
c: 0000000000000000000000000000000010000000000000000000000000000000
d: 1111111111111111111111111111111110000000000000000000000000000000
not equal
Yes, I know that the solution for 'd' is to use '1ULL'. But I cannot understand why this happens when the shift is of 31 bits. I read somewhere that it is safe to shift up to size-1 bits, so if I write the instruction without the 'ULL' and the literal '1' is 32 bits long, then it should be safe to shift it 31 bits, right?
What am I missing here?
The problem is that the expression you shift left, namely the constant 1, is a signed int. Shifting a 1 into the sign bit of a 32-bit int is undefined behavior before C++14 and in practice yields INT_MIN, a negative value. Converting that negative int to uint64_t then sign-extends it, causing the result that you see.
Adding the suffix U to 1 fixes the problem:
uint64_t d = 1U << 31;
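To see the difference side by side, here is a minimal sketch (assuming a typical two's-complement platform with a 32-bit int):
#include <cstdint>
#include <iostream>

int main()
{
    // 1 << 31 is computed as a 32-bit signed int (typically INT_MIN here),
    // so the conversion to uint64_t sign-extends the upper 32 bits
    uint64_t d_signed = 1 << 31;
    // 1U << 31 is computed as unsigned int, so the conversion zero-extends
    uint64_t d_unsigned = 1U << 31;
    std::cout << std::hex
              << "1  << 31: " << d_signed << '\n'    // ffffffff80000000
              << "1U << 31: " << d_unsigned << '\n'; // 80000000
}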
Please, could somebody explain what's happening under the hood there?
The example runs on an Intel machine. Would the behavior be the same on other architectures?
Actually, I have a hardware counter which overruns every now and then, and I have to make sure that the intervals are always computed correctly. I thought that integer arithmetic would always do the trick, but when there is a sign change, binary subtraction yields an overflow bit which appears to be interpreted as the sign.
Do I really have to handle the sign by myself or is there a more elegant way to compute the interval regardless of the hardware or the implementation?
std::cout << "\nTest integer arithmetics\n";
int8_t iFirst = -2;
int8_t iSecond = 2;
int8_t iResult = iSecond - iFirst;
std::cout << "\n" << std::to_string(iSecond) << " - " << std::to_string(iFirst) << " = " << std::to_string(iResult);
iResult = iFirst - iSecond;
std::cout << "\n" << std::to_string(iFirst) << " - " << std::to_string(iSecond) << " = " << std::to_string(iResult);
iFirst = SCHAR_MIN + 1; iSecond = SCHAR_MAX - 2;
iResult = iSecond - iFirst;
std::cout << "\n" << std::to_string(iSecond) << " - " << std::to_string(iFirst) << " = " << std::to_string(iResult);
iResult = iFirst - iSecond;
std::cout << "\n" << std::to_string(iFirst) << " - " << std::to_string(iSecond) << " = " << std::to_string(iResult) << "\n\n";
And this is what I get:
Test integer arithmetic
2 - -2 = 4
-2 - 2 = -4
125 - -127 = -4
-127 - 125 = 4
What happens with iResult = iFirst - iSecond is that both variables iFirst and iSecond are first promoted to int due to the usual arithmetic conversions. The result is an int. That int result is truncated to int8_t for the assignment (in effect, the top 24 bits of the 32-bit int are cut away).
The int result of -127 - 125 is -252. With two's complement representation that will be 0xFFFFFF04. Truncation only leaves the 0x04 part. Therefore iResult will be equal to 4.
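A minimal sketch of that sequence of events (the narrowing conversion is implementation-defined before C++20, but yields these values on two's-complement platforms):
#include <cstdint>
#include <iostream>

int main()
{
    int8_t iFirst = -127;
    int8_t iSecond = 125;
    int promoted = iFirst - iSecond;                  // full int math: -252, i.e. 0xFFFFFF04
    int8_t truncated = static_cast<int8_t>(promoted); // keeps only the low byte, 0x04
    std::cout << promoted << " truncates to " << +truncated << '\n'; // prints "-252 truncates to 4"
}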
The problem is that your variable is 8 bits wide. 8 bits can hold only 256 distinct values, so your variables can only represent numbers within the -128 to 127 range. Any result outside that range will give wrong output. Both of your last calculations produce numbers beyond the variable's range (252 and -252). There is no elegant way to handle it as it is; you can only handle the overflow yourself.
PS. This is not a hardware problem. Any processor would give the same results.
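If you do need to detect the overflow yourself, one minimal sketch is to do the subtraction in the promoted int width and range-check before narrowing:
#include <cstdint>
#include <iostream>

int main()
{
    int8_t iFirst = -127;
    int8_t iSecond = 125;
    int wide = iFirst - iSecond; // the subtraction happens in int, so it cannot overflow here
    if (wide < INT8_MIN || wide > INT8_MAX)
        std::cout << wide << " does not fit in int8_t\n";
    else
        std::cout << "fits: " << wide << '\n';
}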
Consider the following code snippet:
#include <cstdint>
#include <limits>
#include <iostream>
int main(void)
{
uint64_t a = UINT32_MAX;
std::cout << "a: " << a << std::endl;
++a;
std::cout << "a: " << a << std::endl;
uint64_t b = (UINT32_MAX) + 1;
std::cout << "b: " << b << std::endl;
uint64_t c = std::numeric_limits<uint32_t>::max();
std::cout << "c: " << c << std::endl;
uint64_t d = std::numeric_limits<uint32_t>::max() + 1;
std::cout << "d: " << d << std::endl;
return 0;
}
Which gives the following output:
a: 4294967295
a: 4294967296
b: 0
c: 4294967295
d: 0
Why are b and d both 0? I cannot seem to find an explanation for this.
This behaviour is unsigned wraparound, often loosely called overflow. uint32_t takes up 4 bytes, or 32 bits, of memory. When you use UINT32_MAX you are setting each of the 32 bits to 1, which is the maximum value 4 bytes of memory can represent. The literal 1 is an int, which typically takes up 4 bytes of memory too. So you're basically adding 1 to the maximum value 4 bytes can represent. This is what the maximum value looks like in memory:
1111 1111 1111 1111 1111 1111 1111 1111
When you add one to this, there is no more room to represent the result, so all bits are set to 0 and the value wraps back to the minimum.
Although you're assigning to a uint64_t that has twice the capacity of uint32_t, it is only assigned after the addition operation is complete.
The addition operation checks the types of both the left and the right operands, and this is what decides the type of the result. If at least one operand were of type uint64_t, the other operand would automatically be promoted to uint64_t too.
If you do:
(UINT32_MAX) + (uint64_t)1;
or:
(uint64_t)(UINT32_MAX) + 1;
you'll get what you expect. In languages like C#, you can use a checked block to check for overflow and prevent this from happening implicitly.
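For reference, a compilable version of the fix; the cast widens one operand, so the addition itself is carried out in 64 bits:
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t b = (uint64_t)UINT32_MAX + 1; // the addition is done in 64 bits
    std::cout << "b: " << b << std::endl;  // prints 4294967296
}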
I want to replicate the behaviour of a micro controller.
If the memory location at the program counter contains 0x26, then I want to check whether the value in the next memory location is positive or negative.
If it is positive then I add it to the program counter PC, and if it is negative then adding it to PC essentially subtracts from it.
I am using bit masking to do this, but I am having issues determining a negative value.
{
if (value_in_mem && 128 == 128)
{
cout << "\nNext byte is : " << value_in_mem << endl;
cout << "\nNumber is positive!" << endl;
PC = PC + value_in_mem;
cout << "\n(Program Counter has been increased)" << endl;
}
else if (value_in_mem && 128 == 0)
{
cout << "\nNext byte is : - " << value_in_mem << endl;
cout << "\nNumber is negative!" << endl;
PC = PC + value_in_mem;
cout << "\n(Program Counter has been decreased)" << endl;
}
}
My method is to && the value_in_mem (an 8-bit signed int) with 128 (0b10000000) to determine whether the most significant bit is 1 or 0, negative or positive respectively.
value_in_mem is an 8-bit value written in hexadecimal, and I think this is where my confusion lies. I'm not entirely sure how negative hexadecimal values work; could someone explain this and the errors in my attempt at the code?
1) You're using &&, which is a logical AND, but you should use &, which is a bitwise AND. Note also that == binds more tightly than &, so the mask test needs parentheses:
// It would be better to use hex values when you're working with bits
if ( (value_in_mem & 0x80) == 0x80 )
{
// it's negative
}
else
{
// it's positive
}
2) You can simply compare your value to 0 (provided value_in_mem is declared as a signed type such as int8_t; plain char may be unsigned on some platforms)
if ( value_in_mem < 0 )
{
// it's negative
}
else
{
// it's positive
}
Make sure you are using the correct types for your values, or cast them where it matters. If you prefer, for example, to keep memory values as unsigned bytes most of the time (I certainly would), then cast to a signed 8-bit integer only for the particular calculation/comparison, with static_cast<int8_t>(value_in_mem).
To demonstrate the importance of correct typing, and how the C++ compiler will then do all the dirty work for you, so you don't have to bother with bits and can also use if (x < 0):
#include <cstdint>
#include <iostream>
int main()
{
{
uint16_t pc = 65530; int8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // unsigned 16 + signed 8
// 65529 (b works as -1, 65530 - 1 = 65529)
}
{
int16_t pc = 65530; int8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // signed 16 + signed 8
// -7 (b works as -1, 65530 as int16_t is -6, -6 + -1 = -7)
}
{
int16_t pc = 65530; uint8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // signed 16 + unsigned 8
// 249 (b works as +255, 65530 as int16_t is -6, -6 + 255 = 249)
}
{
uint16_t pc = 65530; uint8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // unsigned 16 + unsigned 8
// 249 (b = +255, 65530 + 255 = 65785 (0x100F9), truncated to 16 bits = 249)
}
}
I want to append two unsigned 32-bit integers into one 64-bit integer. I have tried this code, but it fails. However, it works for appending two 16-bit integers into one 32-bit integer.
Code:
char buffer[33];
char buffer2[33];
char buffer3[33];
/*
uint16 int1 = 6535;
uint16 int2 = 6532;
uint32 int3;
*/
uint32 int1 = 653545;
uint32 int2 = 562425;
uint64 int3;
int3 = int1;
int3 = (int3 << 32 /*(when I am doing 16 bit integers, this 32 turns into a 16)*/) | int2;
itoa(int1, buffer, 2);
itoa(int2, buffer2, 2);
itoa(int3, buffer3, 2);
std::cout << buffer << "|" << buffer2 << " = \n" << buffer3 << "\n";
Output when the 16bit portion is enabled:
1100110000111|1100110000100 =
11001100001110001100110000100
Output when the 32bit portion is enabled:
10011111100011101001|10001001010011111001 =
10001001010011111001
Why is it not working?
I see nothing wrong with this code. It works for me. If there's a bug, it's in the code that's not shown.
Here is a version of the given code, using standardized type declarations and iostream manipulators instead of platform-specific library calls. The bit operations are identical to the example given.
#include <iostream>
#include <iomanip>
#include <stdint.h>
int main()
{
uint32_t int1 = 653545;
uint32_t int2 = 562425;
uint64_t int3;
int3 = int1;
int3 = (int3 << 32) | int2;
std::cout << std::hex << std::setw(8) << std::setfill('0')
<< int1 << " "
<< std::setw(8) << std::setfill('0')
<< int2 << "="
<< std::setw(16) << std::setfill('0')
<< int3 << std::endl;
return (0);
}
Resulting output:
0009f8e9 000894f9=0009f8e9000894f9
The bitwise operation looks correct to me. When working with bits, hexadecimal is more convenient. Any bug, if there is one, is in the code that was not shown in the question. As far as "appending bits in C++" goes, what you have in your code appears to be correct.
Try declaring buffer3 as buffer3[65]: a 64-bit value printed in binary needs up to 64 digit characters plus the terminating null.
Edit:
In fact the output is just as expected, and you can infer it from your own result for the 16-bit input. itoa takes an int as its first parameter, which is 32 bits on your platform, so the 64-bit int3 is truncated to its low 32 bits, and those hold only int2. The upper half containing int1 is cut away before the conversion even starts, giving the result you see.
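If you only need the binary string, a sketch that sidesteps itoa, its int parameter, and the buffer sizing entirely is std::bitset:
#include <bitset>
#include <cstdint>
#include <iostream>

int main()
{
    uint32_t int1 = 653545;
    uint32_t int2 = 562425;
    uint64_t int3 = (static_cast<uint64_t>(int1) << 32) | int2;
    // std::bitset<64> prints all 64 bits; nothing is narrowed to int on the way
    std::cout << std::bitset<64>(int3) << '\n';
}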
I just want to concatenate my uint8_t array into a uint64_t. In fact, I solved my problem, but I need to understand the reason. Here is my code:
uint8_t byte_array[5];
byte_array[0] = 0x41;
byte_array[1] = 0x42;
byte_array[2] = 0x43;
byte_array[3] = 0x44;
byte_array[4] = 0x45;
cout << "index 0: " << byte_array[0] << " index 1: " << byte_array[1] << " index 2: " << byte_array[2] << " index 3: " << byte_array[3] << " index 4: " << byte_array[4] << endl;
/* This does not work */
uint64_t reverse_of_value = (byte_array[0] & 0xff) | ((byte_array[1] & 0xff) << 8) | ((byte_array[2] & 0xff) << 16) | ((byte_array[3] & 0xff) << 24) | ((byte_array[4] & 0xff) << 32);
cout << reverse_of_value << endl;
/* this works fine */
reverse_of_value = (uint64_t)(byte_array[0] & 0xff) | ((uint64_t)(byte_array[1] & 0xff) << 8) | ((uint64_t)(byte_array[2] & 0xff) << 16) | ((uint64_t)(byte_array[3] & 0xff) << 24) | ((uint64_t)(byte_array[4] & 0xff) << 32);
cout << reverse_of_value << endl;
The first output will be "44434245" and the second one will be "4544434241" (both in hexadecimal), which is what I want.
So as we see when I use casting each byte to uint64_t code works, however, if I do not use casting it gives me irrelevant result. Can anybody explain the reason?
Left-shifting a uint8_t that many bits isn't necessarily going to work. The left-hand operand will be promoted to int, whose width you don't know. It could already be 64-bit, but it could be 32-bit or even 16-bit, in which case… where would the result go? There isn't enough room for it! It doesn't matter that your code later puts the result into a uint64_t: the expression is evaluated in isolation.
You've correctly fixed that in your second version, by converting to uint64_t before the left-shift takes place. In this situation, the expression will assuredly have the desired behaviour.
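As a usage sketch, the per-byte casts can also be rolled into a loop; load_le below is a hypothetical helper name, not a library function:
#include <cstdint>
#include <iostream>

// hypothetical helper: assemble up to 8 little-endian bytes, widening each
// one to uint64_t before the shift so no intermediate result overflows
uint64_t load_le(const uint8_t *bytes, int count)
{
    uint64_t value = 0;
    for (int i = 0; i < count; ++i)
        value |= static_cast<uint64_t>(bytes[i]) << (8 * i);
    return value;
}

int main()
{
    uint8_t byte_array[5] = {0x41, 0x42, 0x43, 0x44, 0x45};
    std::cout << std::hex << load_le(byte_array, 5) << '\n'; // prints 4544434241
}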
Here is an example showing left-shift turning the char to 0. At least it does so on my machine, gcc 4.8.4, Ubuntu 14.04 LTS, x86_64.
#include <iostream>
using std::cout;
int main()
{
unsigned char ch;
ch = 0xFF;
cout << "Char before shift: " << static_cast<int>(ch) << '\n';
ch <<= 10;
cout << "Char after shift: " << static_cast<int>(ch) << '\n';
}
Note also, as mentioned in my comment on the original question: on some platforms, the 0x45 shifted by 32 bits actually ends up in the least significant byte of the 64-bit value, because the hardware masks the shift count (x86, for instance, reduces a 32-bit shift count modulo 32, so a shift by 32 behaves like a shift by 0).
Shifting a value by the width of its (promoted) type or more is undefined behavior in C++. See this answer for more detail: https://stackoverflow.com/a/7401981/1689844
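A minimal illustration of the safe pattern, widening before the shift so the count stays below the operand's width:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t byte = 0x45;
    // byte << 32 would promote byte to int and, with a 32-bit int, shift by
    // the full width of the type, which is undefined behavior
    uint64_t good = static_cast<uint64_t>(byte) << 32; // well-defined: 32 < 64
    std::cout << std::hex << good << '\n'; // prints 4500000000
}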