Why does comparing unsigned long with negative number result in false? [duplicate] - c++

unsigned long mynum = 7;
if(mynum > -1) // false
Why does this happen? Is it because -1 is an int, and when it gets converted to unsigned long, it becomes the maximum value of unsigned long?

This might not be right, but here's what I think:
When you execute the following code
unsigned long a = -8;
std::cout << a;
Since unsigned values can't go below 0, the value wraps around: -8 becomes the maximum value of an unsigned long plus one, minus 8, which is 4294967288 when unsigned long is 32 bits.
And that's what happened to the -1 in your operation when it got converted to an unsigned long.
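To make this concrete, here is a minimal sketch (assuming unsigned long is 64 bits, as on most 64-bit Linux systems; with a 32-bit unsigned long the printed maximum would be 4294967295) that reproduces the comparison from the question:
#include <iostream>

int main()
{
    unsigned long mynum = 7;
    // -1 is converted to unsigned long before the comparison,
    // so the right-hand side becomes the largest unsigned long value.
    std::cout << (mynum > -1) << '\n';       // prints 0 (false)
    std::cout << (unsigned long)-1 << '\n';  // prints 18446744073709551615 with a 64-bit unsigned long
    return 0;
}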

Unsigned variables have no sign bit, so every bit contributes to the magnitude and the values are never negative.
Assigning a negative value to an unsigned variable stores the corresponding unsigned value (the negative value modulo 2^N).
-1 and 255 have the same bit pattern in an 8-bit type:
#include <iostream>

int main()
{
    unsigned char uc1 = 255; // 11111111
    unsigned char uc2 = -1;  // also 11111111
    // signed  : -1  : 11111111 = -128 + 127
    // unsigned: 255 : 11111111 =  128 + 127
    if (uc1 == uc2)
        std::cout << "uc1 = uc2" << std::endl;
    return 0;
}

This is because of the implicit conversion performed by the compiler.
When an operation involves two operands of different types, the compiler temporarily converts one operand to the other type, following the usual arithmetic conversions.
Here -1 temporarily acts as an unsigned long because the other operand is of that type.
So -1 is not treated as -1 but as its unsigned long equivalent, which is the maximum value of unsigned long.
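If you need the comparison to follow the mathematical values, you have to stop the conversion from happening. Below is a minimal sketch of two common workarounds; std::cmp_greater requires C++20 and <utility>, and the cast variant assumes the unsigned value fits into long long:
#include <iostream>
#include <utility>

int main()
{
    unsigned long mynum = 7;
    // C++20: compares the mathematical values, no surprising conversion
    std::cout << std::cmp_greater(mynum, -1) << '\n';           // prints 1 (true)
    // Pre-C++20: convert the unsigned operand to a wider signed type first
    std::cout << (static_cast<long long>(mynum) > -1) << '\n';  // prints 1 (true)
    return 0;
}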

Related

Why does NOT give an unexpected result when flipping 1? [duplicate]

I am currently learning the process of bitwise operators, and I've come across something that I can't quite figure out.
I'm currently working with the NOT (~) operator, which should invert all of the bits. To get a better understanding of it, I've tried creating a program that will flip the bits of the number 1.
#include <iostream>
using namespace std;

int main()
{
    int x = 1;  // 0001 in binary
    int y = ~x; // should flip the bits, so it's 1110, or 14
    cout << y;
}
However, when running this, I am getting -2 as the result. Can anyone offer an explanation as to why this isn't working?
You're using signed integers. Your expected result (14) would occur if you were using unsigned integers (and if the integer were only 4 bits, which it is not).
Instead, with signed integers, all values with the highest bit set are negative values - that's how Two's Complement works. For example, if you have 16 bits, then values 0b0000_0000_0000_0000 through 0b0111_1111_1111_1111 are assigned to the positive values ("0" through "32767") meanwhile values 0b1000_0000_0000_0000 through 0b1111_1111_1111_1111 are assigned to the negative values ("-32768" through "-1").
Also, int is usually more than 4 bits; it's often 32 bits. The bitwise complement of 1 would then be 0b1111_1111_1111_1111_1111_1111_1111_1110, not 0b1110.
0b1111_1111_1111_1111_1111_1111_1111_1110 when treated as a signed integer is -2 using the Two's Complement rules.
0b1111_1111_1111_1111_1111_1111_1111_1110 when treated as an unsigned integer is 4,294,967,294, equal to 2^32 - 2.
#include <cstdio>
#include <cstdint>

// Program prints the following:
//   unsigned int_8:  254
//   signed int_8:    -2
//   unsigned int_16: 65534
//   signed int_16:   -2
//   unsigned int_32: 4294967294
//   signed int_32:   -2
int main() {
    printf( "unsigned int_8: %hhu\n", (uint8_t)~1 );
    printf( "signed int_8: %d\n", (int8_t)~1 );
    printf( "unsigned int_16 %hu\n", (uint16_t)~1 );
    printf( "signed int_16 %d\n", (int16_t)~1 );
    printf( "unsigned int_32 %u\n", (uint32_t)~1 );
    printf( "signed int_32 %d\n", (int32_t)~1 );
}

Why are these two numbers comparing equal?

I'm wondering why it's the case that these two numbers are comparing equal. I had a (maybe false) realisation that I messed up my enums, because in my enums I often do:
enum class SomeFlags : unsigned long long { ALL = ~0, FLAG1 = 1, FLAG2 };
And I thought that, since 0 is an int being assigned to an unsigned long long, there had to be a mistake.
#include <iostream>

unsigned long long number1 = ~0;
/* I expect the 0 (which is an int) to be flipped, which would make it
   the max value of an unsigned int. It would then get promoted and that
   value would be assigned to the unsigned long long. So number1 should
   have the value of the max value of unsigned int. */
int main()
{
    unsigned long long number2 = (unsigned long long)0;
    number2 = number2 - 1;
    /* Here I expect number2 to have the max value of unsigned long long */
    if (number1 == number2)
    {
        std::cout << "Is same\n";
        // They are the same, but how???
    }
}
This is really confusing. To emphasise the point:
unsigned long long number1 = int(~0);
/* I feel like I'm saying:
- initialize an int with the value of zero with its bits flipped
- assign it to the unsigned long long
And this somehow ends up as the max value of unsigned long long???*/
cppreference: "The result of operator~ is the bitwise NOT (one's complement) value of the argument (after promotion)."
No promotion (to int or unsigned int) is needed here, since 0 is already an int - but flipping all bits of a signed type holding the value 0 gives -1 (all bits set), which, when converted to an unsigned type, becomes the largest value that type can hold.
To avoid this, make the 0 unsigned. The result of ~0U is an unsigned int with all bits set, which converts to any larger unsigned type without changing its value.
unsigned long long number1 = ~0U;
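A small compile-time sketch of this difference (assuming the usual 32-bit unsigned int and 64-bit unsigned long long):
#include <climits>

// ~0 is an int with value -1; converting it to unsigned long long
// yields the largest unsigned long long value (all 64 bits set).
static_assert(static_cast<unsigned long long>(~0) == ULLONG_MAX,
              "~0 converts to the maximum unsigned long long");

// ~0U is an unsigned int with value UINT_MAX; converting it to
// unsigned long long preserves the value (only the low 32 bits set).
static_assert(static_cast<unsigned long long>(~0U) == UINT_MAX,
              "~0U converts to UINT_MAX");

int main() {}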
You seem to be thinking that the conversion from int to unsigned long long is done just by prepending zero bits in front of the int. While that is true when converting non-negative values, it is not true for negative ones.
unsigned long long number1 = ~0;
is identical to
unsigned long long number1 = -1;
so when the conversion takes place, it is not simply a matter of copying the bits into number1.
Unsigned arithmetic in C++ is defined to wrap modulo 2^N.
That means that if you have N bits, then any operation OP works like this:
result = (a OP b) mod 2^N
Say you have 8 bits and subtraction as the operation:
result1 = (0 - 1) mod 256 = -1 mod 256 = 255 = 0b11111111
result2 = ~0b00000000 = 0b11111111 = 255
The same holds for larger N, and hence your result.
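Here is a runnable sketch of the 8-bit case. Note that in C++ the subtraction and the ~ are actually performed in int because of integer promotion; the wrap to 255 happens when the result is converted back to unsigned char:
#include <iostream>

int main()
{
    unsigned char zero = 0;
    unsigned char result1 = zero - 1; // (0 - 1) mod 256 = 255
    unsigned char result2 = ~zero;    // ~0 is -1 as an int; stored back into unsigned char it becomes 255
    std::cout << static_cast<int>(result1) << '\n'; // prints 255
    std::cout << static_cast<int>(result2) << '\n'; // prints 255
    return 0;
}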
In C/C++ the conversion of an integer is governed by the type of the SOURCE argument, not the DESTINATION.
So conversion from a signed integer type is sign-extended, regardless of the type of the destination, be it signed or unsigned.
Conversion from an unsigned integer type is zero-extended, again regardless of the type of the destination.
In your case 0 is a signed int and ~0 is also a signed int (with value -1), so it gets sign-extended when converted to unsigned long long.
The conversion from int to unsigned long long is specified based on the value of the int (aka "the source integer") not the bit pattern. The value is -1 in this case:
4.7 Integral conversions [conv.integral]
A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
For an integer with a value of -1 the "least unsigned integer congruent to the source integer" modulo 2^64 is (-1 + 2^64).
The simple-to-understand effect of this rule is that int is sign-extended when being converted to an unsigned type.

Wrap around range of unsigned int in C++?

I am going through C++ Primer 5th Edition and am currently doing the signed/unsigned section. A quick question I have is when there is a wrap-around, say, in this block of code:
unsigned u = 10;
int i = -42;
std::cout << i + i << std::endl; // prints -84
std::cout << u + i << std::endl; // if 32-bit ints, prints 4294967264
I thought that the maximum value was 4294967295 (counting 0 as part of the range), so I was wondering why the wrap-around seems to be done with 4294967296 in this problem.
Unsigned arithmetic is modulo (maximum value of the type plus 1).
If the maximum value of an unsigned int is 4294967295 (2^32 - 1), the result will be mathematically equal to (10 - 42) modulo 4294967296, which equals 10 - 42 + 4294967296, i.e. 4294967264.
When an out-of-range value is converted to an unsigned type, the result is the remainder of it modulo the number of values the target unsigned type can hold. For instance, the result of n converted to unsigned char is n % 256, because unsigned char can hold values 0 to 255.
It's similar in your example, the wrap-around is done using 4294967296, the number of values that a 32-bit unsigned integer can hold.
Given an unsigned int that is 32 bits, you're correct that the range is [0, 4294967295].
Therefore -1 converts to 4294967295, which is 4294967296 - 1; that should explain the behavior you're seeing.
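A short sketch of that arithmetic (assuming 32-bit int and unsigned int), reproducing the book's example and spelling out the converted value:
#include <iostream>

int main()
{
    unsigned u = 10;
    int i = -42;
    // i is first converted to unsigned: -42 + 4294967296 = 4294967254.
    // The sum 10 + 4294967254 = 4294967264 is already below 2^32, so no further wrap is needed.
    std::cout << u + i << '\n';             // prints 4294967264
    std::cout << 10u + 4294967254u << '\n'; // prints 4294967264 as well
    return 0;
}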

adding unsigned int to int [duplicate]

#include <iostream>

int main()
{
    using namespace std;
    unsigned int i = 4;
    int a = -40;
    cout << a + i << endl;
    return 0;
}
Executing this gives me 4294967260
I know there's a conversion taking place, from a signed int to unsigned int,
but how and why this particular value?
I noticed it's close to the sum of | 2147483647 | + 2147483647
When an unsigned int and an int are added together, the int is first converted to unsigned int before the addition takes place (and the result is also an unsigned int).
-1, while being the first negative number, is actually equivalent to the largest unsigned value - that is, (unsigned int)-1 == UINT_MAX.
-2 in unsigned form is UINT_MAX - 1, and so on, so -40 becomes UINT_MAX - 39, which is 4294967256 (when using 32-bit ints).
Of course, adding 4 then gives your answer:
4294967256 + 4 = 4294967260.
This is a great quiz where you can learn some of the rules of integers in C (and similarly C++): http://blog.regehr.org/archives/721
Represent i and a in hexadecimal:
i = 4:   0x00000004
a = -40: 0xFFFFFFD8
Following the implicit conversion rules of C++, a in a + i will be converted to unsigned int, that is, 4294967256. So a + i = 4294967260.
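As a sketch (assuming 32-bit int and unsigned int), the same values can be printed in hexadecimal to make the bit patterns visible:
#include <iostream>

int main()
{
    unsigned int i = 4;
    int a = -40;
    std::cout << std::hex << std::showbase;
    std::cout << i << '\n';                            // 0x4
    std::cout << static_cast<unsigned int>(a) << '\n'; // 0xffffffd8, i.e. 4294967256
    std::cout << a + i << '\n';                        // 0xffffffdc, i.e. 4294967260
    return 0;
}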

Weird result after assigning 2^31 to a signed and unsigned 32-bit integer variable

As the question title reads, assigning 2^31 to a signed and unsigned 32-bit integer variable gives an unexpected result.
Here is the short program (in C++), which I made to see what's going on:
#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1 << 31;
    long long n2 = 1 << 31; // this works as expected
    printf("%llu\n", n);
    printf("%lld\n", n2);
    printf("size of ULL: %d, size of LL: %d\n", sizeof(unsigned long long), sizeof(long long));
    return 0;
}
Here's the output:
MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968 <- Should be 2^31 right?
-2147483648 <- This is correct ( -2^31 because of the sign bit)
size of ULL: 8, size of LL: 8
I then added another function p(), to it:
void p()
{
    unsigned long long n = 1 << 32; // since n is 8 bytes, this should be legal for any integer from 32 to 63
    printf("%llu\n", n);
}
On compiling and running, this is what confused me even more:
MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test
0
MyPC /
Why should the compiler complain about left shift count being too large? sizeof(unsigned long long) returns 8, so doesn't that mean 2^63-1 is the max value for that data type?
It struck me that maybe n*2 and n<<1, don't always behave in the same manner, so I tried this:
void s()
{
    unsigned long long n = 1;
    for (int a = 0; a < 63; a++) n = n * 2;
    printf("%llu\n", n);
}
This gives the correct value of 2^63 as the output, which is 9223372036854775808 (I verified it using python). But what is wrong with doing a left shift?
A left arithmetic shift by n is equivalent to multiplying by 2^n
(provided the value does not overflow)
-- Wikipedia
The value is not overflowing; at most a minus sign would appear if the value were read as signed, since 2^63 has the top bit set.
I'm still unable to figure out what's going on with left shift, can anyone please explain this?
PS: This program was run on a 32-bit system running linux mint (if that helps)
On this line:
unsigned long long n = 1<<32;
The problem is that the literal 1 is of type int - which is probably only 32 bits. Therefore the shift will push it out of bounds.
Just because you're storing into a larger datatype doesn't mean that everything in the expression is done at that larger size.
So to correct it, you need to either cast it up or make it an unsigned long long literal:
unsigned long long n = (unsigned long long)1 << 32;
unsigned long long n = 1ULL << 32;
The reason 1 << 32 fails is because 1 doesn't have the right type (it is int). The compiler doesn't do any converting magic before the assignment itself actually happens, so 1 << 32 gets evaluated using int arithmetic, triggering the warning about the shift count being too large.
Try using 1LL or 1ULL instead which respectively have the long long and unsigned long long type.
The line
unsigned long long n = 1<<32;
results in an overflow, because the literal 1 is of type int, so 1 << 32 is also an int, which is 32 bits in most cases.
The line
unsigned long long n = 1<<31;
also overflows, for the same reason. Note that 1 is of type signed int, so it really only has 31 bits for the value and 1 bit for the sign. So when you shift 1 << 31, it overflows the value bits, resulting in -2147483648, which is then converted to an unsigned long long, giving 18446744071562067968. You can verify this in the debugger, if you inspect the variables and convert them.
So use
unsigned long long n = 1ULL << 31;
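Putting the fixes together, here is a small sketch (assuming 32-bit int and 64-bit long long) showing that shifting a 64-bit literal behaves as the original program expected:
#include <cstdio>

int main()
{
    // 1 is an int, so 1 << 31 and 1 << 32 misbehave; 1ULL makes the shift happen in 64 bits.
    unsigned long long a = 1ULL << 31;
    unsigned long long b = 1ULL << 32;
    unsigned long long c = 1ULL << 63;
    printf("%llu\n", a); // 2147483648
    printf("%llu\n", b); // 4294967296
    printf("%llu\n", c); // 9223372036854775808
    return 0;
}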