int to unsigned int conversion - c++

I'm just amazed to know that I can't convert signed to unsigned int by casting!
int i = -62;
unsigned int j = (unsigned int)i;
I thought I already knew this since I started to use casts, but I can't do it!

You can convert an int to an unsigned int. The conversion is valid and well-defined.
Since the value is negative, UINT_MAX + 1 is added to it so that the value is a valid unsigned quantity. (Technically, 2^N is added to it, where N is the number of bits used to represent the unsigned type.)
In this case, since int on your platform has a width of 32 bits, 62 is subtracted from 2^32, yielding 4,294,967,234.

Edit: As has been noted in the other answers, the standard actually guarantees that "the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type)". So even if your platform did not store signed ints as two's complement, the behavior would be the same.
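To see the rule in action, here is a minimal sketch (assuming a 32-bit unsigned int, as in the question) that checks the converted value against UINT_MAX:
#include <climits>
#include <iostream>

int main() {
    int i = -62;
    unsigned int j = (unsigned int)i;           // well-defined: conceptually -62 + (UINT_MAX + 1)
    std::cout << j << '\n';                     // 4294967234 with a 32-bit unsigned int
    std::cout << (j == UINT_MAX - 61) << '\n';  // 1 (true)
}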
Apparently your signed integer -62 is stored in two's complement (Wikipedia) on your platform:
62 as a 32-bit integer written in binary is
0000 0000 0000 0000 0000 0000 0011 1110
To compute the two's complement (for storing -62), first invert all the bits
1111 1111 1111 1111 1111 1111 1100 0001
then add one
1111 1111 1111 1111 1111 1111 1100 0010
And if you interpret this as an unsigned 32-bit integer (as your computer will do if you cast it), you'll end up with 4294967234 :-)
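If you want to inspect the bit pattern yourself, std::bitset will print it directly. A small sketch, assuming a 32-bit int as above:
#include <bitset>
#include <iostream>

int main() {
    int i = -62;
    std::cout << std::bitset<32>(i) << '\n';   // 11111111111111111111111111000010
    std::cout << (unsigned int)i << '\n';      // 4294967234
}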

This conversion is well defined and will yield the value UINT_MAX - 61. On a platform where unsigned int is a 32-bit type (most common platforms, these days), this is precisely the value that others are reporting. Other values are possible, however.
The actual language in the standard is
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).

i = -62. If you want to convert it to an unsigned representation, it would be 4294967234 for a 32-bit integer.
A simple way would be:
int num = -62;
unsigned int n;
n = num;
cout << n;  // prints 4294967234

With a little help of math (note that this stores the absolute value, which is not the same as the modular conversion discussed above):
#include <cstdlib>

int main() {
    int a = -1;
    unsigned int b;
    b = std::abs(a);  // b == 1, the absolute value of a
}

Since we know that i is an int, you can simply convert it to unsigned!
This would do the trick:
int i = -62;
unsigned int j = unsigned(i);

Since C++14, one can use std::make_unsigned_t (from <type_traits>):
auto uint_variable = std::make_unsigned_t<int>(-62); // 4294967234
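As a quick usage note, the benefit is that the destination type is derived from the source type, so the cast does not hard-code unsigned int. A minimal sketch (the variable names are just for illustration):
#include <type_traits>
#include <iostream>

int main() {
    int i = -62;
    auto u = std::make_unsigned_t<decltype(i)>(i);  // equivalent to unsigned int(i) here
    static_assert(std::is_same<decltype(u), unsigned int>::value,
                  "unsigned counterpart of int");
    std::cout << u << '\n';                         // 4294967234
}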

Related

Wrong output in C++: i = 65530, and when we print the value of i it gives -6 instead of 65530. Why?

I was creating a simple variable printing program and there is unexpected output.
The program gives output -6, but I would have expected 65530.
Why?
#include <iostream>
int main() {
    short int i = 65530;
    std::cout << i;
}
This has to do with the binary representation of the type you have used.
As a 16-bit binary number: 65530 === 1111 1111 1111 1010
But you have used short int, which is a signed type: in its binary representation, 1 bit is the sign and 15 bits are the value:
(1)111 1111 1111 1010
So why are there so many 1's in the representation?
Why don't the 15 bits look like a 6 in binary ( (1)000 0000 0000 0110)?
It's because of the way that negative numbers are represented in binary.
To represent signed numbers in binary, a format is used which is called Two's complement.
So here is an example of this transformation for the number -6:
Take the number and transform it to binary: 6 in binary === 0000 0000 0000 0110
Exchange 1's to 0's and 0's to 1's: 1111 1111 1111 1001
To get the result, add 1 to the previous transformation: 1111 1111 1111 1010
As you can see, the binary representation of (unsigned) 65530 is exactly the same as that of (signed) -6.
It's all in the interpretation of the bits.
That's why you have to be careful near the maximum values representable in a type.
In this case, to store this value you could (see the sketch after this list):
Change the type to unsigned short int
Change to a larger type.
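A minimal sketch of both fixes (assuming a 16-bit short and a 32-bit int, as on the asker's platform):
#include <iostream>

int main() {
    unsigned short int a = 65530;  // fits: unsigned short holds 0..65535
    int b = 65530;                 // fits: a larger signed type
    std::cout << a << '\n';        // 65530
    std::cout << b << '\n';        // 65530
}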
You declared i as a short int, which is a 16-bit signed type. This means that the highest number it can represent is actually 32767 (2^15 - 1) and the smallest is -32768 (-2^15). 65530 overflows that limit. So if you had assigned 32768, it would instead wrap to -32768, 32769 would wrap to -32767, and so on. See more topics on binary representations of signed numbers to understand this process better.

Squaring a number in C++ yields wrong value

If I do
int n = 100000;
long long x = n * n;
then x == 1410065408
1410065408 is 2^31, yet I expect x to be 64 bit
What is going on?
I'm using VSC++ ( default VS c++ compiler )
n*n is too big for an int because it equals 10^10. The multiplication is done in int, and the (erroneous) result is then stored in the long long.
Try:
long long n = 100000;
long long x = n*n;
Here's an answer that references the standard that specifies that the operation long long x = (long long)n*n where n is an int will not cause data loss. Specifically,
If both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
Since the cast binds more tightly than the multiplication, it converts the left multiplicand to a long long. The right multiplicand of type int then gets converted to a long long according to the standard. No loss occurs.
Declaring n as a long long is the best solution as mentioned previously.
Just as a quick clarification to the original post, 1410065408 is not 2^31, the value comes about as follows:
100,000 ^ 2 = 10,000,000,000 which exists in binary form as:
10 0101 0100 0000 1011 1110 0100 0000 0000
int is a 32-bit type on this platform (that is common, but not guaranteed). Therefore, the front two bits are discarded and the value is stored in memory as the binary:
0101 0100 0000 1011 1110 0100 0000 0000
In decimal, this is equal to exactly 1410065408.
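A quick sketch of that truncation (assuming a 32-bit int), computing 10,000,000,000 modulo 2^32 explicitly:
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t full = 10000000000ULL;                    // the exact value of 100000 * 100000
    std::uint32_t low = static_cast<std::uint32_t>(full);   // keep only the low 32 bits
    std::cout << low << '\n';                               // 1410065408
}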
Edit - This is another solution to the problem; what this will do is cast the integer values to a long long before the multiplication so you don't get truncation of bits.
Original Posting
int n = 100000;
long long x = static_cast<long long>( n ) * static_cast<long long>( n );
Edit - The original answer provided by Jossie Calderon was already accepted as a valid answer and this answer adds another valid solution.

reinterpret_cast of signed int reference

Why is the result 4294967292 instead of -4 even when both integer types are signed?
#include <iostream>

void f(int & i)
{
    i = -4;
}

int main()
{
    long long i = 0;
    f(reinterpret_cast<int &>(i));
    std::cout << i << std::endl;
    return 0;
}
long long seems to be a 64-bit type and int a 32-bit type on your architecture. You're using a 32-bit int reference, so you only modified the less significant half (you're evidently compiling on an ordinary x86 machine, which uses little endian and two's complement for signed integer representation).
The sign bit, as well as the more significant half are all 0s (as initialized). It would have to be 1s to print the same negative value:
0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1100
^                                       ^
sign bit of long long                   sign bit of int
reinterpret_cast only guarantees that you get your int back with another reinterpret_cast, so you should either read the value back as an int, or give the function a long long & parameter (see the sketch below).
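A minimal sketch of those two options (the function names are hypothetical, and a 64-bit long long / 32-bit int platform is assumed):
#include <iostream>

void f_int(int & i)      { i = -4; }   // keep the int interface...
void f_ll(long long & i) { i = -4; }   // ...or take long long directly

int main()
{
    long long a = 0;
    f_ll(a);                            // no reinterpret_cast needed
    std::cout << a << '\n';             // -4

    long long b = 0;
    f_int(reinterpret_cast<int &>(b));
    // read the value back through the same int view it was written through
    std::cout << reinterpret_cast<int &>(b) << '\n';   // -4
}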

1's complement using ~ in C/C++

I am using Visual Studio 2013.
Recently I tried the ~ operator for 1's complement:
int a = 10;
cout << ~a << endl;
Output is -11
But for
unsigned int a = 10;
cout << ~a << endl;
the output is 4294967296
I don't get why the output is -11 in the case of signed int.
Please help me with this confusion.
When you put number 10 into 32-bit signed or unsigned integer, you get
0000 0000 0000 0000 0000 0000 0000 1010
When you apply ~ to it (flip every bit), you get
1111 1111 1111 1111 1111 1111 1111 0101
These 32 bits mean 4294967285 as an unsigned integer, or -11 as a signed integer (your computer represents negative integers as Two's complement). They can also mean a 32-bit floating point number or four 8-bit characters.
Bits don't have any "absolute" meaning. They can represent anything, depending on how you "look" at them (which type they have).
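A small sketch of that "same bits, different interpretation" point, using std::bitset to show the raw pattern:
#include <bitset>
#include <iostream>

int main() {
    int a = 10;
    std::cout << std::bitset<32>(~a) << '\n';            // 11111111111111111111111111110101
    std::cout << ~a << '\n';                             // -11 (read as signed)
    std::cout << static_cast<unsigned int>(~a) << '\n';  // 4294967285 (read as unsigned)
}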
The ~ operator performs a ones' complement on its argument, and it does not matter whether the argument is a signed or unsigned integer. It merely flips all the bits, so
0000 0000 0000 1010 (bin) / 10 (dec)
becomes
1111 1111 1111 0101 (bin)
(where, presumably, these numbers are 32 bits wide -- I omitted 16 more 0's and 1's.)
How will cout display the result? It looks at the original type. For a signed integer, the most significant bit is its sign. Since the most significant bit of 10 is 0, flipping all the bits makes it 1, so the result is going to be negative. To display a negative number as a positive one, you need the two's complement: invert all bits, then add 1. For example, -1, binary 111..111, displays as (inverting) 000..000, then +1: 000..001, printed with a minus sign. Result: -1.
Applying this to the ones' complement of 10, you get 111..110101 -> inverting to 000..001010, then add 1: 000..001011, i.e. 11 with a minus sign. Result: -11.
For an unsigned number, cout doesn't do this (naturally), and so you get a large number: the largest possible integer minus the original number.
In memory, 4294967285 is stored in both cases (the 4294967296 in the question is probably a typo; that would need 33 bits). The meaning of this number depends on which signedness you use:
if it's signed, the number is -11.
if it's unsigned, then it's 4294967285.
These are different interpretations of the same bit pattern.
You can reinterpret it as unsigned by casting it, same result:
int a = 10;
cout << (unsigned int) ~a << endl;
Try this (it complements only the significant bits of the number, i.e. it drops the leading zeros before flipping):
#include <cstddef>

unsigned int getOnesComplement(unsigned int number) {
    unsigned int onesComplement = 1;
    if (number < 1)
        return onesComplement;

    // index of the most significant bit
    size_t size = sizeof(unsigned int) * 8 - 1;
    unsigned int oneShiftedToMSB = 1u << size;  // unsigned literal avoids shifting into the sign bit
    unsigned int shiftedNumber = number;

    // shift left until the highest set bit of 'number' reaches the MSB,
    // then flip all bits and shift back so the leading zeros are not complemented
    for (size_t bitsToBeShifted = 0; bitsToBeShifted < size; bitsToBeShifted++) {
        shiftedNumber = number << bitsToBeShifted;
        if (shiftedNumber & oneShiftedToMSB) {
            onesComplement = ~shiftedNumber;
            onesComplement = onesComplement >> bitsToBeShifted;
            break;
        }
    }
    return onesComplement;
}

the idea behind unsigned integer [duplicate]

Possible Duplicate:
What happens if I assign a negative value to an unsigned variable?
I'm new at C++ and I want to know how to use unsigned types. For the unsigned int type, I know that it can take values from 0 to 4294967295. But when I initialize an unsigned int as follows:
unsigned int x = -10;
cout << x;
The output is 4294967286.
This output equals 2^32 - 10. So I want to learn what is happening in memory. What kind of process takes place during this calculation? Thanks for your answers.
You're encountering wrap around behavior.
Unsigned types are cyclic (signed types, on the other hand, may or may not wrap, because signed overflow is undefined behavior that you shouldn't rely on). That is to say, one less than the minimum possible value is the maximum possible value. You can demonstrate this yourself with the following snippet:
#include <iostream>

int main()
{
    unsigned int x = 5;
    for (int i = 0; i < 10; ++i) std::cout << x-- << std::endl;
    return 0;
}
You'll notice that after reaching zero, the value of x jumps to 2^32-1, the maximum representable value. Subtracting further acts as expected.
When you subtract 1 from unsigned 0, the bit pattern changes in the following way:
0000 0000 0000 0000 0000 0000 0000 0000 // before (0)
1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
With unsigned numbers, negative numbers are treated like positive numbers subtracted from zero. So (unsigned int) -10 will equal ((unsigned int) 0) - ((unsigned int) 10).
I like to think about it as an unsigned int being the lowest 32 bits of a higher-precision arbitrary value. Like this:
v imaginary high order bit
1 0000 0000 0000 0000 0000 0000 0000 0000 // before (2^32)
0 1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
The behavior of the unsigned int in these overflow cases is exactly the same as the behavior of the low 8 bits of an unsigned int when you subtract 1 from 256. It makes more sense to look at an unsigned char (1 byte) like this, because the values 0 and 256 are equal if cast to unsigned char, since the limited precision discards the extra bits.
0 0000 0000 0000 0000 0000 0001 0000 0000 // before (256)
0 0000 0000 0000 0000 0000 0000 1111 1111 // after (255)
As others have pointed out, this is called modular arithmetic. Using higher-precision values to help visualize the transitions made when wrapping around works because you mask off the high-order bits. It doesn't matter what they were, so they can be anything; they just get discarded. Unsigned ints are values modulo 2^32, so any multiple of 2^32 equals zero in that space. That's why I can get away with pretending there's an extra bit on the end.
Modulus operations have their own dedicated operator in case you need to compute them for numbers other than 2^32 in your programs, as used in this statement:
int forty_mod_twelve = 40 % 12;
// value is 4: 4 + n * 12 == 40 for some whole number n
Modulus operations on powers of two (like 2^32) simplify directly to masking off high order bits, and if you take a 64 bit integer and compute it modulo 2^32, the value will be exactly the same as if you had converted it to an unsigned int.
01011010 01011100 10000001 00001101 11111111 11111111 11111111 11111111 // before
00000000 00000000 00000000 00000000 11111111 11111111 11111111 11111111 // after
Programmers like to use this property to speed up programs, because it's easy to chop off some number of bits, but performing a modulus operation is much harder (it's about as hard as doing a division).
Does that make sense?
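A quick sketch of that equivalence: masking, taking the value modulo 2^32, and converting to unsigned int all agree (assuming a 64-bit unsigned long long and a 32-bit unsigned int):
#include <iostream>

int main() {
    unsigned long long big = 0x5A5C810DFFFFFFFFULL;    // the 64-bit pattern from the example above
    unsigned long long masked = big & 0xFFFFFFFFULL;   // mask off the high-order bits
    unsigned long long mod = big % 0x100000000ULL;     // modulo 2^32
    unsigned int conv = static_cast<unsigned int>(big);
    std::cout << masked << ' ' << mod << ' ' << conv << '\n';  // all three print the same value
}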
This involves the standard integral conversions. Here's the applicable rule. We start with the type of the literal 10:
2.14.2 Integer literals [lex.icon]
An integer literal is a sequence of digits that has no period or exponent part. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits. An octal integer literal (base eight) begins with the digit 0 and consists of a sequence of octal digits. A hexadecimal integer literal (base sixteen) begins with 0x or 0X and consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen. [ Example: the number twelve can be written 12, 014, or 0XC. — end example ]
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
A table follows, the first type is int and it fits. So the literal's type is int.
The unary minus operator is applied, which doesn't change the type. Then the following rule is applied:
4.7 Integral conversions [conv.integral]
A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
Instead of printing the value the way you are, print it in hexadecimal format (sorry, I forget how to do that with cout but I know it's possible). You'll see that the representation is the same for both values.
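For reference, a small sketch of how to do that with cout using std::hex (assuming a 32-bit int):
#include <iostream>

int main() {
    int s = -10;
    unsigned int u = -10;
    std::cout << std::hex << s << '\n';  // fffffff6
    std::cout << std::hex << u << '\n';  // fffffff6 -- same representation
}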
From your context, an integer is 32 bits (this is not always the case). When using a signed integer, the most significant bit is the sign, not part of the value. When using an unsigned integer, the most significant bit is part of the value.