If I do
int n = 100000;
long long x = n * n;
then x == 1410065408
1410065408 is 2^31, yet I expect x to be 64 bit
What is going on?
I'm using VS C++ (the default Visual Studio C++ compiler).
n*n is too big for an int because it is equal to 10^10. The multiplication is carried out in int arithmetic and overflows; only the (erroneous) 32-bit result is then converted to long long.
Try:
long long n = 100000;
long long x = n*n;
Here's an answer that references the standard to show that the operation long long x = (long long)n * n, where n is an int, will not cause data loss. Specifically,
If both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
Since the cast binds more tightly than the multiplication, it converts only the left operand to long long. The right operand of type int is then converted to long long according to the rule above, so the multiplication is carried out in long long and no loss occurs.
Declaring n as a long long is the best solution as mentioned previously.
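For reference, here is a minimal sketch of both fixes discussed above (it assumes a 64-bit long long, which is what you get with Visual C++):

#include <iostream>

int main() {
    int n = 100000;
    // Cast one operand; the other int operand is then converted to long long too.
    long long x = static_cast<long long>(n) * n;

    // Or declare the variable as long long in the first place.
    long long m = 100000;
    long long y = m * m;

    std::cout << x << '\n' << y << '\n';  // both print 10000000000
    return 0;
}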
Just as a quick clarification to the original post, 1410065408 is not 2^31; the value comes about as follows:
100,000 ^ 2 = 10,000,000,000, which in binary form is:
10 0101 0100 0000 1011 1110 0100 0000 0000
An int is 32 bits on this platform (the standard does not require exactly 32, but that is what you get here), so the top two bits are discarded and the value is stored in memory as the binary:
0101 0100 0000 1011 1110 0100 0000 0000
In decimal, this is equal to exactly 1410065408.
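To see that truncation directly, here is a small sketch (assuming a 32-bit int and a 64-bit long long):

#include <iostream>

int main() {
    long long full = 10000000000LL;                      // 100000 * 100000, computed without overflow
    int truncated = static_cast<int>(full & 0xFFFFFFFF); // keep only the low 32 bits
    std::cout << truncated << '\n';                      // prints 1410065408
    return 0;
}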
Edit - This is another solution to the problem; what this will do is cast the integer values to a long long before the multiplication so you don't get truncation of bits.
Original Posting
int n = 100000;
long long x = static_cast<long long>( n ) * static_cast<long long>( n );
Edit - The original answer provided by Jossie Calderon was already accepted as a valid answer and this answer adds another valid solution.
I am using Visual Studio 2013.
Recently I tried the ~ operator for 1's complement:
int a = 10;
cout << ~a << endl;
Output is -11
But for
unsigned int a = 10;
cout << ~a << endl;
the output is 4294967296
I don't get why the output is -11 in the case of signed int.
Please help me with this confusion.
When you put number 10 into 32-bit signed or unsigned integer, you get
0000 0000 0000 0000 0000 0000 0000 1010
When you negate it, you get
1111 1111 1111 1111 1111 1111 1111 0101
These 32 bits mean 4294967285 as an unsigned integer, or -11 as a signed integer (your computer represents negative integers in two's complement). The same bits could also mean a 32-bit floating-point number or four 8-bit characters.
Bits don't have any "absolute" meaning. They can represent anything, depending on how you "look" at them (which type they have).
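A small sketch to illustrate this point: the same bits, printed through two different types.

#include <iostream>

int main() {
    int a = 10;
    int s = ~a;                                    // bit pattern 1111 ... 0101
    unsigned int u = static_cast<unsigned int>(s); // exact same bits, different type

    std::cout << s << '\n';   // -11  (signed interpretation)
    std::cout << u << '\n';   // 4294967285  (unsigned interpretation, with 32-bit int)
    return 0;
}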
The ~ operator performs a ones' complement (bitwise NOT) on its argument, and it does not matter whether the argument is a signed or unsigned integer. It merely flips all the bits, so
0000 0000 0000 1010 (bin) / 10 (dec)
becomes
1111 1111 1111 0101 (bin)
(where, presumably, these numbers are 32 bits wide -- I omitted 16 more 0's and 1's.)
How will cout display the result? It looks at the type. For a signed integer, the most significant bit is the sign bit. Here the result is negative, because the most significant bit of 10 is 0, so it becomes 1 after flipping. To display a negative number's magnitude, take the two's complement: invert all bits, then add 1. For example, -1 (binary 111..111) inverts to 000..000; adding 1 gives 000..001, so it prints as -1.
Applying this to ~10: the bits 111..110101 invert to 000..001010 (which is 10), and adding 1 gives 11, so the value prints as -11.
For an unsigned number, cout doesn't do this (naturally), and so you get a large number: the largest possible unsigned value minus the original number (here 4294967295 - 10 = 4294967285).
In memory, 4294967285 is stored in both cases (the 4294967296 in the question is presumably a typo; that value would need 33 bits). The meaning of this bit pattern depends on which signedness you use:
if it's signed, the number is -11.
if it's unsigned, then it's 4294967285
different interpretations of the same number.
You can reinterpret the same bits as unsigned by casting:
int a = 10;
cout << (unsigned int) ~a << endl;
Try this
// Computes the ones' complement of `number` restricted to its own significant bits,
// e.g. 10 (binary 1010) becomes 5 (0101), unlike ~number which flips all 32 bits.
unsigned int getOnesComplement(unsigned int number){
    unsigned int onesComplement = 1;
    if (number < 1)                                // for 0, treat it as a single bit: complement is 1
        return onesComplement;
    size_t size = sizeof(unsigned int) * 8 - 1;    // index of the most significant bit
    unsigned int oneShiftedToMSB = 1u << size;     // 1u: keep the shift in unsigned arithmetic
    unsigned int shiftedNumber = number;
    for (size_t bitsToBeShifted = 0; bitsToBeShifted < size; bitsToBeShifted++){
        shiftedNumber = number << bitsToBeShifted; // slide the value left until its top bit reaches the MSB
        if (shiftedNumber & oneShiftedToMSB){
            onesComplement = ~shiftedNumber;                     // flip every bit of the left-aligned value
            onesComplement = onesComplement >> bitsToBeShifted;  // slide back; the flipped padding falls off
            break;
        }
    }
    return onesComplement;
}
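A small usage sketch (it assumes the getOnesComplement definition above is in scope in the same file):

#include <iostream>

int main() {
    // 10 is 1010 in binary; its complement within those four bits is 0101 = 5.
    std::cout << getOnesComplement(10) << '\n';  // prints 5
    // Note that this differs from ~10u, which flips all 32 bits.
    std::cout << ~10u << '\n';                   // prints 4294967285
    return 0;
}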
I'm trying to multiply three numbers, but I get a strange result. Why do I get such different results?
unsigned int a = 7;
unsigned int b = 8;
double d1 = -2 * a * b;
double d2 = -2 * (double) a * (double) b;
double d3 = -2 * ( a * b );
// outputs:
// d1 = 4294967184.000000
// d2 = -112.000000
// d3 = 4294967184.000000
In your first example, the number -2 is converted to unsigned int. The multiplication is then done in unsigned arithmetic: the mathematical result -112 wraps around to 2^32 - 112 = 4294967184. This value is finally converted to double for the assignment.
In the second example, all math is done on doubles, leading to the correct result. You will get the same result if you did:
double d3 = -2.0 * a * b;
as -2.0 is a double literal.
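A quick check of that (a sketch; the large value assumes a 32-bit unsigned int):

#include <cstdio>

int main() {
    unsigned int a = 7, b = 8;
    double d1 = -2 * a * b;    // -2 is converted to unsigned int; the product wraps to 4294967184
    double d2 = -2.0 * a * b;  // -2.0 is a double literal; the whole expression is evaluated in double
    std::printf("%f\n%f\n", d1, d2);  // 4294967184.000000 and -112.000000
    return 0;
}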
double is signed, which means that its first bit (the most significant bit, a.k.a. the sign bit) determines whether the number is positive or negative.
unsigned int cannot hold negative values, because it uses that first bit (the most significant bit) to extend the range of non-negative numbers it can express. So in
double d1 = -2 * a * b;
the whole expression (-2 * a * b) is evaluated in unsigned int arithmetic (the type of a and b), producing the binary 1111 1111 1111 1111 1111 1111 1001 0000 (the two's complement of 112, which is 0000 0000 0000 0000 0000 0000 0111 0000). But since the type is unsigned int, this pattern is treated as a very big positive integer (4294967184), because the leading 1 is not treated as a sign bit.
Then that value is converted to double, which is why you see the .000000 printed.
The other example works because you cast a and b to double, so -2 is multiplied with a double; the whole computation is carried out in double arithmetic, and the sign is preserved.
double d3 = -2 * (double) (a * b)
will work as well.
To get a feeling about signed and unsigned, check this
double d1 = -2 * a * b;
Everything on the right hand side is an integral type, so the right hand side will be computed as an integral type. a and b are unsigned, so that dictates the specific type of the result. What about that -2? It's converted to an unsigned int. Negative integers are converted to unsigned integers by adding 2^32 (which, on a two's complement machine, leaves the bit pattern unchanged). That -2 becomes a very large positive unsigned integer.
double d2 = -2 * (double) a * (double) b;
Now the right hand side is mixed integers and floating point numbers, so the right hand side will be computed as a floating point type. What about that -2? It's converted to a double. Now the conversion is straightforward: -2 converted to a double becomes -2.0.
In C and C++ the built-in arithmetic operators are applied to two operands of the same type. A very precise set of rules guides the conversion of one (or both) of the two operands if they initially differ (or are too small).
In this precise case, -2 is by default of type signed int (synonym to int) while a and b are of type unsigned int. In this case, the rules state that -2 should be converted to an unsigned int, and because on your system you have 32-bit int and a two's complement representation, this ends up being 2^32 - 2 (4294967294). This number is then multiplied by a and the result taken modulo 2^32 (4294967282), then by b, modulo 2^32 once again (4294967184).
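The same steps, spelled out in code (a sketch assuming a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int a = 7, b = 8;
    unsigned int step1 = static_cast<unsigned int>(-2); // 2^32 - 2  = 4294967294
    unsigned int step2 = step1 * a;                     // mod 2^32  = 4294967282
    unsigned int step3 = step2 * b;                     // mod 2^32  = 4294967184
    std::cout << step1 << ' ' << step2 << ' ' << step3 << '\n';
    return 0;
}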
It's a weird system really, and it has led to countless bugs. The overflow itself, for example, led to the Linux bug on June 30th this year which hung so many computers around the world. I hear it also crashed a couple of Java systems.
Possible Duplicate:
What happens if I assign a negative value to an unsigned variable?
I'm new to C++ and I want to know how to use unsigned types. For the unsigned int type, I know that it can take values from 0 to 4294967295. But when I initialize an unsigned int as follows:
unsigned int x = -10;
cout << x;
The output is 4294967286, which equals 2^32 - 10. So I want to learn what is happening in memory. What kind of operations are performed during this conversion? Thanks for your answers.
You're encountering wrap around behavior.
Unsigned types are cyclic (signed types, on the other hand, may or may not wrap, because signed overflow is undefined behavior that you shouldn't rely on). That is to say, one less than the minimum possible value wraps around to the maximum possible value. You can demonstrate this yourself with the following snippet:
#include <iostream>
using namespace std;

int main()
{
    unsigned int x = 5;
    for (int i = 0; i < 10; ++i) cout << x-- << endl;
    return 0;
}
You'll notice that after reaching zero, the value of x jumps to 2^32-1, the maximum representable value. Subtracting further acts as expected.
When you subtract 1 from unsigned 0, the bit pattern changes in the following way:
0000 0000 0000 0000 0000 0000 0000 0000 // before (0)
1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
With unsigned numbers, negative numbers are treated like positive numbers subtracted from zero. So (unsigned int) -10 will equal ((unsigned int) 0) - ((unsigned int) 10).
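A one-line check of that claim (a sketch; the printed value assumes a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int from_cast = static_cast<unsigned int>(-10);
    unsigned int from_subtraction = 0u - 10u;
    std::cout << from_cast << ' ' << from_subtraction << '\n'; // both print 4294967286
    return 0;
}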
I like to think about it as an unsigned int being the lowest 32 bits of a higher-precision arbitrary value. Like this:
v imaginary high order bit
1 0000 0000 0000 0000 0000 0000 0000 0000 // before (2^32)
0 1111 1111 1111 1111 1111 1111 1111 1111 // after (2^32 - 1)
The behavior of the unsigned int in these overflow cases is exactly the same as the behavior of the low 8 bits of an unsigned int when you subtract 1 from 256. It makes more sense to look at an unsigned char (1 byte) like this, because the values 0 and 256 are equal if cast to unsigned char, since the limited precision discards the extra bits.
0 0000 0000 0000 0000 0000 0001 0000 0000 // before (256)
0 0000 0000 0000 0000 0000 0000 1111 1111 // after (255)
As others have pointed out, this is called modular arithmetic. Using higher-precision values to help visualize the transitions made when wrapping around works because you mask off the high-order bits. It doesn't matter what they were, so they can be anything; they just get discarded. Unsigned ints are values modulo 2^32, so any multiple of 2^32 equals zero in the space of an unsigned int. That's why I can get away with pretending there's an extra bit on the end.
Modulus operations have their own dedicated operator in case you need to compute them for numbers other than 2^32 in your programs, as used in this statement:
int forty_mod_twelve = 40 % 12;
// value is 4: 4 + n * 12 == 40 for some whole number n
Modulus operations on powers of two (like 2^32) simplify directly to masking off high order bits, and if you take a 64 bit integer and compute it modulo 2^32, the value will be exactly the same as if you had converted it to an unsigned int.
01011010 01011100 10000001 00001101 11111111 11111111 11111111 11111111 // before
00000000 00000000 00000000 00000000 11111111 11111111 11111111 11111111 // after
Programmers like to use this property to speed up programs, because it's easy to chop off some number of bits, but performing a modulus operation is much harder (it's about as hard as doing a division).
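For example, a sketch comparing the two for the 64-bit pattern shown above (reducing modulo 2^32 versus masking off all but the low 32 bits):

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t big = 0x5A5C810DFFFFFFFFull;         // the "before" bit pattern above
    std::uint64_t via_modulus = big % 0x100000000ull;  // modulo 2^32
    std::uint64_t via_mask    = big & 0xFFFFFFFFull;   // keep only the low 32 bits
    std::cout << (via_modulus == via_mask) << '\n';    // prints 1: same value either way
    return 0;
}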
Does that make sense?
This involves the standard integral conversions. Here's the applicable rule. We start with the type of the literal 10:
2.14.2 Integer literals [lex.icon]
An integer literal is a sequence of digits that has no period or exponent part. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits. An octal integer literal (base eight) begins with the digit 0 and consists of a sequence of octal digits. A hexadecimal integer literal (base sixteen) begins with 0x or 0X and consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen. [ Example: the number twelve can be written 12, 014, or 0XC. — end example ]
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
A table follows, the first type is int and it fits. So the literal's type is int.
The unary minus operator is applied, which doesn't change the type. Then the following rule is applied:
4.7 Integral conversions [conv.integral]
A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
Instead of printing the value the way you are, print it in hexadecimal format (sorry, I forget how to do that with cout but I know it's possible). You'll see that the representation is the same for both values.
From your context, an integer is 32 bits (this is not always the case). When using a signed integer, the most significant bit is the sign, not part of the value. When using an unsigned integer, the most significant bit is part of the value.
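For the record, hexadecimal output with cout is done with the std::hex manipulator; a sketch:

#include <iostream>

int main() {
    int s = -10;
    unsigned int u = -10;
    std::cout << std::hex << s << '\n';  // fffffff6 on a platform with 32-bit int
    std::cout << std::hex << u << '\n';  // fffffff6 -- the same representation
    return 0;
}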
I'm just amazed to know that I can't convert signed to unsigned int by casting!
int i = -62;
unsigned int j = (unsigned int)i;
I thought I already knew this since I started to use casts, but I can't do it!
You can convert an int to an unsigned int. The conversion is valid and well-defined.
Since the value is negative, UINT_MAX + 1 is added to it so that the value is a valid unsigned quantity. (Technically, 2^N is added to it, where N is the number of bits used to represent the unsigned type.)
In this case, since int on your platform has a width of 32 bits, 62 is subtracted from 2^32, yielding 4,294,967,234.
Edit: As has been noted in the other answers, the standard actually guarantees that "the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type)". So even if your platform did not store signed ints as two's complement, the behavior would be the same.
Apparently your signed integer -62 is stored in two's complement (Wikipedia) on your platform:
62 as a 32-bit integer written in binary is
0000 0000 0000 0000 0000 0000 0011 1110
To compute the two's complement (for storing -62), first invert all the bits
1111 1111 1111 1111 1111 1111 1100 0001
then add one
1111 1111 1111 1111 1111 1111 1100 0010
And if you interpret this as an unsigned 32-bit integer (as your computer will do if you cast it), you'll end up with 4294967234 :-)
This conversion is well defined and will yield the value UINT_MAX - 61. On a platform where unsigned int is a 32-bit type (most common platforms, these days), this is precisely the value that others are reporting. Other values are possible, however.
The actual language in the standard is
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
i = -62. If you want to convert it to an unsigned representation, it would be 4294967234 for a 32-bit integer.
A simple way would be:
int num = -62;
unsigned int n;
n = num;
cout << n;
Output: 4294967234
with a little help from math
#include <cstdlib>  // abs for int is declared here, not in <math.h>

int main() {
    int a = -1;
    unsigned int b;
    b = abs(a);  // b becomes 1 (the magnitude of a), not the wrapped value 4294967295
}
Since we know that i is an int, you can just go ahead and cast it to unsigned!
This would do the trick:
int i = -62;
unsigned int j = unsigned(i);
From C++14, one can use std::make_unsigned_t (defined in <type_traits>):
auto uint_variable = std::make_unsigned_t<int>(-62); //4294967234
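A complete, compilable version of that one-liner (a sketch):

#include <iostream>
#include <type_traits>

int main() {
    auto uint_variable = std::make_unsigned_t<int>(-62); // unsigned int, value 4294967234
    std::cout << uint_variable << '\n';
    return 0;
}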
I am aware of the two's complement representation of signed values. But how does the binary '10000000' become -128 in decimal (using %d)?
For +64 the binary representation is '01000000'; for -64 it is '11000000', which is the two's complement of '01000000'.
can some one please explain?
Program:
#include <stdio.h>

int main()
{
char ch = 1;
int count = 0;
while(count != 8)
{
printf("Before shift val of ch = %d,count=%d\n",ch,count);
ch = ch << 1;
printf("After shift val of ch = %d,count=%d\n",ch,count);
//printBinPattern(ch);
printf("*************************************\n");
count++;
}
return 0;
}
Output:
Before shift val of ch = 1, count=0
After shift val of ch = 2, count=0
*************************************
...
... /* Output not shown */
Before shift val of ch = 32, count=5
After shift val of ch = 64, count=5
*************************************
Before shift val of ch = 64, count=6
After shift val of ch = -128, count=6
*************************************
Before shift val of ch = -128, count=7
After shift val of ch = 0, count=7
*************************************
Before shift val of ch = 0, count=8
After shift val of ch = 0, count=8
*************************************
Because on your compiler, char means signed char.
Char is just a tiny integer, generally in the range of 0...255 (for unsigned char) or -128...127 (for signed char).
The way to compute the two's complement negative of a number is to "invert the bits and add 1":
128 = "1000 0000". Inverting the bits gives "0111 1111". Adding 1 yields "1000 0000" again, which as a signed 8-bit value is -128.
I am aware of the 2s complement representation of signed values.
Well, obviously you aren't. In two's complement, a 1 followed by all 0s is always the most negative representable number.
The answer is implementation-defined, because whether a plain char is signed or unsigned is implementation-defined.
$3.9.1/1
Objects declared as characters (char) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types.
$5.8/1 -
"The operands shall be of integral or
enumeration type and integral
promotions are performed. The type of
the result is that of the promoted
left operand. The behavior is
undefined if the right operand is
negative, or greater than or equal to
the length in bits of the promoted
left operand."
So when the value of char becomes negative, left shift from thereon has undefined behavior.
That's how it works.
-1 = 1111 1111
-2 = 1111 1110
-3 = 1111 1101
-4 = 1111 1100
...
-126 = 1000 0010
-127 = 1000 0001
-128 = 1000 0000
Two's complement is exactly like unsigned binary representation with one slight change:
The MSB (bit n-1) is redefined to have a value of -2^(n-1) instead of 2^(n-1).
That's why the addition logic is unchanged: because all the other bits still have the same place value.
This also explains the underflow/overflow detection method, which involves checking the carry from bit (n-2) into bit (n-1).
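Here is a sketch of reading an 8-bit pattern with that weighting: the MSB counts as -128 and the remaining bits keep their usual values. (valueOfTwosComplementByte is just a name made up for this illustration.)

#include <iostream>

// Interpret 8 bits as two's complement by giving bit 7 the weight -2^7.
int valueOfTwosComplementByte(unsigned char bits) {
    int value = -128 * ((bits >> 7) & 1);  // the MSB contributes -2^(n-1)
    for (int i = 0; i < 7; ++i)
        value += ((bits >> i) & 1) << i;   // the other bits contribute their normal weights
    return value;
}

int main() {
    std::cout << valueOfTwosComplementByte(0x80) << '\n'; // 1000 0000 -> -128
    std::cout << valueOfTwosComplementByte(0xFF) << '\n'; // 1111 1111 -> -1
    return 0;
}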
There is a pretty simple process for converting a negative two's complement integer value to its positive equivalent.
0000 0001 ; x = 1
1000 0000 ; x <<= 7
The two's complement process takes two steps... first, if the high bit is 1, invert all the bits
0111 1111 ; (-) 127
then add 1
1000 0000 ; (-) 128
Supplying a char to a %d format specifier that expects an int is probably unwise.
Whether an unadorned char is signed or unsigned is implementation-defined. In this case not only is it apparently signed, but also the char argument has been pushed onto the stack as an int-sized object and sign-extended, so that the higher-order bits are all set to the same value as the high-order bit of the original char.
I am not sure whether this is defined behaviour or not without looking it up, but personally I'd have cast the char to an int when formatting it with %d. Not least because some compilers and static analysis tools will trap that error and issue a warning. GCC will do so when -Wformat is used for example.
That is the explanation; if you want a solution (i.e. one that prints 128 rather than -128), then you need to cast to unsigned and mask off the sign-extension bits, as well as use a correctly matching format specifier:
printf("%u", (unsigned)ch & 0xff );