Here is my code
int a = 2147483647;
int b = a << 1;
cout << "a=" << a << ", b=" << b;
The output I am getting is:
a=2147483647, b=-2
Binary representation of a is
0111 1111 1111 1111 1111 1111 1111 1111
Shifting it left by 1 bit will set the sign bit and put a 0 in the LSB. So I think the answer will be negative, with the magnitude reduced by 1, i.e.
-2147483646
But it is giving the result -2. Please explain.
This is because your computer uses two's complement for signed values.
The shifted value, taken as unsigned, is 0xFFFFFFFE, which is -2 in two's complement, not -2147483646.
In C++ the conversion of that out-of-range value back to int is implementation-defined; in C the overflowing signed shift itself is undefined behavior.
BTW, -2147483647 is 0x80000001 on such a CPU.
[expr.shift]/1 The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. ... if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the corresponding unsigned type of the result type, then that value, converted to the result type, is the resulting value; otherwise, the behavior is undefined.
Emphasis mine. Your program exhibits undefined behavior.
Edit: Upon closer consideration, I no longer think it's undefined behavior. 2147483647*2 does fit into unsigned int, "the corresponding unsigned type" of int. Its conversion to int is not undefined, but merely implementation-defined. It's entirely reasonable for an implementation using two's complement to define this conversion so that 2147483647*2 == -2, just reinterpreting the bit pattern, as other answers explained.
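To make that concrete, here is a small sketch of my own (assuming 32-bit int and a two's-complement implementation, neither of which the original code states): it performs the shift in the corresponding unsigned type, where every step is well defined, and then converts back.

#include <iostream>

int main() {
    int a = 2147483647;                                  // 0x7FFFFFFF
    unsigned int u = static_cast<unsigned int>(a) << 1;  // 0xFFFFFFFE, well defined on unsigned
    int b = static_cast<int>(u);                         // out-of-range conversion: implementation-defined
                                                         // before C++20; -2 on a two's-complement machine
    std::cout << "u=" << u << ", b=" << b << '\n';       // typically prints u=4294967294, b=-2
}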
Well, there is a very long story behind this.
Since int is a signed type, the first bit is the sign bit and the whole system is two's complement.
so x = 0b 1111 1111 1111 1111 1111 1111 1111 0111 is x = -9
and for example x = 0b 1111 1111 1111 1111 1111 1111 1111 1111 is x = -1 and x = 0b 0000 0000 0000 0000 0000 0000 0000 0010 is 2
Learn more about two's complement.
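A quick way to see those patterns for yourself (a sketch of my own, using std::bitset purely for display):

#include <bitset>
#include <iostream>

int main() {
    // the int is converted to unsigned; bitset<32> keeps the low 32 bits, i.e. the two's-complement pattern
    std::cout << std::bitset<32>(-9) << "  // -9\n";
    std::cout << std::bitset<32>(-1) << "  // -1\n";
    std::cout << std::bitset<32>(2)  << "  //  2\n";
}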
There are two quotes in my textbook (Patterson, 5th edition, Computer Organization) I don't get:
First:
Two’s complement gets its name from the rule that the unsigned sum of an n-bit number and its n-bit negative is 2^n; hence, the negation or complement of a number x is 2^n - x, or its “two’s complement.”
Say I have a 4 bit number 2 = 0010
and its negative = -2 = 1110
If I add them, I get 10000 (is this overflow? is this bad?), which is 16, right? And 1110 unsigned is 14? Is that what that quote is saying?
and
A third alternative representation to two's complement and sign and magnitude is called one's complement. The negative of a one's complement is found by inverting each bit, from 0 to 1 and from 1 to 0, or x̄. This relation helps explain its name since the complement of x is 2^n - x - 1.
What does that mean? Can you give an example to help me see why the name is this way?
Say we have the binary representation of 2:
2 = 0000 0000 0000 0000 0000 0000 0000 0010
In one's complement, the negation is:
-2 = 1111 1111 1111 1111 1111 1111 1111 1101
Is that the negative? If so, how is it 2^n - x - 1?
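One way to see the 2^n - x - 1 relation is to check it for a small case; here is a sketch of my own with n = 8 and x = 2, so the inversion is confined to 8 bits:

#include <cstdio>

int main() {
    unsigned n = 8, x = 2;
    unsigned inverted = ~x & 0xFFu;             // flip the low 8 bits of 0000 0010 -> 1111 1101
    unsigned formula  = (1u << n) - x - 1;      // 2^n - x - 1 = 256 - 2 - 1
    std::printf("%u %u\n", inverted, formula);  // both print 253, i.e. the same pattern
}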
--------------------
I understand two's complement:
Simply invert every 0 to 1 and every 1 to 0, then add one to the result. This shortcut is based on the observation that the sum of a number and its inverted representation must be 111... (32 of them) 111, which represents -1.
So 2 = 0000 0000 0000 0000 0000 0000 0000 0010
if we negate 2:
1111 1111 1111 1111 1111 1111 1111 1101
and add 1:
1111 1111 1111 1111 1111 1111 1111 1110
which is = -2
But I'm a bit confused about one's complement. Is it just two's complement without the addition of 1?
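For what it's worth, a tiny sketch of my own on a typical two's-complement machine: ~x by itself gives the inverted bits (the one's complement pattern), while ~x + 1 gives the two's-complement negation:

#include <iostream>

int main() {
    int x = 2;
    std::cout << ~x     << '\n';   // bits of 2 inverted; read back as a two's-complement int this is -3
    std::cout << ~x + 1 << '\n';   // invert and add one: the two's-complement negation, -2
}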
I'm new to C++ and I've found something I can't understand. Could anyone provide some help?
For the following code:
int i = -3;
printf("i=%d\n",i);
i = i >> 1;
printf("i >> 1 evaluates to: %d\n", i);
then I got the result:
i=-3
i >> 1 evaluates to: -2
I don't quite understand.
As 3 is coded as (let's keep it simple):
3 : 0000 0011
-3 : 1111 1100
then after the right shift operation, we should have:
-1 : 1111 1110
right? Why did I get -2? (My PC is 64-bit.)
Thanks for any help!
Your mistake is in assuming that because 3 is 00000011, -3 is represented simply by inverting bits (the so-called "one's complement" representation of negative numbers) to get 11111100. And that likewise 00000001 becomes 11111110 when negated. In fact that's not the case—instead your computer seems to be using the almost-universal "two's complement" system in which -3 is represented as 11111101, -2 is 11111110 and -1 is 11111111.
One nice intuition pump for the two's-complement system is to consider a series of increments, and to note that the behavior is somewhat consistent and intuitive regardless of whether you imagine them happening in the bit pattern itself, in the signed representation, or in the unsigned. Let's stick to 8 bits for simplicity (imagine the "9th bit" just getting discarded):
bit pattern   interpreted as...
              signed byte   unsigned byte
11111101          -3            253
11111110          -2            254
11111111          -1            255
00000000           0              0   (wrap-around)
00000001           1              1
When it goes from -1 to 0 I can almost "hear" all those bits flipping over one after the other.
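Here is a sketch of my own that prints that table; it assumes the usual two's-complement narrowing when a byte value above 127 is read back as int8_t (guaranteed only since C++20, but what virtually every compiler does anyway):

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    // walk the increment sequence -3 .. 1 and show each 8-bit pattern read both ways
    for (int v = -3; v <= 1; ++v) {
        std::uint8_t bits = static_cast<std::uint8_t>(v);   // the raw byte pattern (conversion is modular)
        std::cout << std::bitset<8>(bits)
                  << "  signed: "   << static_cast<int>(static_cast<std::int8_t>(bits))
                  << "  unsigned: " << static_cast<int>(bits) << '\n';
    }
}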
Actually, for a 4-byte int, -1 = 0xFFFFFFFF and -3 = 0xFFFFFFFD; showing just the low 16 bits, -1 = 1111 1111 1111 1111b and -3 = 1111 1111 1111 1101b.
So when you right shift, you get 1111 1111 1111 1110b, which is -2.
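A quick check (my own sketch; note that right-shifting a negative value is implementation-defined before C++20, but in practice compilers do an arithmetic shift that copies the sign bit back in from the left):

#include <bitset>
#include <iostream>

int main() {
    int i = -3;
    std::cout << std::bitset<32>(i)      << "  // -3\n";
    std::cout << std::bitset<32>(i >> 1) << "  // sign bit shifted back in, giving -2\n";
    std::cout << (i >> 1) << '\n';       // prints -2 on the usual two's-complement implementations
}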
I'm fairly new to bit manipulation and I'm trying to figure out how (1 << 31) - 1 works.
First I know that 1 << 31 is
10000000000000000000000000000000
and I know it's actually the two's-complement minimum int value, but when I tried to figure out (1 << 31) - 1, I found an explanation stating that it's just
10000000000000000000000000000000 - 1 = 01111111111111111111111111111111
I was almost tempted to believe it since it's really straightforward. But is this what's really happening? If it's not, why does it happen to give the right result?
My original thought was that the real process should be: the two's complement representation of -1 is
11111111111111111111111111111111
then (1 << 31) - 1 =
(1)01111111111111111111111111111111
the leftmost 1 is discarded, and then we have the maximum value of int.
I'm really confused about which one is right.
It's both! 1 << 31 is:
1000 0000 0000 0000 0000 0000 0000 0000
Subtracting 1 gives:
0111 1111 1111 1111 1111 1111 1111 1111
One of the nice features of the two's complement layout of signed numbers is that addition and subtraction are exactly the same operations as for unsigned numbers. So 10000...000 represents the largest negative number in two's complement, -2,147,483,648 in this case, and subtracting 1 from it wraps around to the largest positive number, 2,147,483,647. But because two's complement numbers are arranged so that we can pretend the operand is unsigned, the subtraction itself is uncomplicated: subtracting 1 from 10000...000 simply drops the leading 1 to a 0 and borrows a run of 1s, the same way that in decimal you get a run of 9s: 10000 - 1 = 9999.
It's also true that mathematically, (a - b) is the same as (a + (-b)), so we can do (1 << 31) + (-1) instead:
  1000 0000 0000 0000 0000 0000 0000 0000   (1 << 31)
+ 1111 1111 1111 1111 1111 1111 1111 1111   (-1)
  ---------------------------------------
1 0111 1111 1111 1111 1111 1111 1111 1111
  0111 1111 1111 1111 1111 1111 1111 1111   (truncated to 32 bits)
A 1 is carried out of the high end, and lost once the result is truncated back into a 32-bit integer.
Either way, that pattern, with a single 0 at the high end, then filled by 1s, is the representation of the maximum positive value for a two's complement integer of any width.
There are other ways to generate that pattern if you prefer, such as ~(1 << 31), and (-1 >>> 1) (where >>> means logical shift right) which is agnostic of the width of the integer.
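In C++ those are most safely written in unsigned arithmetic (a plain 1 << 31 overflows a 32-bit signed int); a sketch of mine, assuming 32-bit unsigned int:

#include <cstdio>

int main() {
    unsigned a = (1u << 31) - 1;   // subtract 1 from 1000...0
    unsigned b = ~(1u << 31);      // flip every bit of 1000...0
    unsigned c = ~0u >> 1;         // all ones shifted right once; >> on unsigned is a logical shift
    std::printf("%x %x %x\n", a, b, c);   // all three print 7fffffff
}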
I have a problem: with two's complement you can get the negative of a positive number by inverting the bits and adding one, e.g.
8 Bit
 121            = 0111 1001
 1's complement = 1000 0110
                + 0000 0001
                  ---------
                  1000 0111 --> -121
So now, if we have a -0:
as 8 bits, a zero looks like
0000 0000
so a minus 0 should look like
1111 1111 + 0000 0001
= 1 0000 0000
but that is 512
so I think that I've misunderstood something
To expand on my previous comment on the question:
1111 1111 + 0000 0001 in 8 bits is 0000 0000; the ninth bit is lost because there is no place for it.
And yes, the complement of a negative is a positive:
-121            = 1000 0111
 1's complement = 0111 1000
                + 0000 0001
                  ---------
                  0111 1001 --> 121
Think of the values as a circle: at one point there is 0; adding 1 at a time you go up to the opposite point of the circle (1000 0000, which would be 128 if unsigned but is -128 in 8-bit signed). There the sign has switched, and as you continue to add 1 the absolute value decreases (-127, -126, ...) until you get back to 0 and the circle is completed.
So given a number of bits, you only have that many bits, no more, and if you want the value to be signed you really have only n-1 bits for the magnitude, as the most significant bit is used for the sign (0 -> +; 1 -> -).
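You can watch the circle wrap with a short sketch of my own (int8_t keeps the ring to 256 values; the narrowing back to 8 bits is the usual two's-complement wrap, guaranteed since C++20 and what compilers did before anyway):

#include <cstdint>
#include <cstdio>

int main() {
    std::int8_t x = 125;
    for (int step = 0; step < 6; ++step) {
        std::printf("%d\n", x);               // prints 125, 126, 127, -128, -127, -126
        x = static_cast<std::int8_t>(x + 1);  // x + 1 is computed as int, then wrapped back into 8 bits
    }
}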
1 0000 0000b is 256, not 512. Truncated to 8 bits, it's 0.
This is because with two's complement, zero is zero. There is no positive or negative zero.
Compare this to one's complement or sign-and-magnitude, where positive zero and negative zero are different values.
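A small sketch of mine showing that in 8 bits the negation of zero lands back on zero, because the carried-out ninth bit is discarded:

#include <iostream>

int main() {
    unsigned char zero     = 0;
    unsigned char inverted = static_cast<unsigned char>(~zero);        // 1111 1111
    unsigned char neg      = static_cast<unsigned char>(inverted + 1); // 1 0000 0000 truncated to 0000 0000
    std::cout << static_cast<int>(neg) << '\n';                        // prints 0: no separate -0
}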
I wrote this piece of code just to see what would happen if I put a negative integer into an unsigned integer array.
#include <iostream>
int main()
{
    using namespace std;
    unsigned int array[4];
    array[0] = 4;
    array[1] = 4;
    array[2] = 2;
    array[3] = -2;
    cout << array[0] + array[1] + array[2] + array[3] << endl;
    unsigned int b;
    b = -2;
    cout << b << endl;
    return 0;
}
I was expecting integer overflow to occur in both cases. However, only in the second case did that actually happen. In the first case, everything behaved as if it were an ordinary integer array, not an unsigned integer array. So what exactly is happening that causes this anomalous behaviour? My compiler is gcc 4.8, in case that's of any importance. Thank you for your help. EDIT: Here's the output on my computer:
8
4294967294
There is an integer overflow. Here is the reason (the numbers are converted to unsigned int):
1111 1111 1111 1111 1111 1111 1111 1110 // -2
+0000 0000 0000 0000 0000 0000 0000 0100 //+ 4
-------------------------------------------
0000 0000 0000 0000 0000 0000 0000 0010 //= 2
+0000 0000 0000 0000 0000 0000 0000 0100 //+ 4
-------------------------------------------
0000 0000 0000 0000 0000 0000 0000 0110 //= 6
+0000 0000 0000 0000 0000 0000 0000 0010 //+ 2
-------------------------------------------
0000 0000 0000 0000 0000 0000 0000 1000 //= 8 -> the result
when you do (assuming unsigned int is uint32_t):
array[0] = 4;
array[1] = 4;
array[2] = 2;
array[3] = -2; // You store 4294967294 here
And here array[0] + array[1] + array[2] + array[3] is equal to 4294967304, which doesn't fit in a uint32_t (it would be 0x1 0000 0008), so the result wraps to 8.
For signed integers, the most significant bit is used to hold the sign. So in your case, when you assign a negative integer to an unsigned integer, that bit is taken up to represent the value rather than the sign.
Negative numbers are usually represented in 2's complement form. So
11111110 is interpreted as −2 if signed
11111110 is interpreted as 254 if unsigned
Converting -2 to unsigned int results in the value 4294967294 (since unsigned int is 32 bits in the C++ implementation you're using).
unsigned int arithmetic is carried out modulo 4294967296 (or in general UINT_MAX+1). Hence in unsigned int, 4 + 4 + 2 + 4294967294 is 8.
Technically according to the standard this is not called "overflow", because the standard defines the result to depend only on the value of UINT_MAX. Overflow is the undefined behavior when signed integer arithmetic exceeds its bounds.
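The same modular arithmetic spelled out in a sketch of mine (assuming 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int d = -2;                    // converted modulo 2^32: 4294967294
    unsigned int sum = 4u + 4u + 2u + d;    // 4294967304 mod 4294967296
    std::cout << d << '\n' << sum << '\n';  // prints 4294967294, then 8
}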
For a signed integer, bit 31 is treated as the sign bit (assuming a 4-byte int). For unsigned integers there is no sign bit at all, i.e. every bit contributes to the absolute value.
You're seeing the result of (defined) unsigned integer overflow. Your -2 value becomes a very large unsigned integer, which when added to another unsigned integer, causes an overflow (the result is larger than the largest possible unsigned int value) with the effect that the result is 2 smaller than the other unsigned integer.
Eg:
unsigned int a = -2;
unsigned int b = 4;
unsigned int c = a + b; // result will be 2!