How does std::cout print negative zero in a ones-complement system? - c++

On a ones-complement platform, what would the following code print?
#include <iostream>
int main() {
    int i = 1, j = -1;
    std::cout << i + j << std::endl;
    return 0;
}
I would suspect it would print "0" instead of "-0", but I can't seem to find anything authoritative.
Edit: To clarify, I am interested in how -0 would be printed; several people have suggested that, in practice, a ones'-complement implementation might not generate a negative zero with the above code.
In those cases, the following has been suggested to actually generate a -0:
#include <iostream>
int main() {
    std::cout << ~0 << std::endl;
    return 0;
}
The question still remains: what will this print?

First of all, just to clarify things: crafting a negative zero using bitwise operations and then using the resulting value is not portable. That said, nothing in the documentation of fprintf (and thus of std::basic_ostream::operator<<(int)) specifies whether the sign bit in the representation of int corresponds to a padding bit or to an actual value bit in the representation of unsigned.
In conclusion, this is unspecified behaviour.

Indeed, adding n to -n should give you a negative zero. In practice, though, the -0 is never generated, because ones'-complement hardware typically performs addition with a complementing subtractor (the second argument is complemented and subtracted from the first).
(The idiomatic way of getting a signed floating point zero doesn't apply here since you can't divide an integer by zero).

Looking through the glibc source code, I found these lines in vfprintf.c:
532 is_negative = signed_number < 0; \
533 number.word = is_negative ? (- signed_number) : signed_number; \
534 \
535 goto LABEL (number); \
...
683 if (is_negative) \
684 outchar (L_('-')); \
So it would appear that the condition is signed_number < 0, which would be false for a -0.
As @Ysc mentioned, nothing in the documentation specifies how -0 should be printed, so a different libc implementation (on a ones'-complement platform) may yield a different result.
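To illustrate (this is not glibc's actual code, just a hypothetical sketch of the logic in the lines quoted above): the sign test is an ordinary value comparison, which a ones'-complement -0 would fail, since -0 compares equal to 0.
#include <cstdio>

// Sketch of the quoted vfprintf logic: '-' is emitted only when
// signed_number < 0. A ones'-complement negative zero compares equal
// to 0, so this test is false and no sign would be printed.
void print_int(int signed_number) {
    bool is_negative = signed_number < 0;
    unsigned long word = is_negative
        ? -static_cast<unsigned long>(signed_number)
        : static_cast<unsigned long>(signed_number);
    if (is_negative)
        std::putchar('-');
    std::printf("%lu\n", word);
}

int main() {
    print_int(-42);  // prints "-42"
    print_int(0);    // prints "0"; a ones'-complement -0 would take this path too
}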

From the theoretical point of view of ones' complement: since zero is defined as (+/-)0, there are two bit patterns for it. With 4-bit values, zero is either 0000 (+0) or 1111 (-0). As a consequence, a correction (the end-around carry) is needed whenever an addition or subtraction crosses zero.
For example, the operation -2 + 6 = 4 is calculated as follows:
  1101 (-2)
+ 0110 (+6)
------
  1100 (carries)
======
  0011 (raw sum)
As you can see from the bit operation, the raw result is incorrect; it is only an intermediate result. To decide whether a +1 correction is needed, look at the carry row: if its leftmost bit (the leading 1 of 1100 here) is one, we have to add +1, which yields the correct result 0100 (+4).
If we have a look at your example:
  0001 (+1)
+ 1110 (-1)
------
  0000 (carries)
======
  1111 (-0)
We see that the result is -0, and this is the final result, because the leftmost carry bit is 0.
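To make the end-around-carry rule concrete, here is a small simulation of 4-bit ones'-complement addition. This is my own sketch using ordinary unsigned arithmetic, not code for an actual ones'-complement machine:
#include <bitset>
#include <cstdint>
#include <iostream>

// Simulate 4-bit ones'-complement addition with end-around carry.
// Operands and result are raw 4-bit patterns stored in a uint8_t.
std::uint8_t ones_complement_add4(std::uint8_t a, std::uint8_t b) {
    unsigned sum = (a & 0xF) + (b & 0xF);
    if (sum & 0x10)               // carry out of the top bit?
        sum = (sum & 0xF) + 1;    // end-around carry: add it back in
    return sum & 0xF;
}

int main() {
    std::uint8_t neg2 = 0b1101;  // -2 in 4-bit ones' complement
    std::uint8_t six  = 0b0110;  // +6
    std::uint8_t one  = 0b0001;  // +1
    std::uint8_t neg1 = 0b1110;  // -1

    // -2 + 6: the raw sum overflows, and the end-around carry fixes it up.
    std::cout << std::bitset<4>(ones_complement_add4(neg2, six)) << '\n';  // 0100 (+4)
    // 1 + -1: no carry out, so the raw sum 1111 (-0) stands.
    std::cout << std::bitset<4>(ones_complement_add4(one, neg1)) << '\n';  // 1111 (-0)
}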

Related

Why does ~n give -(n+1)?

I wanted to test what happens when I write this code, but I cannot explain the following case:
Input: 5
Output: -6
#include <iostream>
int lastBit(int n){ return ~(n); }
int main() { std::cout << lastBit(5); }
Computers express negative numbers in quite a specific way. Values are always stored as a series of bits, and there is no way of introducing a literal negative sign, so this has to be solved differently: one of the bits plays the role of the sign.
But this is not all: the system must be designed to handle arithmetic properly (and, ideally, the same way as for positive numbers).
So for instance 0 == 0b00000000. If you subtract 1 from 0, you get -1, but from the binary perspective, due to "binary underflow", 0b00000000 - 0b00000001 == 0b11111111; hence 0b11111111 == -1.
If you then subtract 1 from -1, you get 0b11111111 - 0b00000001 == 0b11111110 == -2. But 2 == 0b00000010, which shows why -2 != ~2 (and the same rule applies to subsequent values).
The very short, but maybe more intuitive, answer might be: -5 != ~5 because there is only one zero in binary (i.e. 0 == -0), so there is always one more negative value than there are positive ones.
Not on all systems, but on systems that use two's complement for signed values. By definition there, the binary representation of -n, where n is a positive integer, is ~n + 1, which allows signed and unsigned addition to be the same operation.
Until C++20, the result of ~n for signed n was implementation-defined, because the representation of negative values depended on platform and compiler. Since C++20, signed integers are required to behave as if two's complement is used, so ~n is guaranteed to equal -(n+1).
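A quick way to convince yourself of the identity ~n == -(n + 1) on a two's-complement machine (which, since C++20, means any conforming implementation):
#include <iostream>

int main() {
    // On two's complement, ~n == -(n + 1) for every int n
    // (for n == INT_MAX, n + 1 itself would overflow, so stay clear of it).
    for (int n : {0, 1, 5, 42, -1, -100}) {
        std::cout << "~" << n << " = " << ~n
                  << ", -(n+1) = " << -(n + 1) << '\n';
    }
}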
I found out the following (shown with 4 bits for brevity):
5 = 0101
Therefore, ~(5) = 1010
The 1 in the most significant bit denotes negative (-). Since this is a two's complement machine, the magnitude is obtained by inverting and adding one: ~1010 = 0101, and 0101 + 1 = 0110 = 6.
Therefore, the output is -6.

Why is the binary equivalent calculation incorrect?

I wrote the following program to print the binary equivalent of an integer, assuming int is 4 bytes (I checked that it is on my system). But the output doesn't come out right. The code is:
#include <iostream>
#include <iomanip>
using namespace std;

void printBinary(int k) {
    for (int i = 0; i <= 31; i++) {
        if (k & ((1 << 31) >> i))
            cout << "1";
        else
            cout << "0";
    }
}

int main() {
    printBinary(12);
}
Where am I getting it wrong?
The problem is in 1<<31. Because 2^31 cannot be represented with a 32-bit signed integer (range -2^31 to 2^31 - 1), the result is undefined [1].
The fix is easy: 1U<<31.
[1]: The behavior is implementation-defined since C++14.
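With that fix applied, a corrected version of the function might look like this (a sketch assuming a 32-bit int):
#include <iostream>

// Same loop as the question's printBinary, but the mask is built from an
// unsigned 1, so neither the shift into bit 31 nor the right shifts ever
// touch a sign bit.
void printBinary(int k) {
    for (int i = 0; i <= 31; i++) {
        if (static_cast<unsigned>(k) & ((1U << 31) >> i))
            std::cout << "1";
        else
            std::cout << "0";
    }
    std::cout << '\n';
}

int main() {
    printBinary(12);  // 00000000000000000000000000001100
}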
This expression is incorrect:
if(k & ((1<<31)>>i))
int is a signed type, so when you shift 1 31 times, it becomes the sign bit on your system. After that, shifting the result right i times sign-extends the number, meaning that the top bits remain 1s. You end up with a sequence that looks like this:
80000000 // 10000...00
C0000000 // 11000...00
E0000000 // 11100...00
F0000000 // 11110...00
F8000000
FC000000
...
FFFFFFF8
FFFFFFFC
FFFFFFFE // 11111..10
FFFFFFFF // 11111..11
To fix this, replace the expression with 1 & (k>>(31-i)). This way you would avoid undefined behavior* resulting from shifting 1 to the sign bit position.
* C++14 changed the definition so that shifting 1 31 times to the left in a 32-bit int is no longer undefined (Thanks, Matt McNabb, for pointing this out).
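For completeness, the same loop with that replacement (note that right-shifting a negative k is itself implementation-defined before C++20, though it extracts the expected bits on the usual arithmetic-shift platforms):
#include <iostream>

// Alternative fix from the answer above: shift k right instead of
// shifting the mask into the sign-bit position.
void printBinary(int k) {
    for (int i = 0; i <= 31; i++)
        std::cout << (1 & (k >> (31 - i)));
    std::cout << '\n';
}

int main() {
    printBinary(12);   // 00000000000000000000000000001100
    printBinary(-12);  // 11111111111111111111111111110100 on two's complement
}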
A typical internal memory representation of a signed integer value looks like this: the most significant bit (the first from the left) is the sign bit, and in signed numbers (like int) it represents whether the number is negative or not.
When you shift in additional bits, sign extension is performed to preserve the number's sign. This is done by appending digits to the most significant side of the number (following a procedure that depends on the particular signed number representation used).
In unsigned numbers the first bit from the left is just the MSB of the represented value, so no sign extension is performed when you shift.
Note: the enumeration of the bits starts from 0, so 1 << 31 replaces your sign bit, and after that every right shift (>>) results in sign extension. (as pointed out by @dasblinkenlight)
So, the simple solution to your problem is to make the number unsigned (this is what U does in 1U << 31) before you start the bit manipulation. (as pointed out by @Yu Hao)
For further reading, see signed number representations and two's complement (as it's the most common).
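The difference is easy to see side by side by right-shifting the same bit pattern as signed and as unsigned (a small sketch; the signed behavior shown is what the usual arithmetic-shift platforms do):
#include <bitset>
#include <climits>
#include <iostream>

int main() {
    int s = INT_MIN;        // bit pattern 1000...0 on two's complement
    unsigned u = 1U << 31;  // same bit pattern, but unsigned

    // The signed right shift sign-extends (drags copies of the sign bit
    // in from the left); the unsigned shift fills with zeroes.
    std::cout << std::bitset<32>(s >> 4) << '\n';  // 11111000...0
    std::cout << std::bitset<32>(u >> 4) << '\n';  // 00001000...0
}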

What exactly does ~ do when used in bitwise operations?

What is the difference between ~i and INT_MAX^i
Both give the same number in binary, but when we print them the output is different, as shown in the code below.
#include <bits/stdc++.h>
using namespace std;

void binary(int x)
{
    int i = 30;
    while (i >= 0)
    {
        if (x & (1 << i))
            cout << '1';
        else
            cout << '0';
        i--;
    }
    cout << endl;
}

int main() {
    int i = 31;
    int j = INT_MAX;
    int k = j ^ i;
    int g = ~i;
    binary(j);
    binary(i);
    binary(k);
    binary(g);
    cout << k << endl << g;
    return 0;
}
I get the output as
1111111111111111111111111111111
0000000000000000000000000011111
1111111111111111111111111100000
1111111111111111111111111100000
2147483616
-32
Why are k and g different?
k and g are different: the most significant bit differs. You do not display it, since you show only 31 bits. In k the most significant bit is 0 (the result of XOR-ing two 0s); in g it is 1, the result of negating the 0 sign bit of i.
Your test is flawed. If you output all of the integer's bits, you'll see that the values are not the same.
You'll also now see that NOT and XOR are not the same operation.
Try starting at i = 31 in your binary function; it is not printing the whole number. You will then see that k and g are not the same: g has the 'negative' flag (1) as its most significant bit.
Integers use the 32nd bit to indicate whether the number is positive or negative, and you are only printing 31 bits.
~ is bitwise NOT; ~11100 = 00011
^ is bitwise XOR; a result bit is 1 if exactly one of the two input bits is 1
~ is bitwise NOT, it will flip all the bits
Example
a: 010101
~a: 101010
^ is XOR, it means that a bit will be 1 iff one bit is 0 and the other is 1, otherwise it will set to 0.
a: 010101
b: 001100
a^b: 011001
You want UINT_MAX, and you want to use unsigned ints. INT_MAX simply does not have the sign bit set. ~ will flip all the bits, but ^ will leave the sign bit alone because it is not set in INT_MAX.
This statement is false: "~i and INT_MAX^i ... both give the same no. in binary".
The reason it appears that they give the same number in binary is that you printed out only 31 of the 32 bits of each number; you did not print the sign bit.
The sign bit of INT_MAX is 0 (indicating a positive signed integer), and it is not changed by INT_MAX^i, because the sign bit of i is also 0 and the XOR of two zeros is 0. The sign bit of ~i is 1, because the sign bit of i was 0 and the ~ operation flipped it.
If you printed all 32 bits you would see this difference in the binary output.
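A quick way to see the difference is to print all 32 bits; a minimal sketch using std::bitset instead of the question's binary() function:
#include <bitset>
#include <climits>
#include <iostream>

int main() {
    int i = 31;
    int j = INT_MAX;

    // std::bitset prints all 32 bits, including the sign bit that the
    // question's binary() function dropped.
    std::cout << std::bitset<32>(j ^ i) << '\n';  // 01111111111111111111111111100000
    std::cout << std::bitset<32>(~i)    << '\n';  // 11111111111111111111111111100000
}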

Please explain in detail the logic behind the output of x & (~0 << n)

I have a doubt about the logic behind x & (~0 << n).
First of all, I could not work out the meaning of ~0. When I tried this in Java, it showed -1. How can we represent -1 in binary and differentiate it from the positive numbers?
The most common way (and the way that Java uses) to represent negative numbers is called Two's Complement. As mentioned in my comment, one way to calculate the negative in this system is -x = ~(x - 1). Another, equivalent, way is -x = ~x + 1.
For example, in 8bit,
00000001 // 1
00000000 // 1 - 1
11111111 // ~(1 - 1) = ~0 = -1
Adding one to 11111111 wraps around to zero, so it makes sense to call "the number such that adding one to it results in zero" minus one.
The numbers with the highest bit set are regarded as negative.
The wikipedia article I linked to contains more information.
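Both formulas are easy to verify; a minimal check on a two's-complement machine:
#include <iostream>

int main() {
    // Both two's-complement negation identities hold for these values.
    for (int x : {1, 7, 12345}) {
        std::cout << -x << " == " << ~(x - 1)
                  << " == " << (~x + 1) << '\n';
    }
}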
As for x & (~0 << n): ~0 is just a way of writing "all ones" (the fact that it also happens to be -1 is really irrelevant for this use). For most n, "all ones" shifted left by n is a run of ones followed by n zeroes.
In total, that expression clears the lower n bits of x.
At least, for 0 <= n <= 31.
a << n in Java, where a is an int, is equivalent to a << (n & 31).
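For example, clearing the lowest n bits (a C++ rendition of the same expression; unsigned is used here because left-shifting a negative value is undefined in C++ before C++20, whereas in Java ~0 << n is fine):
#include <bitset>
#include <iostream>

int main() {
    unsigned x = 0b10110111;
    int n = 4;

    // ~0u is all ones; shifting left by n leaves n zeroes at the bottom,
    // so the AND clears the lowest n bits of x.
    std::cout << std::bitset<8>(x)              << '\n';  // 10110111
    std::cout << std::bitset<8>(x & (~0u << n)) << '\n';  // 10110000
}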
Every bit of the value 0 is 0, and (in two's complement) every bit of -1 is 1, so the bitwise negation of 0 is -1.
Hence ~0 is -1.
As for the rest of the question: what are you actually asking?

Using tilde to get MAX value for int

I tried to get the MAX value for int using tilde, but the output is not what I expected.
When I run this:
#include <stdio.h>
#include <limits.h>

int main() {
    int a = 0;
    a = ~a;
    printf("\nMax value: %d", -a);
    printf("\nMax value: %d", INT_MAX);
    return 0;
}
I get output:
Max value: 1
Max value: 2147483647
I thought, for example, if I have 0000 in RAM (I know that the first bit shows whether the number is positive or negative), then after ~, 0000 => 1111, and after negation, -(1111) => 0111, I would get the MAX value.
You have a 32-bit two's complement system, so a = 0 is straightforward. ~a is 0xffffffff. In a 32-bit two's complement representation, 0xffffffff is -1. Basic algebra says that -(-1) is 1, so that's where your first printout comes from. INT_MAX is 0x7fffffff.
Your logical error is in the statement "-(1111) => 0111", which is not true. The arithmetic negation of a two's complement number x is ~x + 1. For your example:
~x + 1 = ~(0xffffffff) + 1
= 0x00000000 + 1
= 0x00000001
Is there a reason you can't use std::numeric_limits<int>::max()? Much easier and impossible to make simple mistakes.
In your case, assuming 32 bit int:
int a = 0; // a = 0
a = ~a; // a = 0xffffffff = -1 in any twos-comp system
a = -a; // a = 1
So that math is an incorrect way of computing the max, and I can't see a formulaic way to do it portably: just use numeric_limits (or INT_MAX if you're in a C-only codebase).
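For reference, the portable spellings (nothing beyond the standard headers is needed):
#include <climits>
#include <iostream>
#include <limits>

int main() {
    // Portable ways to get the maximum int: no bit tricks required.
    std::cout << std::numeric_limits<int>::max() << '\n';
    std::cout << INT_MAX << '\n';  // the C way, from <climits>
}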
Your trick of using '~' to get maximum value works with unsigned integers. As others have pointed out, it doesn't work for signed integers.
Your posting shows an int which is equivalent to signed int. Try changing the type to unsigned int and see what happens.
There is no formula to compute the max value of a signed integer type in C. You simply must use the INT_MAX, etc. macros from limits.h and stdint.h.
In two's complement, binary 1...1111 always represents -1, and simple math says -1 * -1 = 1!
Always remember there is just one zero: 0...0000. If negation really were just swapping the MSB, then 10...0000 would be -0, which can't be true (0 = -0 in math, but the two binary patterns would be different).
Getting the negative value of a number isn't just about swapping the MSB.
It's not quite as straightforward as the top-bit indicating the sign. If it were, you could have both +0 and -0. You should read up on two's complement.
The correct answer is
max = (~0) >> 1;
provided the shift does NOT do sign extension. C and C++ have no >>> operator like Java's, so the shift has to be done on an unsigned value instead.
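A minimal check of that approach in C++ (assuming a 32-bit int and two's complement):
#include <climits>
#include <iostream>

int main() {
    // ~0u is all ones; the unsigned right shift is logical, so the top
    // bit becomes 0 and the result is the INT_MAX pattern.
    int max = static_cast<int>(~0u >> 1);
    std::cout << (max == INT_MAX) << '\n';  // 1
}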
In two's complement notation 111111... is -1; now, the unary minus operator does not simply flip the sign bit (otherwise it would produce strange results in every normal context) but correctly computes the opposite of the number, i.e. +1.
If you want to change the MSB you could use bitwise operators to simply set it to zero. Notice, however, that this way of finding the maximum value for the int type is not portable, since you are making assumptions about how the number is represented that are not required by the standard.