What is the difference between ~i and INT_MAX^i
Both appear to give the same number in binary, but when I print the numbers the output is different, as shown in the code below:
#include <bits/stdc++.h>
using namespace std;

void binary(int x)
{
    int i = 30;
    while (i >= 0)
    {
        if (x & (1 << i))
            cout << '1';
        else
            cout << '0';
        i--;
    }
    cout << endl;
}

int main() {
    int i = 31;
    int j = INT_MAX;
    int k = j ^ i;
    int g = ~i;
    binary(j);
    binary(i);
    binary(k);
    binary(g);
    cout << k << endl << g;
    return 0;
}
I get the output as
1111111111111111111111111111111
0000000000000000000000000011111
1111111111111111111111111100000
1111111111111111111111111100000
2147483616
-32
Why are k and g different?
k and g are different - the most significant bit differs, and you do not display it since you show only 31 bits. In k the most significant bit is 0 (the result of XORing two 0s). In g it is 1, the result of flipping the 0 that is the most significant bit of i.
Your test is flawed. If you output all of the integer's bits, you'll see that the values are not the same.
You'll also now see that NOT and XOR are not the same operation.
Try starting i at 31 in your binary function; it is not printing the whole number. You will then see that k and g are not the same: g has the sign bit (the 'negative' flag) set to 1.
Integers use the 32nd bit to indicate if the number is positive or negative. You are only printing 31 bits.
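For reference, here is a version of binary() that prints all 32 bits (a minimal sketch; viewing the bits through an unsigned avoids any surprises when shifting):

void binary(int x)
{
    unsigned int u = x;            // same bit pattern, viewed as unsigned
    for (int i = 31; i >= 0; i--)
        cout << ((u >> i) & 1u);   // print bit i, sign bit first
    cout << endl;
}

With this change, k prints as 01111111111111111111111111100000 and g as 11111111111111111111111111100000.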
~ is bitwise NOT; ~11100 = 00011
^ is bitwise XOR: a bit is 1 if exactly one of the two input bits is 1
~ is bitwise NOT, it will flip all the bits
Example
a: 010101
~a: 101010
^ is XOR: a result bit is 1 iff one input bit is 0 and the other is 1; otherwise it is set to 0.
a: 010101
b: 001100
a^b: 011001
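Here is a small demo of both operators using the values above (a sketch that uses std::bitset only for printing):

#include <bitset>
#include <iostream>
int main() {
    unsigned a = 0b010101, b = 0b001100;
    std::cout << std::bitset<6>(~a)    << '\n';   // 101010: every bit flipped
    std::cout << std::bitset<6>(a ^ b) << '\n';   // 011001: 1 where the inputs differ
}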
You want UINT_MAX, and you want to use unsigned ints. INT_MAX does not have the sign bit set: ~ will flip all the bits, but ^ with INT_MAX will leave the sign bit alone, because it is not set in INT_MAX.
This statement is false:
~i and INT_MAX^i ... Both give the same no. in binary
The reason it appears that they give the same number in binary
is because you printed out only 31 of the 32 bits of each number.
You did not print the sign bit.
The sign bit of INT_MAX is 0 (indicating a positive signed integer)
and it is not changed during INT_MAX^i
because the sign bit of i is also 0,
and the XOR of two zeros is 0.
The sign bit of ~i is 1 because the sign bit of i was 0 and the
~ operation flipped it.
If you printed all 32 bits you would see this difference in the binary output.
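A quick way to see all 32 bits, including the sign bit, is std::bitset (a minimal sketch):

#include <bitset>
#include <climits>
#include <iostream>
int main() {
    int i = 31;
    std::cout << std::bitset<32>(INT_MAX ^ i) << '\n';   // 01111111111111111111111111100000
    std::cout << std::bitset<32>(~i)          << '\n';   // 11111111111111111111111111100000
}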
Related
I saw the following line of code in C:
int mask = ~0;
I have printed the value of mask in C and C++. It always prints -1.
So I do have some questions:
Why assigning value ~0 to the mask variable?
What is the purpose of ~0?
Can we use -1 instead of ~0?
It's a portable way to set all the binary bits in an integer to 1 bits without having to know how many bits are in the integer on the current architecture.
C and C++ allow 3 different signed integer formats: sign-magnitude, one's complement and two's complement
~0 will produce all-one bits regardless of the sign format the system uses. So it's more portable than -1
You can add the U suffix (i.e. -1U) to generate an all-one bit pattern portably [1]. However, ~0 states the intention more clearly: invert all the bits in the value 0, whereas -1 says that a value of minus one is needed, not its binary representation.
[1] Because unsigned operations are always reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
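For instance, a minimal sketch of the mask idea (the printed values assume a 32-bit int):

#include <iostream>
int main() {
    unsigned int mask = ~0u;                           // all bits set, whatever the int width
    std::cout << std::hex << mask << '\n';             // ffffffff on a 32-bit int
    std::cout << std::hex << (mask & 0xabcdu) << '\n'; // abcd: mask & x == x
}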
~0 on a 2's complement platform (which is assumed there) gives you -1, but writing -1 directly is forbidden by the rules (only the integer constants 0..255 and the operators !, ~, &, ^, |, +, << and >> are allowed).
You are studying a coding challenge with a number of restrictions on operators and language constructions to perform given tasks.
The first problem is return the value -1 without the use of the - operator.
On machines that represent negative numbers with two's complement, the value -1 is represented with all bits set to 1, so ~0 evaluates to -1:
/*
* minusOne - return a value of -1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 2
* Rating: 1
*/
int minusOne(void) {
    // ~0 = 111...111 = -1
    return ~0;
}
Other problems in the file are not always implemented correctly. The second problem, returning a boolean value representing whether an int value would fit in a 16-bit signed short, has a flaw:
/*
* fitsShort - return 1 if x can be represented as a
* 16-bit, two's complement integer.
* Examples: fitsShort(33000) = 0, fitsShort(-32768) = 1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 8
* Rating: 1
*/
int fitsShort(int x) {
    /*
     * After left shift 16 and right shift 16, the top 16 bits of x are 000...0 or 111...1,
     * so if x remains the same after the shifts, x can be represented as a 16-bit value.
     */
    return !(((x << 16) >> 16) ^ x);
}
Left shifting a negative value, or a number whose shifted value is beyond the range of int, has undefined behavior, and right shifting a negative value is implementation-defined, so the above solution is incorrect (although it is probably the expected solution).
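For comparison, a straightforward range check without the challenge's operator restrictions (a sketch; not a legal answer under the puzzle's rules):

#include <cstdint>
bool fitsShort(int32_t x) {
    return x >= INT16_MIN && x <= INT16_MAX;   // -32768 .. 32767
}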
Loooong ago this was how you saved memory on extremely limited equipment such as the 1K ZX 80 or ZX 81 computer. In BASIC, you would
LET X = NOT PI
rather than
LET X = 0
Since numbers were stored as 4-byte floating points, the latter takes 2 bytes more than the NOT PI alternative, where each of NOT and PI takes up a single byte.
There are multiple ways of encoding numbers across computer architectures. When using 2's complement this is always true: ~0 == -1. On the other hand, some computers use 1's complement for encoding negative numbers, and for those the above is untrue, because there ~0 == -0. Yup, 1's complement has a negative zero, and that is why it is not very intuitive.
So, to your questions:
~0 is assigned to mask so that all the bits in mask equal 1, making mask & x == x
~0 is used to make all bits equal to 1 regardless of the platform used
you can use -1 instead of ~0 if you are sure that your computer platform uses 2's complement number encoding
My personal thought - make your code as platform-independent as you can. The cost is relatively small and the code becomes fail-proof.
I wrote the following program to output the binary equivalent of an integer, taking int to be 4 bytes (I checked that int on my system is 4 bytes). But the output doesn't come out right. The code is:
#include <iostream>
#include <iomanip>
using namespace std;

void printBinary(int k) {
    for (int i = 0; i <= 31; i++) {
        if (k & ((1 << 31) >> i))
            cout << "1";
        else
            cout << "0";
    }
}

int main() {
    printBinary(12);
}
Where am I getting it wrong?
The problem is in 1<<31. Because 2^31 cannot be represented with a 32-bit signed integer (range −2^31 to 2^31 − 1), the result is undefined [1].
The fix is easy: 1U<<31.
[1]: The behavior is implementation-defined since C++14.
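With that fix applied, the test inside the loop is well defined, since unsigned shifts fill with zeroes (a sketch of the whole function):

#include <iostream>
void printBinary(int k) {
    for (int i = 0; i <= 31; i++) {
        if (k & ((1U << 31) >> i))   // 1U is unsigned, so << 31 and >> i are well defined
            std::cout << "1";
        else
            std::cout << "0";
    }
}
int main() {
    printBinary(12);   // 00000000000000000000000000001100
}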
This expression is incorrect:
if(k & ((1<<31)>>i))
int is a signed type, so when you shift 1 31 times, it becomes the sign bit on your system. After that, shifting the result right i times sign-extends the number, meaning that the top bits remain 1s. You end up with a sequence that looks like this:
80000000 // 10000...00
C0000000 // 11000...00
E0000000 // 11100...00
F0000000 // 11110...00
F8000000
FC000000
...
FFFFFFF8
FFFFFFFC
FFFFFFFE // 11111..10
FFFFFFFF // 11111..11
To fix this, replace the expression with 1 & (k>>(31-i)). This way you would avoid undefined behavior* resulting from shifting 1 to the sign bit position.
* C++14 changed the definition so that shifting 1 31 times to the left in a 32-bit int is no longer undefined (Thanks, Matt McNabb, for pointing this out).
A typical internal memory representation of a signed integer value uses the most significant bit (the first from the left) as the sign bit, and in signed numbers (like int) it represents whether the number is negative or not.
When you right-shift a signed number, sign extension is performed to preserve its sign. This is done by filling in copies of the sign bit on the most significant side of the number (following a procedure dependent on the particular signed number representation used).
In unsigned numbers the first bit from the left is just the MSB of the represented number, so when you right-shift, no sign extension is performed.
Note: the enumeration of the bits starts from 0, so 1 << 31 lands exactly on your sign bit, and after that every right shift >> of the result performs sign extension (as pointed out by @dasblinkenlight).
So the simple solution to your problem is to make the number unsigned (this is what U does in 1U << 31) before you start the bit manipulation (as pointed out by @Yu Hao).
For further reading, see signed number representations and two's complement (as it's the most common).
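Putting that together, a corrected printBinary using the replacement expression suggested above (a sketch):

#include <iostream>
void printBinary(int k) {
    for (int i = 0; i <= 31; i++)
        std::cout << (1 & (k >> (31 - i)));   // test bit 31-i; never shifts a 1 into the sign bit
    std::cout << '\n';
}
int main() {
    printBinary(12);   // 00000000000000000000000000001100
}

Note that for negative k the right shift is implementation-defined, but it yields the expected bit on the usual arithmetic-shift machines.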
I can't seem to understand why -1 isn't changed by a bitwise shift.
Why does -1 >> 1 == -1?
A signed right shift doesn't just slide the bits along for negative numbers; the reason is that you would get the wrong result. -1 in two's complement has all the bits set, i.e. it's:
111111...1
If you were to just shift this you'd get:
011111....1
In two's complement this would be the representation of the largest positive number, not anything you might expect. It's the same for any negative number: simply shifting would make the number positive. A simple way of implementing a right shift in two's complement is this:
int rightShiftVal(int val) {
    if (val < 0) {
        int res = ~val;   // NOT the value, giving a non-negative number
        res = res >> 1;   // do the right shift (the direction of the shift is correct)
        res = ~res;       // NOT back to the two's complement representation
        return res;
    }
    return val >> 1;
}
If you put -1 into the above algorithm (i.e. 11111...111), when you NOT the value you get 0, when you shift it you still get 0, and when you NOT 0 you are back to 11111...111, the representation of -1. For all other values the algorithm works as expected.
Update
As suggested in one of the other answers, you can also shift right and add the sign bit back on at the far left, i.e.
if (val < 0) {
    unsigned int uVal = val;      // reinterpret the bits as unsigned
    uVal = uVal >> 1;             // unsigned shift: fills with 0 on the left
    uVal = uVal | 0x80000000;     // bitwise OR with 1000...0 to restore the sign bit
    return uVal;
}
Note that for this to work you have to put your int into an unsigned int so it doesn't do the signed bit shift. Again if you work through this with the value 11111...1 (the representation of -1) you are left with 11111....1, so there is no change.
Because >> on a signed type is an arithmetic right shift: as the bits move right by 1, the vacated high bit is filled with a copy of the sign bit, which in your case is 1.
Negative numbers are stored in memory in two's complement form.
So if you have the number -5, it will be stored (in 8 bits) as:
1111 1011
and -1 is stored as: 1111 1111
So when you right-shift it you get 1111 1111 again, which is again -1.
Note:
1. When storing negative numbers, the MSB is used as the sign bit: 0 indicates a positive number and 1 indicates a negative number.
2. When you right-shift a positive number, a 0 is added on the left; for negative numbers a 1 is added, to keep the sign.
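A quick check (right-shifting a negative value is implementation-defined in C++, but on the usual two's complement machines with arithmetic shifts you will see):

#include <iostream>
int main() {
    std::cout << (-1 >> 1) << '\n';   // -1: the all-ones pattern shifts back to all ones
    std::cout << (-5 >> 1) << '\n';   // -3: a 1 is shifted in on the left to keep the sign
}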
I've got to program a function that receives
a binary number like 10001, and
a decimal number that indicates how many shifts I should perform.
The problem is that if I use the C++ operator <<, zeroes are pushed in from the right but the leading bits aren't dropped... For example
shifLeftAddingZeroes(10001,1)
returns 100010 instead of 00010, which is what I want.
I hope I've made myself clear =P
I assume you are storing that information in an int. Take into consideration that this number actually has more leading zeroes than what you see; if your number is 16 bits, it is really 00000000 00010001. Maybe try AND-ing it with a mask that has as many 1s as the number of bits you want left after shifting? (Assuming you want to stick to bitwise operations.)
What you want is to bit shift and then limit the number of output bits which can be active (hold a value of 1). One way to do this is to create a mask for the number of bits you want, then AND the bitshifted value with that mask. Below is a code sample for doing that; just replace int_type with the type of value you're using -- or make it a template type.
int_type shiftLeftLimitingBitSize(int_type value, int numshift, int_type numbits=some_default) {
    int_type mask = 0;
    for (unsigned int bit = 0; bit < numbits; bit++) {
        mask += 1 << bit;
    }
    return (value << numshift) & mask;
}
Your output for 10001,1 would now be shiftLeftLimitingBitSize(0b10001, 1, 5) == 0b00010.
Realize that unless your numbits is exactly the length of your integer type, you will always have excess 0 bits on the 'front' of your number.
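If your constraints allow ordinary arithmetic, the mask-building loop can be collapsed into one expression. A concrete sketch with unsigned values (assuming numbits is smaller than the width of the type):

#include <cstdio>
unsigned shiftLeftLimitingBitSize(unsigned value, int numshift, unsigned numbits) {
    unsigned mask = (1u << numbits) - 1;    // the numbits low bits set
    return (value << numshift) & mask;
}
int main() {
    printf("%u\n", shiftLeftLimitingBitSize(0b10001u, 1, 5));   // 2, i.e. 0b00010
}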
I tried to get the MAX value for int using tilde. But the output is not what I expected.
When I run this:
#include <stdio.h>
#include <limits.h>

int main() {
    int a = 0;
    a = ~a;
    printf("\nMax value: %d", -a);
    printf("\nMax value: %d", INT_MAX);
    return 0;
}
I get output:
Max value: 1
Max value: 2147483647
I thought (for example) that if I have 0000 in RAM (I know that the first bit shows whether the number is positive or negative), then after ~, 0000 => 1111, and after -(1111) => 0111, I would get the MAX value.
You have a 32-bit two's complement system, so a = 0 is straightforward. ~a is 0xffffffff, and in a 32-bit two's complement representation 0xffffffff is -1. Basic algebra says that -(-1) is 1, so that's where your first printout comes from. INT_MAX is 0x7fffffff.
Your logical error is in this statement: "-(1111) => 0111", which is not true. The arithmetic negation operation for a two's complement number is equivalent to ~x+1 - for your example:
~x + 1 = ~(0xffffffff) + 1
       = 0x00000000 + 1
       = 0x00000001
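You can verify the ~x + 1 identity directly (a minimal sketch, two's complement assumed):

#include <cstdio>
int main() {
    int a = ~0;              // all bits set: -1 in two's complement
    printf("%d\n", -a);      // 1
    printf("%d\n", ~a + 1);  // 1 as well, since -x == ~x + 1
}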
Is there a reason you can't use std::numeric_limits<int>::max()? Much easier and impossible to make simple mistakes.
In your case, assuming 32 bit int:
int a = 0; // a = 0
a = ~a; // a = 0xffffffff = -1 in any twos-comp system
a = -a; // a = 1
So that math is an incorrect way of computing the max. I can't see a formulaic way to compute it: just use numeric_limits (or INT_MAX if you're in a C-only codebase).
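For example:

#include <iostream>
#include <limits>
int main() {
    std::cout << std::numeric_limits<int>::max() << '\n';   // 2147483647 on a 32-bit int
    std::cout << std::numeric_limits<int>::min() << '\n';   // -2147483648
}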
Your trick of using '~' to get maximum value works with unsigned integers. As others have pointed out, it doesn't work for signed integers.
Your posting shows an int which is equivalent to signed int. Try changing the type to unsigned int and see what happens.
There is no formula to compute the max value of a signed integer type in C. You simply must use the INT_MAX, etc. macros from limits.h and stdint.h.
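Here is a quick sketch of the unsigned version of the trick (this is what happens when you change the type):

#include <cstdio>
#include <climits>
int main() {
    unsigned int a = 0;
    a = ~a;                     // all bits set: for unsigned this IS the maximum
    printf("%u\n", a);          // 4294967295 on a 32-bit unsigned int
    printf("%u\n", UINT_MAX);   // same value
}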
Binary 1...1111 always represents -1 in two's complement, and simple math says -1 * -1 = 1!
Always remember there's just one zero: 0...0000. If, as you assume, negation just swapped the MSB, then -0 would be 10...0000, which can't be right (in math 0 = -0, but the two binary patterns would differ).
Getting the negative value of a number isn't just about swapping the MSB.
It's not quite as straightforward as the top-bit indicating the sign. If it were, you could have both +0 and -0. You should read up on two's complement.
The correct answer is
max = (~0) >> 1;
I'm not a C/C++ expert; in Java you would write >>>, but C and C++ have no such operator. You need a shift that does NOT do sign extension, which in C/C++ means shifting an unsigned value.
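In C and C++ the equivalent of a logical (zero-filling) shift is obtained by shifting an unsigned value, e.g. (a sketch, assuming a two's complement int):

#include <cstdio>
int main() {
    int max = (int)(~0u >> 1);   // ~0u is all ones; the unsigned shift brings in a 0 at the top
    printf("%d\n", max);         // 2147483647 on a 32-bit int
}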
In 2's complement notation 111111... is -1; now, the unary minus operator does not simply change the sign bit (otherwise it would provide strange results in every normal context), but computes correctly the opposite of the number, i.e. +1.
If you want to change the MSB you could use bitwise operators to simply set it to zero. Notice, however, that this way of finding the maximum value for the int type is not portable, since you're making assumptions about how the number is represented that are not required by the standard.
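A sketch of that approach (assuming a 32-bit int, which is exactly the kind of non-portable assumption mentioned above):

#include <cstdio>
int main() {
    unsigned int all_ones = ~0u;
    unsigned int sign_bit = 1u << 31;             // the MSB position on a 32-bit int
    printf("%d\n", (int)(all_ones & ~sign_bit));  // 2147483647: all ones with the MSB cleared
}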