I am trying to take the one's complement of 0 to get 1, but I get 4294967295. Here is what I have done:
#include <iostream>
int main() {
    unsigned int x = 0;
    unsigned int y = ~x;
    std::cout << y;
}
My output is 4294967295, but I expected 1. Why is this so? By the way, I am doing this in C++.
Why do you expect 1? Bit-wise complement flips all the bits.
00000000000000000000000000000000 = 0
|
bitwise NOT
|
v
11111111111111111111111111111111 = 4294967295
Perhaps you are thinking of a logical NOT. In C++ this is written as !x.
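A minimal sketch of the difference (assuming a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int x = 0;
    std::cout << !x << '\n';  // logical NOT: prints 1, because x is zero (false)
    std::cout << ~x << '\n';  // bitwise NOT: prints 4294967295 (all 32 bits set)
}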
You have to look at this in binary to understand exactly what is happening.
unsigned int x = 0; is 00000000 00000000 00000000 00000000 in memory (assuming a 32-bit int).
The ~x statement flips all bits, meaning the above turns into:
11111111 11111111 11111111 11111111
which converts to 4294967295 in decimal form.
XOR will allow you to flip only certain bits. If you only want to flip the least significant bit, use x ^ 1 instead.
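For example, a quick sketch of XOR-based flipping (the binary literal is just for illustration):

#include <iostream>

int main() {
    unsigned int x = 0;
    std::cout << (x ^ 1) << '\n';      // 0 -> 1: only the least significant bit flips
    std::cout << (x ^ 0b110) << '\n';  // flips bits 1 and 2 instead: prints 6
}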
Where did you get the expectation of 1 from?
Your understanding of bitwise operations is clearly lacking; it would be prudent to work through them first before posting here...
You're not confusing it with !, which is a logical NOT, are you?
A ~ bitwise complement, or bitwise NOT, operation flips all the bits: each 1 becomes 0 and each 0 becomes 1. So, for example, a 1 is
00000000 00000000 00000000 00000001
doing a ~ bitwise NOT on that flips it to
11111111 11111111 11111111 11111110
which gives you one less than the maximum value of the unsigned integer datatype on a 32-bit system.
A good bit-twiddling reference is worth working through to see how operations like this are done.
An integer is more than just 1 bit (typically it's 4 bytes, or 32 bits). By NOT-ing it, you're flipping everything, so in this case 00000... becomes 11111...
~ flips all of the bits in the input. Your input is an unsigned int, which has 32 bits, all of which are 0. Flipping each of those 0-bits gives you 32 1-bits instead, which is binary for that large number.
If you only want to flip the least significant bit, you can use y = x ^ 1 - that is, use XOR instead.
You can use
unsigned int y = !x;
to get y = 1;
I am really curious and confused:
how is 0xFFFF (0b1111111111111111) or 0xFF (0b11111111) equal to -1?
if 0b01111111 is 127 and the first bit indicates that it is a positive number then shouldn't 0b11111111 be -127?
Am I missing something???
Two's complement form is commonly used to represent signed integers. To swap the sign of a number this way, invert all the bits and add 1. It has the advantage that there is only one representation of zero.
01111111 = 127
To get -127 flip the bits:
10000000
And add 1:
10000001
To negate 1:
00000001
11111110 // flip
11111111 // + 1
With 0:
00000000
11111111 // flip
00000000 // + 1 (the carry bit is discarded and it's still 0)
And just to show it works going the other way:
With -127:
10000001
01111110 // flip
01111111 // + 1 and you are back to +127.
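A minimal sketch checking the flip-and-add-one rule in code, using an 8-bit type for readability:

#include <cstdint>
#include <cstdio>

int main() {
    int8_t x = 127;
    int8_t y = ~x + 1;                       // flip all the bits, then add 1
    std::printf("%d\n", y);                  // prints -127
    int8_t z = 1;
    std::printf("%d\n", (int8_t)(~z + 1));   // prints -1 (11111111 in binary)
}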
This question is not a duplicate of Count the number of set bits in a 32-bit integer: that question asks how many bits are set, not the position of the highest set bit.
Let's say there is a variable int x;. Its size is 4 bytes, i.e. 32 bits.
Then I assign a value to this variable, x = 4567 (in binary 10001 11010111), so in memory it looks like this:
00000000 00000000 00010001 11010111
Is there a way to get the length of the bits that matter? In my example, the length is 13 (the bits starting at the most significant 1).
If I use sizeof(x) it returns 4, i.e. 4 bytes, which is the size of the whole int. How do I get the minimum number of bits required to represent the integer without the leading 0s?
unsigned bits, var = (x < 0) ? -x : x;  // work on the magnitude of x
for (bits = 0; var != 0; ++bits)
    var >>= 1;                          // one shift per significant bit
This should do it for you.
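Wrapped in a function for convenience (a sketch with a hypothetical name; note that -x overflows when x is INT_MIN):

// Count the bits of the magnitude by shifting until nothing is left.
// Caveat: -x overflows when x == INT_MIN; returns 0 for x == 0.
unsigned bit_count(int x) {
    unsigned var = (x < 0) ? -x : x;
    unsigned bits = 0;
    for (; var != 0; ++bits)
        var >>= 1;
    return bits;  // e.g. 13 for x = 4567
}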
Warning: math ahead. If you are squeamish, skip ahead to the TL;DR.
What you are really looking for is the highest bit that is set. Let's write out what the binary number 10001 11010111 actually means:
x = 1 * 2^(12) + 0 * 2^(11) + 0 * 2^(10) + ... + 1 * 2^1 + 1 * 2^0
where * denotes multiplication and ^ is exponentiation.
You can write this as
2^12 * (1 + a)
where 0 < a < 1 (to be precise, a = 0/2 + 0/2^2 + ... + 1/2^11 + 1/2^12).
If you take the logarithm (base 2), let's denote it by log2, of this number you get
log2(2^12 * (1 + a)) = log2(2^12) + log2(1 + a) = 12 + b.
Since a < 1 we can conclude that 1 + a < 2 and therefore b < 1.
In other words, if you take the log2(x) and round it down you will get the most significant power of 2 (in this case, 12). Since the powers start counting at 0, the number of bits is one more than this power, namely 13. So:
TL;DR:
The minimum number of bits needed to represent the number x is given by
numberOfBits = floor(log2(x)) + 1
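In code, a minimal sketch of this formula (hypothetical helper name; assumes x > 0, and floating-point rounding can be off for very large values, so the integer-based answers below are safer in practice):

#include <cmath>

unsigned bits_via_log2(unsigned x) {
    // Assumes x > 0; log2(0) is -infinity.
    return static_cast<unsigned>(std::floor(std::log2(x))) + 1;
}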
You're looking for the most significant bit that's set in the number. Let's ignore negative numbers for a second. How can we find it? Well, let's see how many bits we need to set to zero before the whole number is zero.
00000000 00000000 00010001 11010111
00000000 00000000 00010001 11010110
^
00000000 00000000 00010001 11010100
^
00000000 00000000 00010001 11010000
^
00000000 00000000 00010001 11010000
^
00000000 00000000 00010001 11000000
^
00000000 00000000 00010001 11000000
^
00000000 00000000 00010001 10000000
^
...
^
00000000 00000000 00010000 00000000
^
00000000 00000000 00000000 00000000
^
Done! After 13 bits, we've cleared them all. Now how do we do this? Well, the expression 1 << pos is the 1 bit shifted over pos positions. So we can check if (x & (1 << pos)) and, if true, remove it: x -= (1 << pos). We can also do this in one operation: x &= ~(1 << pos). ~ gets us the complement: all ones with the pos bit set to zero instead of the other way around. x &= y copies the zero bits of y into x.
Now how do we deal with signed numbers? The easiest approach is to just ignore the sign by converting: unsigned xu = x;
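Putting that together, a minimal sketch of the clearing loop described above (the function name is illustrative):

// Clear bit positions 0, 1, 2, ... until the value is zero.
// The number of positions visited is the bit length (13 for 4567).
unsigned bit_length_by_clearing(int x) {
    unsigned xu = x;         // just ignore the sign, as suggested above
    unsigned pos = 0;
    while (xu != 0) {
        xu &= ~(1u << pos);  // force bit `pos` to zero
        ++pos;
    }
    return pos;
}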
Many processors provide an instruction for calculating the number of leading zero bits directly (e.g. x86 has lzcnt / bsr and ARM has clz). Usually C++ compilers provide an intrinsic for accessing one of these instructions. The number of leading zeros can then be used to calculate the bit length.
In GCC, the intrinsic is called __builtin_clz. It counts the number of leading zeros of a 32-bit integer.
However, there is one caveat about __builtin_clz: when the input is 0, the result is undefined. Therefore we need to take care of this special case. This is done in the following function with (x == 0) ? 32 : ..., which gives the result 32 when x is 0:
#include <cstdint>

uint32_t count_of_leading_0_bits(uint32_t x) {
    return (x == 0) ? 32 : __builtin_clz(x);
}
The bit length can then be calculated from the number of leading zeros:
uint32_t bitlen(uint32_t x) {
    return 32 - count_of_leading_0_bits(x);
}
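A quick usage check, assuming the two helpers above are in scope:

#include <cstdio>

int main() {
    std::printf("%u\n", static_cast<unsigned>(bitlen(4567)));  // prints 13
    std::printf("%u\n", static_cast<unsigned>(bitlen(0)));     // prints 0
}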
Note that other C++ compilers have different intrinsics for counting leading zero bits, but a quick web search will find them. See the question "How to use MSVC intrinsics to get the equivalent of this GCC code?" for an MSVC equivalent.
The portable modern way since C++20 should probably use std::countl_zero, like
#include <bit>

int bit_length(unsigned x)
{
    return (8 * sizeof x) - std::countl_zero(x);
}
Both gcc and clang emit a single bsr instruction on x86 for this code (with a branch on zero), so it should be pretty much optimal.
Note that std::countl_zero only accepts unsigned arguments though, so deciding how to handle your original int parameter is left as an exercise for the reader.
I know that the ~ operator performs a bitwise NOT operation. But I could not make out the output of the following program (which is -65536). What exactly is happening?
#include <stdio.h>

int main(void) {
    int b = 0xFFFF;
    printf("%d", ~b);
    return 0;
}
Assuming 32-bit integers
int b = 0xFFFF; => b = 0x0000FFFF
~b = 0xFFFF0000
The top bit is now set. Assuming two's complement, this means we have a negative number. Inverting the bits and adding one gives 0x00010000, or 65536, so the value is -65536.
When you assign the 16-bit value 0xffff to the 32-bit integer b, the variable b actually becomes 0x0000ffff. This means when you do the bitwise complement it becomes 0xffff0000 which is the same as decimal -65536.
The ~ operator in C++ is the bitwise NOT operator, also called the bitwise complement. It flips every bit of your signed integer. For instance, if you had
int b = 8;
// b in binary (32 bits): 00000000 00000000 00000000 00001000
// ~b:                    11111111 11111111 11111111 11110111, which is -9
All of the bits representing the initial value are flipped, not just the low four.
It is doing a bitwise complement; this output may help you understand what is going on:
#include <iomanip>  // needed for std::setfill and std::setw

std::cout << std::hex << " b: " << std::setfill('0') << std::setw(8) << b
          << " ~b: " << (~b) << " -65536: " << -65536 << std::endl;
The result I receive is as follows:
b: 0000ffff ~b: ffff0000 -65536: ffff0000
So we are setting the lower 16 bits to 1 which gives us 0000ffff and then we do a complement which will set the lower 16 bits to 0 and the upper 16 bits to 1 which gives us ffff0000 which is equal to -65536 in decimal.
In this case since we are working with bitwise operations, examining the data in hex gives us some insight into what is going on.
The result depends on how signed integers are represented on your platform. The most common representation is a 32-bit value using "two's complement" arithmetic to represent negative values. That is, a negative value -x is represented by the same bit pattern as the unsigned value 2^32 - x.
In this case, the original bit pattern has the lower 16 bits set:
0x0000ffff
The bitwise negation clears those bits and sets the upper 16 bits:
0xffff0000
Interpreting this as a negative number gives the value -65536.
Usually, you'll want to use unsigned types when you're messing around with bitwise arithmetic, to avoid this kind of confusion.
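For example, a minimal sketch of the unsigned version (assuming a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int b = 0xFFFF;
    std::cout << ~b << '\n';  // prints 4294901760 (0xFFFF0000): large and positive, not -65536
}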
Your comment:
If it is NOT of 'b' .. then output should be 0 but why -65536
Suggests that you are expecting the result of:
uint32_t x = 0xFFFF;
uint32_t y = ~x;
to be 0.
That would be true for a logical not operation, such as:
uint32_t x = 0xFFFF;
uint32_t y = !x;
...but operator~ is not a logical NOT, but a bitwise not. There is a big difference.
A logical NOT returns 0 for non-0 values (or false for true values), and 1 for 0 values.
But a bitwise not reverses each bit in a given value. So a bitwise NOT of 0x0F (shown here with 16 bits):
0x0F:  00000000 00001111
~0x0F: 11111111 11110000
is not zero, but 0xFFF0.
For every binary number in the integer, a bitwise NOT operation turns all 1s into 0s, and all 0s are turned to 1s.
So hexadecimal 0xFFFF is binary 1111 1111 1111 1111 (Each hexadecimal character is 4 bits, and F, being 15, is full 1s in all four bits)
You set a 32 bit integer to that, which means it's now:
0000 0000 0000 0000 1111 1111 1111 1111
You then NOT it, which means it's:
1111 1111 1111 1111 0000 0000 0000 0000
The topmost bit is the sign bit (it determines whether the number is positive or negative), so the result is a negative number.
When I give a variable such a value: e = 17 | -15;, I get -15 as an answer after compiling. I can't understand what arithmetic C++ uses. How does it perform a bitwise OR operation on negative decimals?
It's just doing the operation on the binary representations of your numbers. In your case, that appears to be two's complement.
17 -> 00010001
-15 -> 11110001
As you can see, the bitwise OR of those two numbers is still -15.
In your comments above, you indicated that you tried this with the two's complement representations, but you must have done something wrong. Here's the step by step:
15 -> 00001111 // 15 decimal is 00001111 binary
-15 -> ~00001111 + 1 // negation in two's complement is equivalent to ~x + 1
-15 -> 11110000 + 1 // do the complement
-15 -> 11110001 // add the 1
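You can check this directly (a minimal sketch, assuming a two's complement machine):

#include <iostream>

int main() {
    int e = 17 | -15;
    std::cout << e << '\n';  // prints -15: every set bit of 17 is already set in -15
}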
It does OR operations on negative numbers the same way it does so on positive numbers. The numbers are almost certainly represented in two's-complement form, which gives you these values:
17 = 0000000000010001
-15 = 1111111111110001
As you can see, all the bits of 17 are already set in −15, so the result of combining them is again −15.
A bitwise OR with a negative number works JUST like a bitwise OR with a positive number. The bits in one number are ORed with the bits in the other number. How your processor represents negative numbers is a different matter. Most use something called "two's complement", which is essentially "invert the number and add 1".
So, if we have, for simplicity, 8 bit numbers:
15 is 00001111
Inverted we get 11110000
Add one 11110001
17 is 00010001
ORed together 11110001
 17 = 0b00010001
-15 = 0b11110001   <-- two's complement
17 | -15 = 0b11110001, i.e. -15 again
The operator | is a "bitwise OR" operator, meaning that every bit in the target is computed as the OR-combination of the corresponding bits in the two operands. This means, that a bit in the result is 1 if any of the two bits in the numbers at the same positions are 1, otherwise 0.
Clearly, the result depends on the binary representation of the numbers which again depends on the platform.
Almost all platforms use two's complement, which can be thought of as a circle of unsigned numbers, in which negative numbers are just in the opposite direction from positive numbers and "wrap around" the circle.
(The original answer illustrated this with number-circle diagrams: one for unsigned integers, one for signed integers.)
The calculation of your example is as follows.
 17: 00000000 00000000 00000000 00010001
-15: 11111111 11111111 11111111 11110001
-----------------------------------------
-15: 11111111 11111111 11111111 11110001
You have to look at how the bits work.
Basically, if either number has a 1 in a particular spot, then the result will also have a 1 there:
-15 : 11110001 (two's complement)
17 : 00010001
-15 | 17 : 11110001
As you can see, the result is the same as -15.
The following program gives a signed/unsigned mismatch warning:
#include <iostream>

int main()
{
    unsigned int a = 2;
    int b = -2;
    if (a < b)
        std::cout << "a is less than b!";
    return 0;
}
I'm trying to understand the problem when it comes to mixing signed and unsigned ints. From what I have been told, an int is typically stored in memory using two's complement.
So, let's say I have the number 2. Based on what I understand it will be represented in memory like this:
00000000 00000000 00000000 00000010
And -2 will be represented as the one's complement plus 1, or:
11111111 11111111 11111111 11111110
With two's complement there is no bit reserved for the sign like in the "sign-and-magnitude" method. If there is no sign bit, why are unsigned ints capable of storing larger positive numbers? What is an example of a problem which could occur when mixing signed/unsigned ints?
I'm trying to understand the problem when it comes to mixing signed and unsigned ints.
a < b
By the usual arithmetic conversions b is converted to an unsigned int, which is a huge number > a.
Here the expression a < b is the same as:
2U < (unsigned int) -2, which is the same as:
2U < UINT_MAX - 1 (on most two's complement systems), which is 1 (true).
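Since C++20, std::cmp_less from <utility> compares mixed signed/unsigned values by their mathematical values; a minimal sketch:

#include <iostream>
#include <utility>

int main() {
    unsigned int a = 2;
    int b = -2;
    std::cout << (a < b) << '\n';              // 1: b is converted to a huge unsigned value
    std::cout << std::cmp_less(a, b) << '\n';  // 0: compares the mathematical values (C++20)
}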
With two's complement there is no bit reserved for the sign like in the "sign-and-magnitude" method.
In two's complement representation if the most significant bit of a signed quantity is 1, the number is negative.
What would the representation of 2 147 483 648 be?
10000000 00000000 00000000 00000000
What would the representation of -2 147 483 648 be?
10000000 00000000 00000000 00000000
The same! Hence, you need a convention to know the difference. The convention is that the first bit is still used to decide the sign, just not using the naïve sign-magnitude method you would otherwise use. This means every positive number starts with 0, leaving only 31 bits for the actual number. This gives half the positive range of unsigned numbers.
The problem with your code is that the signed integer will be converted to unsigned. For example, -1 will become 4 294 967 295 (they have the same binary representation) and will be much larger than zero, instead of smaller. This is probably not what you expect.
An int can only store 2^32 different values (if it's 32-bit), whether it is signed or unsigned.
So a signed int has 1/2 of that range below zero, and 1/2 of that range above zero. An unsigned int has that full range above zero.
While they don't call the most significant bit of a signed int a 'sign bit', it can be treated that way.
Isn't this as easy as (int)a < (int)b? Isn't C++ strongly typed, so that you need to do explicit casts?
Well, -1 as a signed int is -1, and as a 16-bit unsigned int it is 65535, so the problem is with signed values where the "-" is required. If you return -1 as an error code, with an unsigned int that would become error code 65535.