Hey guys, I was doing research on calculating the next power of two and stumbled on code that looks like this:
int x; // assume that x is already initialized with a value
--x;
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
return x+1;
It works fine when I run it with positive numbers, but it doesn't work with negative numbers, which I don't understand, because I think it should not matter whether a number is positive or negative when finding the next power of two for it. If we have the number 5 and want the next power of 2, intuitively we know it's 8, because 8 is bigger than 5 and 8 is the same as 2^3. But if I try a negative number I always get 0, which I don't understand, because 0 is not a power of two.
The short answer is that the C++ standard makes the result of the >> operator on negative values implementation-defined, whereas on positive values it is specified to behave as division by a power of 2.
The term "implementation defined", over-simplistically, means that the standard permits the result to vary between implementations i.e. between compilers. Among other things, that means no guarantee that it will behave in the same way as for positive values (for which the behaviour is unambiguously specified).
The reason is that the representation of a signed int is also implementation-defined. This allows, for example, the use of two's complement representation, which (although other representations are sometimes used) is by far the most common in practice.
Mathematically, a right shift in two's complement is equivalent to division by a power of two with rounding down toward -infinity, not toward zero. For a positive value, rounding toward zero and toward -infinity have the same effect. For a negative value they do not: rounding toward -infinity means rounding away from zero.
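For example, here's a minimal sketch of the difference, assuming a typical two's complement machine where >> on a negative int happens to be an arithmetic shift (again: implementation-defined, so your compiler may differ):

#include <iostream>

int main() {
    // Arithmetic right shift divides by 2 rounding toward -infinity...
    std::cout << (-5 >> 1) << '\n'; // typically prints -3
    // ...whereas integer division truncates toward zero:
    std::cout << (-5 / 2) << '\n';  // prints -2
}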
Peter gave you an interpretation of the code in terms of which way the division rounds. Here's another, more bitwise, one:
The successive shifts in this code "fill in" with 1s all the bit positions below the highest bit that was set at the start. Let's look at that in more detail:
Let x = 0b0001 ???? ???? ???? be the binary representation of a 16-bit number, where each ? may be 0 or 1.
x |= x >> 1; // x = 0b0001 ???? ???? ???? | 0b0000 1??? ???? ???? = 0b0001 1??? ???? ????
x |= x >> 2; // x = 0b0001 1??? ???? ???? | 0b0000 011? ???? ???? = 0b0001 111? ???? ????
x |= x >> 4; // x = 0b0001 111? ???? ???? | 0b0000 0001 111? ???? = 0b0001 1111 111? ????
x |= x >> 8; // x = 0b0001 1111 111? ???? | 0b0000 0000 0001 1111 = 0b0001 1111 1111 1111
Hence the shifts give you a number of the form 0b00...011...1, i.e. only 0s then only 1s, which is a number of the form 2^n - 1. That is why you add 1 at the end, to get a power of 2. You also subtract 1 at the start so that a value that is already a power of two maps to itself instead of being doubled.
Now for negative numbers, the C++ standard leaves it implementation-defined what the most significant bits are after right-shifting. But whether they are 1s or 0s is irrelevant in this specific case, as long as your representation of negative numbers has a 1 in the most significant bit position, which almost all of them do.*
Because you always OR x with a shifted copy of itself, all those leftmost bits (where implementations may differ) get ORed with 1s anyway. At the end of the algorithm, you return 0b11111...1 + 1, which in your case means 0 (because you use two's complement; the result would be 1 in one's complement and -(2^(number of bits - 1) - 1) + 1 in sign-magnitude).
* This holds true for the main negative-number representations, from most to least popular: two's complement, sign-magnitude, and one's complement. An example where it does not hold is excess-K (biased) representation, which is used for IEEE floating-point exponents.
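If you want the computation itself to be well defined for every input, a common fix is to do the work in an unsigned type, where the result of >> is fully specified. A minimal sketch, assuming 32-bit values (the function name is mine):

#include <cstdint>
#include <iostream>

// Next power of two, computed on an unsigned type so every shift is
// well defined. Note the edge case: an input of 0 wraps around to 0.
uint32_t next_pow2(uint32_t x) {
    --x;            // so that an exact power of two maps to itself
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;   // every bit below the highest set bit is now 1
    return x + 1;   // 2^n - 1 becomes 2^n
}

int main() {
    std::cout << next_pow2(5) << '\n'; // prints 8
}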
Related
I saw the following line of code here in C.
int mask = ~0;
I have printed the value of mask in C and C++. It always prints -1.
So I do have some questions:
Why assigning value ~0 to the mask variable?
What is the purpose of ~0?
Can we use -1 instead of ~0?
It's a portable way to set all the binary bits in an integer to 1 without having to know how many bits the integer has on the current architecture.
C and C++ allow 3 different signed integer formats: sign-magnitude, one's complement and two's complement.
~0 will produce all-one bits regardless of which sign format the system uses, so it's more portable than -1.
You can add the U suffix (i.e. -1U) to generate an all-one bit pattern portably¹. However, ~0 states the intention more clearly: invert all the bits in the value 0, whereas -1 says that a value of minus one is needed, not its binary representation.
1 because unsigned operations are always reduced modulo the number that is one greater than the largest value that can be represented by the resulting type
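For example, a quick sketch showing that both spellings produce the same all-ones pattern (assuming a 32-bit unsigned int here):

#include <cstdio>

int main() {
    unsigned int a = -1;  // -1 converted to unsigned: reduced modulo 2^32, all bits set
    unsigned int b = ~0u; // bitwise NOT of zero: all bits set
    std::printf("%x %x\n", a, b); // both print ffffffff with a 32-bit int
}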
That, on a 2's complement platform (which is assumed), gives you -1, but writing -1 directly is forbidden by the rules (only integer constants 0..255, unary ! and ~, and binary &, ^, |, +, << and >> are allowed).
You are studying a coding challenge with a number of restrictions on the operators and language constructs you may use to perform given tasks.
The first problem is to return the value -1 without using the - operator.
On machines that represent negative numbers with two's complement, the value -1 is represented with all bits set to 1, so ~0 evaluates to -1:
/*
* minusOne - return a value of -1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 2
* Rating: 1
*/
int minusOne(void) {
// ~0 = 111...111 = -1
return ~0;
}
Other problems in the file are not always implemented correctly. The second problem, returning a boolean value representing whether an int value fits in a 16-bit signed short, has a flaw:
/*
* fitsShort - return 1 if x can be represented as a
* 16-bit, two's complement integer.
* Examples: fitsShort(33000) = 0, fitsShort(-32768) = 1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 8
* Rating: 1
*/
int fitsShort(int x) {
/*
 * after shifting left by 16 and then right by 16, the upper 16 bits of x are all 0s or all 1s
 * so if x survives the round trip unchanged, it means that x can be represented in 16 bits
*/
return !(((x << 16) >> 16) ^ x);
}
Left-shifting a negative value, or a value whose shifted result is out of the range of int, has undefined behavior, and right-shifting a negative value is implementation-defined, so the above solution is incorrect (although it is probably the expected solution).
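Outside the puzzle's operator restrictions, a fully defined way to write the check in real code is a plain range comparison; a minimal sketch (the function name is mine):

#include <cstdint>

// Well defined for all inputs: no shifts involved.
bool fits_short(int x) {
    return x >= INT16_MIN && x <= INT16_MAX;
}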
Loooong ago, this was how you saved memory on extremely limited equipment such as the 1K ZX80 or ZX81 computer. In BASIC, you would write
LET X = NOT PI
rather than
LET X = 0
Since numbers were stored as 4-byte floating point values, the latter takes 2 bytes more than the NOT PI alternative, where NOT and PI each take up a single byte.
There are multiple ways of encoding numbers across computer architectures. When using two's complement, ~0 == -1 will always be true. On the other hand, some computers use one's complement for encoding negative numbers, where the above is untrue because ~0 == -0. Yup, one's complement has a negative zero, and that is why it is not very intuitive.
So, to your questions:
~0 is assigned to mask so that all the bits in mask are 1, making mask & sth == sth
~0 is used to make all bits equal to 1 regardless of the platform
you can use -1 instead of ~0 if you are sure that your platform uses two's complement number encoding
My personal advice: make your code as platform-independent as you can. The cost is relatively small and the code becomes more fail-proof.
In C++, I have the following code:
int x = -3;
x &= 0xffff;
cout << x;
This produces
65533
But if I remove the negative, so I have this:
int x = 3;
x &= 0xffff;
cout << x;
I simply get 3 as a result
Why does the first result not produce a negative number? I would expect -3 to be sign-extended to 16 bits, which should still give a two's complement negative number, considering all the extended bits would be 1; consequently the most significant bit would be 1 too.
It looks like your system uses 32-bit ints with two's complement representation of negatives.
The constant 0xFFFF covers the two least significant bytes, while the upper two bytes are zero.
The value of -3 is 0xFFFFFFFD, so masking it with 0x0000FFFF you get 0x0000FFFD, or 65533 in decimal.
Positive 3 is 0x00000003, so masking with 0x0000FFFF gives you 3 back.
You would get the result that you expect if you specify 16-bit data type, e.g.
int16_t x = -3;
x &= 0xffff;
cout << x;
In your case int is more than 2 bytes. You are probably running on a modern CPU where an integer is usually 4 bytes (32 bits).
If you take a look at the way the system stores negative numbers, you will see that it's a complement representation. And if you take only the last 2 bytes, as your mask is 0xFFFF, then you get only a part of it.
Your 2 options:
use short instead of int. It's usually half the size of an integer: only 2 bytes
use a bigger mask like 0xFFFFFFFF so that it covers all the bits of your integer
NOTE: I say "usually" because the number of bits in your int and short depends on your CPU and compiler.
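Putting both options together, here's a small sketch of what each one prints (assuming, as above, a 32-bit int and two's complement; note that pre-C++20 the narrowing assignment back into int16_t is itself implementation-defined, though it wraps as shown on common platforms):

#include <cstdint>
#include <iostream>

int main() {
    int x = -3;
    x &= 0xffff;            // keeps only the low 16 bits: 0x0000FFFD
    std::cout << x << '\n'; // prints 65533

    int16_t y = -3;
    y &= 0xffff;            // the result is truncated back to 16 bits: 0xFFFD
    std::cout << y << '\n'; // prints -3 on two's complement platforms
}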
I have some real data, for example +2 and -3. These values are represented in two's complement fixed point with a 4-bit binary value, where the MSB is the sign bit and the number of fractional bits is zero.
So +2 = 0010
-3 = 1101
The addition of these two numbers is (+2) + (-3) = -1:
(0010)+(1101)=(1111)
But in the case of subtraction, (+2) - (-3), what should I do?
Do I need to take the two's complement of 1101 (-3) again and add it to 0010?
You can evaluate -(-3) in binary and then simply add it to the other value.
With two's complement, evaluating the opposite of a number is pretty simple: apply the NOT operation to every bit, then add 1. Using a tilde to represent the NOT operation, for an integer represented on n bits (n = 4 in your example): -x = ~x + 1.
In your example (with an informal notation): -(-3) = -(1101) = ~(1101) + 1 = 0010 + 1 = 0011
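Here's a tiny sketch working through the whole computation in code, masking to 4 bits as in your example (the variable names are mine):

#include <cstdio>

int main() {
    int a = 0b0010;               // +2
    int b = 0b1101;               // -3 in 4-bit two's complement
    int neg_b = (~b + 1) & 0xF;   // -(-3): ~(1101) + 1 = 0010 + 1 = 0011 (+3)
    int diff = (a + neg_b) & 0xF; // 0010 + 0011 = 0101 (+5)
    std::printf("%d\n", diff);    // prints 5
}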
I'd like to know the science behind the following: a 32-bit value is shifted left by 32 bits into a 64-bit type, then a division is performed. Somehow the precision is contained within the lower 32 bits, and to retrieve the value as a floating-point number I can multiply by 1 over the maximum value of an unsigned 32-bit int.
phase = ((uint64) 44100 << 32) / 48000;
(phase & 0xffffffff) * (1.0f / 4294967296.0f);// == 0.918749988
the same as
(float)44100/48000;// == 0.918749988
(...)
If you lose precision when dividing two integer numbers, you should remember the remainder.
The remainder in C++ can be taken by doing 44100 % 48000 in your case.
Actually these are constants, and it's completely clear that 44100/48000 == 0 in integer division, so the remainder is all you have.
Well, the remainder will even be, guess what, 44100!
The float type (imposed by the explicit cast) has only about 7 significant decimal digits, so a value like 0.91875 gets rounded to the nearest representable float; that's why you see 0.918749988. This type isn't valuable when you need more precision.
The best way to get a value of a fixed integer type with all bits set, without having to count the 'f's in 0xfffff, is to use the ~ operator on the value 0: in your case, ~uint32_t(0).
Well, I should have said this at the beginning: 44100.0/48000 gives you the result you want. :P
This is the answer I was looking for:
Bit shifting left provides that many bits in which to store the fractional precision from a division.
Dividing the integer value represented by those bits by 2 to the power of the shift amount recovers the fractional value.
e.g
0000 0001 * 2^8 = 1 0000 0000 = 256 (base 10)
1 0000 0000 / 2 = 0 1000 0000 = 128 (base 10)
128 / 2^8 = 0.5
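Putting the pieces together, here's a sketch of the fixed-point computation next to the plain floating-point division (using double to avoid the float rounding mentioned above):

#include <cstdint>
#include <iostream>

int main() {
    // 32.32 fixed point: shift the numerator up by 32 bits so the integer
    // division keeps 32 bits of fractional precision in the low word.
    uint64_t phase = (static_cast<uint64_t>(44100) << 32) / 48000;
    double frac = (phase & 0xffffffff) * (1.0 / 4294967296.0);
    std::cout << frac << '\n';            // prints 0.91875
    std::cout << 44100.0 / 48000 << '\n'; // prints 0.91875
}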
I've seen the operators >> and << in various code that I've looked at (none of which I actually understood), but I'm just wondering what they actually do and what some practical uses of them are.
If the shifts are like x * 2 and x / 2, what is the real difference from actually using the * and / operators? Is there a performance difference?
Here is an applet where you can experiment with some bit operations, including shifting.
You have a collection of bits, and you move some of them beyond their bounds:
1111 1110 << 2
1111 1000
It is filled from the right with fresh zeros. :)
0001 1111 >> 3
0000 0011
Filled from the left. A special case is the leading 1: it often indicates a negative value, depending on the language and data type. So it is often desirable that when you shift right, the first bit stays as it is.
1100 1100 >> 1
1110 0110
And it is preserved over multiple shifts:
1100 1100 >> 2
1111 0011
If you don't want the first bit to be preserved, you use (in Java, Scala and JavaScript, for example; C and C++ have no such operator, though shifting an unsigned value has the same effect) the triple-sign operator:
1100 1100 >>> 1
0110 0110
There isn't any equivalent in the other direction, because it doesn't make any sense: maybe in your very special context, but not in general.
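In C and C++ there is no >>> token, but you get the same zero-filling behavior by shifting an unsigned value. A small sketch (what >> does to a negative signed value is implementation-defined; the comments show the usual two's complement result):

#include <cstdint>
#include <iostream>

int main() {
    int32_t s = -52;                       // bit pattern 0xFFFFFFCC
    uint32_t u = static_cast<uint32_t>(s); // same bits, but unsigned

    std::cout << (s >> 1) << '\n'; // typically -26: the sign bit is preserved
    std::cout << (u >> 1) << '\n'; // 2147483622: the top bit is filled with zero
}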
Mathematically, a left shift is a *= 2, two left shifts are a *= 4, and so on. A right shift is a /= 2, and so on.
You use left bit shifting to multiply by a power of two and right bit shifting to divide by a power of two.
For example, x = x * 2 can also be written as x = x << 1, and x = x * 8 as x = x << 3 (since 2 to the power of 3 is 8). Similarly, x = x / 2 is x = x >> 1, and so on.
Left Shift
x = x * 2^value (normal operation)
x << value (bit-wise operation)
x = x * 16 (which is the same as 2^4)
The left shift equivalent would be x = x << 4
Right Shift
x = x / 2^value (normal arithmetic operation)
x >> value (bit-wise operation)
x = x / 8 (which is the same as 2^3)
The right shift equivalent would be x = x >> 3
Left shift: the result is equal to the value being shifted multiplied by 2 raised to the power of the number of bits shifted.
Example:
1 << 3
0000 0001 ---> 1
Shift by 1 bit
0000 0010 ----> 2 which is equal to 1*2^1
Shift By 2 bits
0000 0100 ----> 4 which is equal to 1*2^2
Shift by 3 bits
0000 1000 ----> 8 which is equal to 1*2^3
Right shift: the result is equal to the quotient of the value being shifted divided by 2 raised to the power of the number of bits shifted.
Example:
8 >> 3
0000 1000 ---> 8 which is equal to 8/2^0
Shift by 1 bit
0000 0100 ----> 4 which is equal to 8/2^1
Shift By 2 bits
0000 0010 ----> 2 which is equal to 8/2^2
Shift by 3 bits
0000 0001 ----> 1 which is equal to 8/2^3
Left bit shifting multiplies by a power of two.
Right bit shifting divides by a power of two.
x = x << 5; // Left shift
y = y >> 5; // Right shift
In C/C++ the same thing can be written as
#include <math.h>
x = x * pow(2, 5); // pow returns a double, so the result is converted back to int
y = y / pow(2, 5);
The bit shift operators are generally more efficient than the / or * operators.
On many processor architectures, a divide or multiply takes multiple cycles to compute, while a bit shift is a single-cycle, single-register operation. (Modern compilers usually apply this strength reduction automatically when multiplying or dividing by a constant power of two.)
Some examples:
Bit operations, for example converting to and from Base64 (which uses 6 bits instead of 8)
doing power of 2 operations (1 << 4 equals 2^4, i.e. 16)
Writing more readable code when working with bits. For example, defining flag constants using
1 << 4 or 1 << 5 is more readable.
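For instance, a common readability idiom is defining bit-flag constants, one flag per bit (the flag names here are made up):

// Each flag occupies its own bit, so they can be combined with |.
enum Permissions {
    PERM_READ  = 1 << 0, // 0x01
    PERM_WRITE = 1 << 1, // 0x02
    PERM_EXEC  = 1 << 2  // 0x04
};

int flags = PERM_READ | PERM_WRITE; // 0x03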
Yes, I think performance-wise you might find a difference: a bitwise left or right shift executes in constant time, which adds up over a huge data set.
For example, calculating 2^n:
int value = 1;
int exponent = 0;
while (exponent < n)
{
    value = value * 2; // equivalent to a machine-level left shift by one bit
    exponent++;
}
Similar code with a bitwise left shift operation would be like:
value = 1 << n;
Moreover, a bitwise shift maps almost directly onto a single machine-level instruction, the same kind the microcontroller or processor ultimately executes for these operations.
Here is an example:
#include"stdio.h"
#include"conio.h"
void main()
{
int rm, vivek;
clrscr();
printf("Enter any numbers\t(E.g., 1, 2, 5");
scanf("%d", &rm); // rm = 5(0101) << 2 (two step add zero's), so the value is 10100
printf("This left shift value%d=%d", rm, rm<<4);
printf("This right shift value%d=%d", rm, rm>>2);
getch();
}