Meaning of bitwise AND (&) of a positive and negative number? - c++

Can anyone explain what n & -n means?
And what is its significance?

It's an old trick that gives a number with a single bit in it, the bottom bit that was set in n. At least in two's complement arithmetic, which is just about universal these days.
The reason it works: the negative of a number is produced by inverting the number, then adding 1 (that's the definition of two's complement). When you add 1, every bit starting at the bottom that is set (in the inverted value) will overflow into the next higher bit; this stops once you reach a zero bit. Those overflowed bits will all be zero, and the bits above the last one affected remain the inverse of the original, so the only bit set in both n and -n is the one that stopped the cascade - the one that started as 1 in n, was inverted to 0, and was turned back into 1 by the carry.
P.S. If you're worried about running across one's complement arithmetic here's a version that works with both:
n & (~n + 1)
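As a quick sanity check, here is a minimal sketch of my own that prints n & -n for a few values:

#include <iostream>

int main() {
    int values[] = {1, 6, 12, 144};
    for (int n : values) {
        // n & -n keeps only the lowest set bit of n
        std::cout << n << " & " << -n << " = " << (n & -n) << '\n';
    }
}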

On pretty much every system that most people actually care about, it will give you the highest power of 2 that n is evenly divisible by.

I believe it is a trick to figure out if n is a power of 2: for positive n, (n == (n & -n)) iff n is a power of 2 (1, 2, 4, 8, ...). Note that n = 0 also passes the test, so check for zero separately.

N & (-N) will give you the lowest '1' bit in the binary form of N, as a value rather than a position.
For example:
N = 144 (0b10010000) => N&(-N) = 0b10000
N = 7 (0b00000111) => N&(-N) = 0b1
One application of this trick is to decompose an integer into a sum of powers of 2.
For example:
To convert 22 = 16 + 4 + 2 = 2^4 + 2^2 + 2^1
22&(-22) = 2, 22 - 2 = 20
20&(-20) = 4, 20 - 4 = 16
16&(-16) = 16, 16 - 16 = 0
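As a sketch of my own (not part of the original answer), the decomposition of 22 above can be written as a short loop:

#include <iostream>

int main() {
    int n = 22;
    while (n != 0) {
        int lowest = n & -n;          // lowest set bit: 2, then 4, then 16
        std::cout << lowest << ' ';   // prints: 2 4 16
        n -= lowest;                  // strip that bit and continue
    }
}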

It's just a bitwise AND of the number and its negative. Negative numbers are represented as two's complement.
So, for instance, the bitwise AND of 7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1

I would add a self-explanatory example to Mark Ransom's wonderful exposition.
010010000 | +144  ~ (invert)
----------|------
101101111 | -145  + (add 1)
        1 |
----------|------
101110000 | -144
101110000 | -144  & (AND with +144)
010010000 | +144
----------|------
000010000 |  +16

Because x & -x = {0, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16, 1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 32} for x from 0 to 32. It is used to jump through indices in for loops in some applications, for example to store accumulated records:
    for (; x < N; x += x & -x) {
        // do something here
        ++tr[x];
    }
The loop traverses very quickly because each step jumps ahead by the lowest set bit of x, which is a power of two.
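If tr is meant to be a Fenwick (binary indexed) tree, which is my guess based on the ++tr[x] update and the mention of accumulated records, a self-contained sketch of that update could look like this:

#include <vector>

// Minimal sketch, assuming tr is a Fenwick (binary indexed) tree over
// indices 1..N-1 (my reading of the answer, not code from the original post).
void add_one(std::vector<int>& tr, int x) {
    int N = static_cast<int>(tr.size());
    for (; x < N; x += x & -x)   // each jump adds the lowest set bit of x
        ++tr[x];
}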

As @aestrivex has mentioned, it is a way of writing 1. I even encountered this:
    for (int y = x; y > 0; y -= y & -y)
Here y & -y is the lowest set bit of y, so the step means y = y - 1 only while y is odd, because, for example,
7 & (-7) is 0b00000111 & 0b11111001 = 0b00000001 = 1
In general, each iteration strips one set bit from y.
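A small sketch of my own showing that the step removes the lowest set bit rather than always subtracting 1:

#include <iostream>

int main() {
    // Each step removes the lowest set bit of y, so the loop runs
    // popcount(x) times; it behaves like y -= 1 only while y is odd.
    for (int y = 12; y > 0; y -= y & -y)
        std::cout << y << ' ';   // prints: 12 8
}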

Related

How can I simulate binary values using a vector of booleans in C++?

I want to be able to retain the same number of bits in my vector whilst still performing binary addition. For example:
int numOfBits = 4;
int myVecVal = 3;
vector< bool > myVec;
GetBinaryVector(&myVec,myVecVal, numOfBits);
and its output would be:
{0, 0, 1, 1}
I don't know how to write the GetBinaryVector function though, any ideas?
This seems to work (although the article I added in the initial comment seems to suggest you only have byte-level access):
void GetBinaryVector(vector<bool> *v, int val, int bits) {
    v->resize(bits);
    for (int i = 0; i < bits; i++) {
        (*v)[bits - 1 - i] = (val >> i) & 0x1;
    }
}
The left-hand side sets the i'th least significant bit, which is at index bits - 1 - i. The right-hand side isolates the i'th least significant bit by shifting the value down by i bits and masking off everything but the least significant bit.
For example, with val = 8 and bits = 15: in the first iteration, i = 0, we have (*v)[15 - 1 - 0] = (8 >> 0) & 0x1. 8 is binary 1000, shifting it down by 0 leaves 1000, and 1000 & 0x1 is 0. Let's jump to i = 3: (*v)[15 - 1 - 3] = (8 >> 3) & 0x1. 1000 >> 3 is 1 and 1 & 0x1 is 1, so we set (*v)[11] = 1. The resulting vector is { 0, ..., 0, 1, 0, 0, 0 }.
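For reference, here is a small usage sketch of my own, exercising GetBinaryVector with the question's values (val = 3, 4 bits):

#include <iostream>
#include <vector>
using namespace std;

void GetBinaryVector(vector<bool> *v, int val, int bits) {
    v->resize(bits);
    for (int i = 0; i < bits; i++)
        (*v)[bits - 1 - i] = (val >> i) & 0x1;
}

int main() {
    vector<bool> myVec;
    GetBinaryVector(&myVec, 3, 4);           // the question's example values
    for (bool b : myVec) cout << b << ' ';   // prints: 0 0 1 1
}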

Why does "number & (~(1 << 3))" not work for 0's?

I'm writing a program that exchanges the values of the bits on positions 3, 4 and 5 with bits on positions 24, 25 and 26 of a given 32-bit unsigned integer.
So let's say I use the number 15 and I want to turn the 4th bit into a 0; I'd use...
int number = 15;
int newnumber = number & (~(1 << 3));
// output is 7
This makes sense because I'm changing the 4th bit from 1 to 0, so 15 (1111) becomes 7 (0111).
However, this won't work the other way round (changing a 0 to a 1). Now, I know how to change a 0 to a 1 via a different method, but I really want to understand the code in this method.
So why won't it work?
The truth table for x AND y is:
x y Output
-----------
0 0 0
0 1 0
1 0 0
1 1 1
In other words, the output/result will only be 1 if both inputs are 1, which means that you cannot change a bit from 0 to 1 through a bitwise AND. Use a bitwise OR for that (e.g. int newnumber = number | (1 << 3);)
To summarize:
Use & ~(1 << n) to clear bit n.
Use | (1 << n) to set bit n.
To set the fourth bit to 0, you AND it with ~(1 << 3), which is the complement of 1000, i.e. 0111 (in the low four bits).
By the same reasoning, you can set it to 1 by ORing with 1000.
To toggle it, XOR with 1000.
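Building on the clear/set/toggle operations above, here is a sketch (my own illustration, not from the answers) of the question's original goal, swapping the 3-bit fields at positions 3-5 and 24-26:

#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical example value: bits 24..26 hold 111, bits 3..5 hold 001.
    uint32_t n = 0x07000008;
    // XOR-swap the two 3-bit fields.
    uint32_t diff = ((n >> 3) ^ (n >> 24)) & 0x7;   // bitwise difference of the fields
    n ^= (diff << 3) | (diff << 24);                // applying it to both swaps them
    std::cout << std::hex << n << '\n';             // prints: 1000038
}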

Distinguishing the values of three int's

I have three integer variables, that can take only the values 0, 1 and 2. I want to distinguish what combination of all three numbers I have, ordering doesn't count. Let's say the variables are called x, y and z. Then x=1, y=0, z=0 and x=0, y=1, z=0 and x=0, y=0, z=1 are all the same number in this case, I will refer to this combination as 001.
Now there are a hundred ways how to do this, but I am asking for an elegant solution, be it only for educational purposes.
I thought about bitwise shifting 001 by the amount of the value:
001 << 0 = 1
001 << 1 = 2
001 << 2 = 4
But then the numbers 002 and 111 would both give 6.
The shift idea is good, but you need 2 bits to count to 3. So try shifting by twice the number of bits:
1 << (2*0) = 1
1 << (2*1) = 4
1 << (2*2) = 16
Add these for all 3 numbers, and the lowest 2 bits will count how many 0s you have, the next 2 bits will count how many 1s, and the highest 2 bits will count how many 2s.
Edit: although the result is 6 bits long (2 bits per option 0, 1, 2), you only need the lowest 4 bits for a unique identifier, since if you know how many 0s and 1s you have, then the number of 2s is determined as well.
So instead of doing
res = 1<<(2*x);
res+= 1<<(2*y);
res+= 1<<(2*z);
you can do
res = x*x;
res+= y*y;
res+= z*z;
because then
0*0 = 0 // doesn't change result. We don't count 0
1*1 = 1 // we count the number of 1 in the 2 lower bits
2*2 = 4 // we count the number of 2 in the 2 higher bits
hence using only 4 bits instead of 6.
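For illustration (a sketch of my own, not from the answer), the squared-sum identifier is independent of the order of x, y and z:

#include <iostream>

int main() {
    // 0*0 + 1*1 + 2*2 = 5 regardless of which variable holds which value
    int x = 2, y = 0, z = 1;
    int res = x * x + y * y + z * z;
    std::cout << res << '\n';   // prints: 5, same as for x = 0, y = 1, z = 2
}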
When the number of distinct possibilities is small, a lookup table can be used.
First, number all possible combinations of three digits, like this:
Combinations                  N  Indexes
----------------------------  -  --------------------
000                           0  0
001, 010, 100                 1  1, 3, 9
002, 020, 200                 2  2, 6, 18
011, 101, 110                 3  4, 10, 12
012, 021, 102, 120, 201, 210  4  5, 7, 11, 15, 19, 21
022, 202, 220                 5  8, 20, 24
111                           6  13
112, 121, 211                 7  14, 16, 22
122, 212, 221                 8  17, 23, 25
222                           9  26
The first column shows identical combinations; the second column shows the number of the combination (I assigned them arbitrarily); the third column shows the indexes of each combination, computed as 9*<first digit> + 3*<second digit> + <third digit>.
Next, build a look-up table for each of these ten combinations, using this expression as an index:
9*a + 3*b + c
where a, b, and c are the three numbers that you have. The table would look like this:
int lookup[] = {
    0, 1, 2, 1, 3, 4, 2, 4, 5, 1,
    3, 4, 3, 6, 7, 4, 7, 8, 2, 4,
    5, 4, 7, 8, 5, 8, 9
};
This is a rewrite of the first table, with values at the indexes corresponding to the value in the column N. For example, combination number 1 is found at indexes 1, 3, and 9; combination 2 is at indexes 2, 6, and 18, and so on.
To obtain the number of the combination, simply check
int combNumber = lookup[9*a + 3*b + c];
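Putting the index expression and the table together, a minimal sketch (my own wrapper around the lookup array above) could look like this:

int combination(int a, int b, int c) {
    // a, b and c must each be 0, 1 or 2
    static const int lookup[] = {
        0, 1, 2, 1, 3, 4, 2, 4, 5, 1,
        3, 4, 3, 6, 7, 4, 7, 8, 2, 4,
        5, 4, 7, 8, 5, 8, 9
    };
    return lookup[9 * a + 3 * b + c];
}
// combination(0, 0, 1), combination(0, 1, 0) and combination(1, 0, 0) all return 1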
For such small numbers, it would be easiest to just check them individually, instead of trying to be fancy, e.g.:
bool hasZero = false;
bool hasOne  = false;
bool hasTwo  = false;
// given: char* number or char[] number...
for (int i = 0; i < 3; ++i)
{
    switch (number[i])
    {
    case '0': hasZero = true; break;
    case '1': hasOne  = true; break;
    case '2': hasTwo  = true; break;
    default: /* error! */ break;
    }
}
If I understand you correctly, you have some sequence of numbers that can either be 1, 2, or 3, where the permutation of them doesn't matter (just the different combinations).
That being the case:
std::vector<int> v{1, 2, 3};
std::sort(v.begin(), v.end());
That will keep all of the different combinations properly aligned, and you could easily write a loop to test for equality.
Alternatively, you could use a std::array<int, N> (where N is the number of possible values - in this case 3).
std::array<int, 3> a;
Where you would set a[0] equal to the number of 1s you have, a[1] equal to the number of '2's, etc.
// if your string is 111
a[0] = 3;
// if your string is 110 or 011
a[0] = 2;
// if your string is 100 or 010 or 001
a[0] = 1;
// if your string is 120
a[0] = 1;
a[1] = 1;
// if your string is 123
a[0] = 1;
a[1] = 1;
a[2] = 1;
If you are looking to store it in a single 32-bit integer:
unsigned long x = 1; // number of 1's in your string
unsigned long y = 1; // number of 2's in your string
unsigned long z = 1; // number of 3's in your string
unsigned long result = x | y << 8 | z << 16;
To retrieve the number of each, you would do
unsigned long x = result & 0x000000FF;
unsigned long y = (result >> 8) & 0x000000FF;
unsigned long z = (result >> 16) & 0x000000FF;
This is very similar to what happens in the RGB macros.
int n[3]={0,0,0};
++n[x];
++n[y];
++n[z];
Now, in the n array, you have a unique ordered combination of values for each unique unordered combination of x,y,z.
For example, both x=1,y=0,z=0 and x=0,y=0,z=1 will give you n={2,1,0}
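A self-contained sketch (my own) of this counting approach, using std::array so that two signatures can be compared directly:

#include <array>
#include <iostream>

// Count how many 0s, 1s and 2s appear among x, y and z.
std::array<int, 3> signature(int x, int y, int z) {
    std::array<int, 3> n{0, 0, 0};
    ++n[x]; ++n[y]; ++n[z];
    return n;
}

int main() {
    // Both orderings of the combination "001" produce the same signature {2, 1, 0}.
    std::cout << (signature(1, 0, 0) == signature(0, 0, 1)) << '\n';   // prints: 1
}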

Find rank of a number on the basis of the number of 1's

Let f(k) = y where k is the y-th number in the increasing sequence of non-negative integers with the same number of ones in its binary representation as k, e.g. f(0) = 1, f(1) = 1, f(2) = 2, f(3) = 1, f(4) = 3, f(5) = 2, f(6) = 3 and so on. Given k >= 0, compute f(k).
Many of us have seen this question. One solution is to categorise numbers on the basis of the number of 1's and then find the rank. I did find some patterns going this way, but it would be a lengthy process. Can anyone suggest a better solution?
This is a counting problem. I think that if you approach it with this in mind, you can do much better than literally enumerating values and checking how many bits they have.
Consider the number 17. The binary representation is 10001. The number of 1s is 2. We can get smaller numbers with two 1s by (in this case) re-distributing the 1s to any of the four low-order bits. 4 choose 2 is 6, so 17 should be the 7th number with 2 ones in the binary representation. We can check this...
 0  00000  -
 1  00001  -
 2  00010  -
 3  00011  1
 4  00100  -
 5  00101  2
 6  00110  3
 7  00111  -
 8  01000  -
 9  01001  4
10  01010  5
11  01011  -
12  01100  6
13  01101  -
14  01110  -
15  01111  -
16  10000  -
17  10001  7
And we were right. Generalize that idea and you should get an efficient function for which you simply compute the rank of k.
EDIT: Hint for generalization
17 is special in that if you don't consider the high-order bit, the number has rank 1; that is, f(z) = 1 where z is everything except the higher order bit. For numbers where this is not the case, how can you account for the fact that you can get smaller numbers without moving the high-order bit?
f(k) counts the integers less than or equal to k that have the same number of ones in their binary representation as k.
Say k needs m bits, that is, k = 2^(m-1) + a, where a < 2^(m-1). The number of integers less than 2^(m-1) that have the same number of ones as k is choose(m-1, bitcount(k)), since you can freely distribute those ones among the m-1 least significant bits.
Integers between 2^(m-1) and k share the most significant bit with k (which is 1), so there are f(k - 2^(m-1)) of them. This implies f(k) = choose(m-1, bitcount(k)) + f(k - 2^(m-1)).
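Here is a sketch of that recurrence in code (my own illustration; it assumes the GCC/Clang builtins __builtin_clzll and __builtin_popcountll, and choose() is a simple helper, not something from the answer):

#include <cstdint>
#include <iostream>

// choose(n, r): binomial coefficient, computed incrementally.
long long choose(int n, int r) {
    if (r < 0 || r > n) return 0;
    long long result = 1;
    for (int i = 1; i <= r; ++i)
        result = result * (n - r + i) / i;   // stays integral at every step
    return result;
}

// f(k) = choose(m - 1, bitcount(k)) + f(k - 2^(m-1)), with f(0) = 1.
long long f(uint64_t k) {
    if (k == 0) return 1;                  // 0 is the 1st number with zero ones
    int m = 64 - __builtin_clzll(k);       // k needs m bits (GCC/Clang builtin)
    int ones = __builtin_popcountll(k);
    return choose(m - 1, ones) + f(k - (1ULL << (m - 1)));
}

int main() {
    std::cout << f(17) << '\n';   // prints: 7, matching the worked example above
}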
See "Efficiently Enumerating the Subsets of a Set". Look at Table 3, the "Bankers sequence". This is a method to generate exactly the sequence you need (if you reverse the bit order). Just run K iterations for the word with K bits. There is code to generate it included in the paper.

How do I find permutations or combinations for byte?

A character (1 byte) can represent 255 characters, but how do I actually find them?
(answering the comment)
There are 256 different combinations of 8 0s and 1s.
This is true because 256 = 2^8.
Each digit that you add doubles the number of combinations.
In a fixed width binary number, there are two choices for the first bit, two choices for the second bit, two choices for the third, and so on. The total number of combinations for an 8-bit byte is:
2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 = 2^8 = 256
do you mean
for (char c = " "; c <= "~"; c++) std::cout << c << std::endl;
?
This should show you the printable characters in ASCII proper. To see all characters in your font, try c = 0 and c < 255 (be careful: comparing a char against 255 can give an infinite loop), but this most probably won't work with your terminal.
8 bits can represent permutations of ones and zeros from binary 00000000 to 11111111. Just like 3 decimal digits can represent permutations of decimal numbers (0-9) from decimal 000 to 999.
You just start counting: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and then after you reach the digit maximum, you carry over a 1 and start from 0: ..., 8, 9, 10. Right? And then continue this until you fill up all your digits with nines: ..., 997, 998, 999.
It's the same thing in binary: 0, 1 then carry over 1 and start from 0: 0, 1, 10. Continue: 10, 11, 100, 101, 110, 111, 1000, 1001 etc.
Simply counting from 0 to the maximum value that can be represented by your digits gives you all the permutations.
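As a tiny illustration of my own, counting from 0 to 255 and printing each value in binary visits every 8-bit pattern exactly once:

#include <bitset>
#include <iostream>

int main() {
    // Counting from 0 to 255 enumerates all 8-bit patterns, 00000000 to 11111111.
    for (int i = 0; i < 256; ++i)
        std::cout << std::bitset<8>(i) << '\n';
}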