A solution to this question is given on the GeeksforGeeks website.
I wish to know whether there exists a better and simpler solution. The linked one is a bit complicated to understand. Just an algorithm will be fine.
I am pretty sure this algorithm is as efficient as, and easier to understand than, your linked algorithm.
The strategy here is to understand that the only way to make a number bigger without increasing its number of 1's is to carry a 1, but if you carry multiple 1's then you must add them back in.
Given a number 1001 1100
Right shift it until the value is odd, 0010 0111. Remember the number of shifts: shifts = 2;
Right shift it until the value is even, 0000 0100. Remember the number of shifts performed and bits consumed. shifts += 3; bits = 3;
So far, we have taken 5 shifts and 3 bits from the algorithm to carry the lowest digit possible. Now we pay it back.
Make the rightmost bit 1. 0000 0101. We now owe it 2 bits. bits -= 1
Shift left 3 times to add the 0's. 0010 1000. We do it three times because shifts - bits == 3 shifts -= 3
Now we owe the number two bits and two shifts. So shift it left twice, setting the rightmost bit to 1 after each shift. 1010 0011. We've paid back all the bits and all the shifts. bits -= 2; shifts -= 2; bits == 0; shifts == 0
Here are a few other examples... each step is shown as current_val, shifts_owed, bits_owed
0000 0110
0000 0110, 0, 0 # Start
0000 0011, 1, 0 # Shift right till odd
0000 0000, 3, 2 # Shift right till even
0000 0001, 3, 1 # Set LSB
0000 0100, 1, 1 # Shift left 0's
0000 1001, 0, 0 # Shift left 1's
0011 0011
0011 0011, 0, 0 # Start
0011 0011, 0, 0 # Shift right till odd
0000 1100, 2, 2 # Shift right till even
0000 1101, 2, 1 # Set LSB
0001 1010, 1, 1 # Shift left 0's
0011 0101, 0, 0 # Shift left 1's
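The walk-through above can be sketched directly in Python (my own transcription of the described steps, not the answerer's code; it assumes a positive input):

```python
def next_same_popcount(x):
    """Next larger integer with the same number of 1 bits (x > 0)."""
    shifts = 0
    while x & 1 == 0:        # shift right until the value is odd
        x >>= 1
        shifts += 1
    bits = 0
    while x & 1:             # shift right until the value is even
        x >>= 1
        shifts += 1
        bits += 1
    x |= 1                   # make the rightmost bit 1
    bits -= 1
    x <<= shifts - bits      # pay back the owed zero shifts
    for _ in range(bits):    # pay back the remaining bits and shifts
        x = (x << 1) | 1
    return x
```

For example, next_same_popcount(0b10011100) gives 0b10100011, matching the walk-through, and the two traces above give 0b1001 and 0b00110101.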
There is a simpler, though definitely less efficient one. It follows:
Count the number of bits in your number (right shift your number until it reaches zero, and count the number of times the rightmost bit is 1).
Increment the number until you get the same result.
Of course it is extremely inefficient. Consider a number that's a power of 2 (having 1 bit set). The next number with the same bit count is the next power of 2, so you'll have to double the number to get your answer while incrementing it by 1 in each iteration. It will eventually produce the right answer, but far too slowly to be practical.
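As a sketch (my own wording of the approach above, using Python's bin() for the bit count):

```python
def next_same_popcount_slow(x):
    """Increment until the population count matches again (slow!)."""
    target = bin(x).count('1')   # count the 1 bits of the input
    n = x + 1
    while bin(n).count('1') != target:
        n += 1
    return n
```

For a power of 2 such as 8 this loops all the way up to 16, which is exactly the pathological case described above.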
If you want a simpler efficient algorithm, I don't think there is one. In fact, it seems pretty simple and straightforward to me.
Edit: By "simpler", I mean it's more straightforward to implement, and possibly takes a few fewer lines of code.
Based on some code I happened to have kicking around which is quite similar to the geeksforgeeks solution (see this answer: https://stackoverflow.com/a/14717440/1566221), and on a highly optimized version of @QuestionC's answer which avoids some of the shifting, I concluded that division is slow enough on some CPUs (that is, on my Intel i5 laptop) that looping actually wins out.
However, it is possible to replace the division in the g-for-g solution with a shift loop, and that turned out to be the fastest algorithm, again just on my machine. I'm pasting the code here for anyone who wants to test it.
For any implementation, there are two annoying corner cases: one is where the given integer is 0; the other is where the integer is the largest possible value. The following functions all have the same behaviour: if given the largest integer with k bits, they return the smallest integer with k bits, thereby restarting the loop. (That works for 0, too: it means that given 0, the functions return 0.)
Bit-hack solution with division:
template<typename UnsignedInteger>
UnsignedInteger next_combination_1(UnsignedInteger comb) {
  UnsignedInteger last_one = comb & -comb;
  UnsignedInteger last_zero = (comb + last_one) & ~comb;
  if (last_zero)
    return comb + last_one + ((last_zero / last_one) >> 1) - 1;
  else if (last_one)
    return UnsignedInteger(-1) / last_one;
  else
    return 0;
}
Bit-hack solution with division replaced by a shift loop:
template<typename UnsignedInteger>
UnsignedInteger next_combination_2(UnsignedInteger comb) {
  UnsignedInteger last_one = comb & -comb;
  UnsignedInteger last_zero = (comb + last_one) & ~comb;
  UnsignedInteger ones = (last_zero - 1) & ~(last_one - 1);
  if (ones) while (!(ones & 1)) ones >>= 1;
  comb += last_one;
  if (comb) comb += ones >> 1; else comb = ones;
  return comb;
}
Optimized shifting solution:
template<typename UnsignedInteger>
UnsignedInteger next_combination_3(UnsignedInteger comb) {
  if (comb) {
    // Shift out the trailing zeros, keeping a count.
    int zeros = 0;
    for (; !(comb & 1); comb >>= 1, ++zeros);
    // Adding one at this point turns all the trailing ones into
    // trailing zeros, and also changes the 0 before them into a 1.
    // In effect, this is steps 3, 4 and 5 of QuestionC's solution,
    // without actually shifting the 1s.
    UnsignedInteger res = comb + 1U;
    // We need to put some ones back on the end of the value.
    // The ones to put back are precisely the ones which were at
    // the end of the value before we added 1, except we want to
    // put back one less (because the 1 we added counts). We get
    // the old trailing ones with a bit-hack.
    UnsignedInteger ones = comb & ~res;
    // Now, we finish shifting the result back to the left.
    res <<= zeros;
    // And we add the trailing ones. If res is 0 at this point,
    // we started with the largest value, and ones is the smallest
    // value.
    if (res) res += ones >> 1;
    else res = ones;
    comb = res;
  }
  return comb;
}
(Some would say that the above is yet another bit-hack, and I won't argue.)
Highly non-representative benchmark
I tested this by running through all 32-bit numbers. (That is, for each value of i from 0 to 32, I create the smallest pattern with i ones and then cycle through all the possibilities):
#include <cstdint>
#include <iostream>

int main(int argc, char** argv) {
  uint64_t count = 0;
  for (int i = 0; i <= 32; ++i) {
    unsigned comb = (1ULL << i) - 1;
    unsigned start = comb;
    do {
      // next_combination_x stands for one of the three functions above.
      comb = next_combination_x(comb);
      ++count;
    } while (comb != start);
  }
  std::cout << "Found " << count << " combinations; expected "
            << (1ULL << 32) << '\n';
  return 0;
}
The result:
1. Bit-hack with division: 43.6 seconds
2. Bit-hack with shifting: 15.5 seconds
3. Shifting algorithm: 19.0 seconds
Related
I want to generate a list of binary numbers with m digits, where n bits are set to 1 and all others are set to 0. For example, let's say m is 4. I want to generate a list of binary numbers with 4 bits. Of the 16 numbers, 6 of them have 2 bits set to 1, with the others being all 0s.
0000
0001
0010
0011 <--
0100
0101 <--
0110 <--
0111
1000
1001 <--
1010 <--
1011
1100 <--
1101
1110
1111
I want to generate a list for any m bits with n bits set to 1, at the very least for the case of where n = 2. But I'm not sure what process to follow. I could of course brute-force it and generate all numbers that are m bits then check each one individually, but for a larger number of bits that could take a while. I feel like there must be a neater mathematical trick to find out the answer.
I'd appreciate any pointers of where to start. I don't mind if answers are in pseudocode or any language, so long as they're clear.
The XY problem
I'm trying to solve a chess problem, where there are two pieces on the board. First I'm trying to generate all the valid combinations of two pieces on the board, which I'm planning to do by treating the chessboard as a 64-bit binary number (0000 0000 0000 0000 .. 0011) where an on-bit is a piece. To do this I need to find an elegant way to generate the list of binary numbers.
Edit: I've tried implementing the naive algorithm in Python just for demonstration. It takes an awfully long while to execute for m = 64, so it definitely isn't the best solution:
n = 2
m = 64
combos = []
for i in range(2 ** m):
    bin_s = format(i, f'0{m}b')
    if bin_s.count('1') == n:
        combos.append(bin_s)
for c in combos:
    print(c)
print(f"There are {len(combos)} combinations")
This is called the lexicographically next bit permutation, which is described on many bit-hacking sites.
https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
Starting with x = 0b000000111 e.g. for 3 bits, one iterates until x & (1 << m) (or there's an overflow if m == word_size).
uint64_t next(uint64_t v) {
    uint64_t t = v | (v - 1); // t gets v's least significant 0 bits set to 1
    // Next set to 1 the most significant bit to change,
    // set to 0 the least significant ones, and add the necessary 1 bits.
    // Note: __builtin_ctzll is needed here because v is 64 bits wide.
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctzll(v) + 1));
}
uint64_t v = 15; // 4 bits set
do {
    my_func(v);
    if (v == 0xf000000000000000ull) {
        break;
    }
    v = next(v);
} while (true);
Use https://docs.python.org/3/library/itertools.html#itertools.combinations to produce the set of indexes at which you have a 1. Turning that into a binary number is straightforward.
If you want this in another language, the documentation has native Python code to solve the problem.
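For instance, a minimal sketch of that suggestion (the function name is mine):

```python
from itertools import combinations

def numbers_with_n_bits_set(m, n):
    """All m-bit values with exactly n bits set to 1."""
    result = []
    for idxs in combinations(range(m), n):
        value = 0
        for i in idxs:          # turn each chosen index into a set bit
            value |= 1 << i
        result.append(value)
    return result
```

numbers_with_n_bits_set(4, 2) yields the six values marked with arrows above, and m = 64, n = 2 produces all 2016 two-piece placements instantly, with no brute-force scan of 2**64 values.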
Problem
Suppose I have a bit mask mask and an input n, such as
mask = 0x10f3 (0001 0000 1111 0011)
n = 0xda4d (1101 1010 0100 1101)
I want to 1) isolate the masked bits (remove bits from n not in mask)
masked_n = 0x10f3 & 0xda4d = 0x1041 (0001 0000 0100 0001)
and 2) "flatten" them (get rid of the zero bits in mask and apply those same shifts to masked_n)?
flattened_mask = 0x007f (0000 0000 0111 1111)
bits to discard (___1 ____ 0100 __01)
first shift ( __ _1__ __01 0001)
second shift ( __ _101 0001)
result = 0x0051 (0000 0000 0101 0001)
Tried solutions
a) For this case, one could craft an ad hoc series of bit shifts:
result = (n & 0b11) | ((n & 0b11110000) >> 2) | ((n & 0b1000000000000) >> 6)
b) More generically, one could also iterate over each bit of mask and calculate result one bit at a time.
int result = 0;
for (auto i = 0, pos = 0; i < 16; i++) {
    if (mask & (1 << i)) {
        if (n & (1 << i)) {
            result |= (1 << pos);
        }
        pos++;
    }
}
Question
Is there a more efficient way of doing this generically, or at the very least, ad hoc but with a fixed number of operations regardless of bit placement?
A more efficient generic approach would be to loop over the bits but only process the number of bits that are in the mask, removing the if (mask & (1<<i)) test from your loop and looping only 7 times instead of 16 for your example mask. In each iteration of the loop find the rightmost bit of the mask, test it with n, set the corresponding bit in the result and then remove it from the mask.
int mask = 0x10f3;
int n = 0xda4d;
int result = 0;
int m = mask, pos = 1;
while (m != 0)
{
    // find rightmost bit in m:
    int bit = m & -m;
    if (n & bit)
        result |= pos;
    pos <<= 1;
    m &= ~bit; // remove the rightmost bit from m
}
printf("%04x %04x %04x\n", mask, n, result);
Output:
10f3 da4d 0051
Or, perhaps less readably but without the bit temp variable:
if (n & -m & m)
    result |= pos;
pos <<= 1;
m &= m - 1;
How does it work? First, consider why m &= m - 1 clears the rightmost (least significant) set bit. A non-zero mask m is made up of some arbitrary high bits, then a 1 in the least significant set position, then zero or more 0s:
e.g:
xxxxxxxxxxxx1000
Subtracting 1 gives:
xxxxxxxxxxxx0111
So all the bits higher than the least significant set bit will be unchanged (so ANDing them together leaves them unchanged), the least significant set bit changes from a 1 to a 0, and the less significant bits were all 0s beforehand so ANDing them with all 1s leaves them unchanged. Net result: least significant set bit is cleared and the rest of the word stays the same.
To understand why m & -m gives the least significant set bit, combine the above with the knowledge that in 2s complement, -x = ~(x-1)
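Both identities are easy to check directly; Python's integers behave like infinite-width two's complement under the bitwise operators, so this sketch mirrors the C behaviour:

```python
m = 0x10f3

# m & -m isolates the least significant set bit...
assert m & -m == 0x1

# ...and m & (m - 1) clears it, i.e. subtracts it off.
assert m & (m - 1) == m - (m & -m)

# The two's complement identity used above: -x == ~(x - 1).
x = 0b1000
assert -x == ~(x - 1)
```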
I am confused by the question below.
Flipping a bit means changing the bit from 0 to 1 and vice versa. An operation OP(i) flips bits as follows.
Performing OP(i) flips every ith bit, counting from the start, for i > 0.
An n-bit number is given as input, and OP(j) and OP(k) are applied to it one after the other. The objective is to specify how many bits will remain the same after applying these two operations.
When I applied the logic floor(n/i) + floor(n/j) - 2, it didn't give me the expected solution.
example:
binary number:10110101101
i:3
j:4
expected output:6
But I got 3. Please tell me how to approach this problem.
I have also checked this solution: Flipping bits in binary number. But it mentions the same logic.
Let the register comprise N bits, bits 1 to N.
(1) OP(i) implies every ith bit is flipped. That is, bits at i, 2*i, 3*i, ...
are flipped. Total bits flipped = floor(N/i)
(2) OP(j) implies every jth bit is flipped. That is, bits at j, 2*j, 3*j, ...
are flipped. Total bits flipped = floor(N/j)
(3) Let L = LCM(i,j). The bits at L, 2*L, 3*L, ... are flipped twice, so
they end up unchanged; there are floor(N/L) of them.
So, after OP(i) and OP(j), the total bits changed will be
floor(N/i) + floor(N/j) - 2*floor(N/L)
Number of bits unchanged = N - floor(N/i) - floor(N/j) + 2*floor(N/L)
For N=11, i=4, j=3, L = LCM(3,4) = 12:
Number of unchanged bits = 11 - 11/4 - 11/3 + 2*(11/12) = 11 - 2 - 3 + 0 = 6
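The formula can be cross-checked against a direct simulation (a sketch of my own, using 1-based bit positions as in the answer):

```python
from math import gcd

def unchanged_bits(n_bits, i, j):
    """Positions (1..n_bits) unchanged after OP(i) then OP(j), by formula."""
    lcm = i * j // gcd(i, j)
    return n_bits - n_bits // i - n_bits // j + 2 * (n_bits // lcm)

def unchanged_bits_brute(n_bits, i, j):
    """A position stays the same iff it is flipped an even number of times."""
    flips = [0] * (n_bits + 1)
    for step in (i, j):
        for pos in range(step, n_bits + 1, step):
            flips[pos] += 1
    return sum(1 for pos in range(1, n_bits + 1) if flips[pos] % 2 == 0)
```

Both give 6 for N=11, i=3, j=4, as expected.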
public static int nochange_bits(String input1, int i1, int i2)
{
    try {
        int len = input1.length();
        if (i1 < 1 || i2 < 1) {
            return -1;
        } else if (i1 > len && i2 > len) {
            return len;
        } else if (i1 == i2) {
            return len;
        } else {
            int lcm = Math.abs(i1 * i2) / GCF(i1, i2);
            return len - len / i1 - len / i2 + 2 * (len / lcm);
        }
    } catch (Exception e) {
        e.printStackTrace();
        return -1;
    }
}

public static int GCF(int a, int b) {
    if (b == 0) return a;
    else return GCF(b, a % b);
}
a) First, we check all the conditions and the validity of the inputs.
b) Then we calculate the LCM to get the output.
Explanation: it's similar to the flipping-switches problem:
on the first turn we flip every i1-th bit;
on the second turn we flip every i2-th bit;
in the process, the bits at multiples of LCM(i1, i2) are flipped back to their original values, so we add those back to the total.
I had the following question in an interview and I believe I gave a working implementation but I was wondering if there was a better implementation that was quicker, or just a trick I missed.
Given 3 unsigned 30-bit integers, return the count of 30-bit integers that, when compared with at least one of the original numbers, have 1s in all the positions where that number has 1s. That is, we enumerate all the ways of filling in the 0s.
Let me give an example, but let's use 4 bits for clarity.
Given:
A = 1001
B = 0011
C = 0110
It should return 8, since there are 8 4-bit ints in the set. The set being:
0011
0110
0111
1001
1011
1101
1110
1111
Now how I worked it out was to take each number and enumerate the set of possibilites and then count all the distinct values. How I enumerated the set was to start with the number, add one to it and then OR it with itself until I reached the mask. With the number itself being in the set and the mask (being all set to 1) also in the set. So for example to enumerate the set of 1001:
1001 = the start
1011 = (1001 + 1) | 1001
1101 = (1011 + 1) | 1001
1111 = (1101 + 1) | 1001 (this is the last value as we have reached our mask)
So do that for each number and then count the uniques.
This is it in python code (but language doesn't really matter as long as you can do bitwise operations, hence why this question is tagged for c/c++ as well):
MASK = 0x3FFFFFFF

def count_anded_bitmasks(A, B, C):
    andSets = set(
        enumerate_all_subsets(A) +
        enumerate_all_subsets(B) +
        enumerate_all_subsets(C)
    )
    return len(andSets)

def enumerate_all_subsets(d):
    andSet = []
    n = d
    while n != MASK:
        andSet.append(n)
        n = (n + 1) | d
    andSet.append(n)
    return andSet
Now this works and gives the correct answer but I'm wondering if I have missed a trick. Since the question was to only ask the count and not enumerate all the values perhaps there is a much quicker way. Either by combining the numbers first, or getting a count without enumeration. I have a feeling there is. Since numbers that contain lots of zeros the enumeration rises exponentially and it can take quite a while.
Given A, B and C, I want the count of the set of numbers which have bits set to 1 wherever A, or B, or C has the corresponding bit set to 1.
Some people didn't understand the question (it didn't help that I didn't ask it correctly at first). Let's use the A, B and C values given above:
A:
1001
1011
1101
1111
B:
0011
0111
1011
1111
C:
0110
0111
1110
1111
Now combine those sets and count the distinct entries. That is the answer. Is there a way to do this without enumerating the values?
edit: Sorry for the mistake in the question. Fixed now.
EDIT: updated requirement: Given 3 unsigned 30-bit integers, return the count of 30-bit integers that, when compared with at least one of the original numbers, have 1s in all the positions where that number has 1s. That is, we enumerate all the ways of filling in the 0s.
OK that's a bit tougher. It's easy to calculate for one number, since in that case the number of possible integers depends only on the number of zero bits, like this:
// Count bits not set
const size_t NUMBITS = 30;
size_t c;
size_t v = num;
for (c = NUMBITS; v; v >>= 1)
    c -= v & 1;
return c;
Naively you might try extending this to three integers by doing it for each and summing the results, however that would be wrong because the possibilities need to be unique, e.g. given
A = 1001
B = 0011
C = 0110
You would count e.g. 1111 three times rather than once. You should subtract the number of combinations that are shared between any two numbers, but not subtract any combination twice.
This is simply a C++ translation of Winston Ewert's answer!
size_t zeroperms(size_t v)
{
    // Count the number of 0 bits
    size_t c;
    for (c = NUMBITS; v; v >>= 1)
        c -= v & 1;
    // Return the number of combinations possible with those bits
    return 1 << c;
}

size_t enumerate(size_t a, size_t b, size_t c)
{
    size_t total = zeroperms(a) + zeroperms(b) + zeroperms(c);
    total -= zeroperms(a | b);     // counted twice, remove one
    total -= zeroperms(b | c);     // counted twice, remove one
    total -= zeroperms(a | c);     // counted twice, remove one
    total += zeroperms(a | b | c); // counted three times, removed three times, add one
    return total;
}
N = 4

def supers(number):
    zeros = sum(1 for bit in xrange(N) if (number >> bit) & 1 == 0)
    return 2 ** zeros

def solve(a, b, c):
    total = supers(a) + supers(b) + supers(c)
    total -= supers(a | b)     # counted twice, remove one
    total -= supers(b | c)     # counted twice, remove one
    total -= supers(a | c)     # counted twice, remove one
    total += supers(a | b | c) # counted three times, removed three times, add one
    return total

print solve(0b1001, 0b0011, 0b0110)
Explanation
Let S(n) be the set produced by the number n.
supers(n) returns |S(n)|, the size of the set for the number n. (supers is not a great name, but I had trouble coming up with a better one.)
The trick is to realize that S(a) ∩ S(b) = S(a | b). As a result, using supers I can figure out the size of all those sets.
To figure out the rest, draw a venn diagram of the sets.
Trick 1:
The total answer is the union of each individual 30 bit number. This translates to the bitwise union operator, OR.
This means we can do D = A | B | C
With your 4 bit example, we arrive at D = 1111
Now we only need to work with 1 number
Trick 2:
A little bit of maths tells us that for every 1 we double our possible set of numbers.
This means all you need to do is raise 2 to the power of the number of 1s. Count the 1s with a loop, shifting down each time
bits = 0
D = 0b1111  # our number from trick 1
for i in range(4):  # here 4 is the number of bits
    if D & 1:
        bits += 1
    D >>= 1  # drop off the smallest bit
print 2 ** bits
in this case it will print 16
In C#
Count the number of zeros: the more zeros there are, the more ways there are to fill them in while keeping the given 1 bits, so the number of candidates for one value is 2^(zero count).
Then accumulate the totals for the three numbers. Some numbers get counted more than once, so subtract those; then add back the common numbers that were subtracted once too often (follow the Venn diagram of three overlapping circles).
public static int count(int d)
{
    int bits = 0;
    for (int i = 0; i < 30; i++)
    {
        if (((d >> i) & 1) == 0) bits++;
    }
    return (int)Math.Pow(2, bits);
}

public static int BitWiseConfirm(int A, int B, int C)
{
    int total = 0;
    total += count(A);
    total += count(B);
    total += count(C);
    // Remove duplicates
    total -= count(A | B);
    total -= count(B | C);
    total -= count(C | A);
    total += count(A | B | C);
    return total;
}
I've seen the operators >> and << in various code that I've looked at (none of which I actually understood), but I'm just wondering what they actually do and what some practical uses of them are.
If the shifts are like x * 2 and x / 2, what is the real difference from actually using the * and / operators? Is there a performance difference?
You have a collection of bits, and you move some of them beyond their bounds:
1111 1110 << 2
1111 1000
It is filled from the right with fresh zeros. :)
0001 1111 >> 3
0000 0011
Filled from the left. A special case is the leading bit: it often indicates a negative value, depending on the language and datatype. So it is often desirable that a right shift keeps the first bit as it is.
1100 1100 >> 1
1110 0110
And it is conserved over multiple shifts:
1100 1100 >> 2
1111 0011
If you don't want the first bit to be preserved, you use (in Java, Scala and JavaScript, and maybe more) the unsigned right shift operator:
1100 1100 >>> 1
0110 0110
There isn't any equivalent in the other direction, because it doesn't make any sense - maybe in your very special context, but not in general.
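The distinction is easy to demonstrate. Python's >> is always an arithmetic (sign-preserving) shift, and a logical shift like >>> can be emulated by masking to a fixed width first (the helper name here is my own):

```python
# Python's >> preserves the sign bit, like the examples above.
assert -8 >> 1 == -4
assert -1 >> 5 == -1

def logical_shift_right_8(x, k):
    """Logical (unsigned) right shift within 8 bits, like Java's >>>."""
    return (x & 0xFF) >> k

# Matches the 1100 1100 >>> 1 == 0110 0110 example.
assert logical_shift_right_8(0b11001100, 1) == 0b01100110
```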
Mathematically, a left-shift is a *=2, 2 left-shifts is a *=4 and so on. A right-shift is a /= 2 and so on.
Left bit shifting to multiply by any power of two and right bit shifting to divide by any power of two.
For example, x = x * 2; can also be written as x<<1 or x = x*8 can be written as x<<3 (since 2 to the power of 3 is 8). Similarly x = x / 2; is x>>1 and so on.
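A quick check of these equivalences (for non-negative values; note that a right shift is floor division):

```python
x = 13
assert x << 1 == x * 2    # one left shift doubles
assert x << 3 == x * 8    # 2**3 == 8
assert x >> 1 == x // 2   # right shift is floor division by 2
assert x >> 3 == x // 8
```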
Left Shift
x = x * 2^value (normal operation)
x << value (bit-wise operation)
x = x * 16 (which is the same as 2^4)
The left shift equivalent would be x = x << 4
Right Shift
x = x / 2^value (normal arithmetic operation)
x >> value (bit-wise operation)
x = x / 8 (which is the same as 2^3)
The right shift equivalent would be x = x >> 3
Left shift: It is equal to the product of the value to be shifted and 2 raised to the power of the number of bits shifted.
Example:
1 << 3
0000 0001 ---> 1
Shift by 1 bit
0000 0010 ----> 2 which is equal to 1*2^1
Shift By 2 bits
0000 0100 ----> 4 which is equal to 1*2^2
Shift by 3 bits
0000 1000 ----> 8 which is equal to 1*2^3
Right shift: It is equal to the quotient of the value to be shifted divided by 2 raised to the power of the number of bits shifted.
Example:
8 >> 3
0000 1000 ---> 8 which is equal to 8/2^0
Shift by 1 bit
0000 0100 ----> 4 which is equal to 8/2^1
Shift By 2 bits
0000 0010 ----> 2 which is equal to 8/2^2
Shift by 3 bits
0000 0001 ----> 1 which is equal to 8/2^3
Left bit shifting to multiply by any power of two.
Right bit shifting to divide by any power of two.
x = x << 5; // Left shift
y = y >> 5; // Right shift
In C/C++ it can be written as,
#include <math.h>
x = x * pow(2, 5);
y = y / pow(2, 5);
The bit shift operators can be more efficient than the / or * operators.
In simple processor architectures, a divide (/) or multiply (*) takes multiple cycles to compute a result, while a bit shift typically completes in a single cycle. (Modern compilers usually perform this strength reduction automatically when the operand is a power of two.)
Some examples:
Bit operations for example converting to and from Base64 (which is 6 bits instead of 8)
doing power of 2 operations (1 << 4 equal to 2^4 i.e. 16)
Writing more readable code when working with bits. For example, defining constants using
1 << 4 or 1 << 5 is more readable.
Yes, performance-wise you might find a difference, as a bitwise left or right shift is an O(1) operation, whereas a naive multiplication loop like the one below is O(n).
For example, calculating the power of 2 ^ n:
int value = 1;
int exponent = 0;
while (exponent < n)
{
    value = value * 2; // equivalent to a machine-level left shift by one bit
    exponent++;
}
Similar code with a bitwise left shift operation would be like:
value = 1 << n;
Moreover, a bitwise shift maps almost directly onto the machine-level instructions executed by the microcontroller or processor, whereas higher-level arithmetic may not.
Here is an example:
#include <stdio.h>

int main(void)
{
    int rm;
    printf("Enter a number (e.g. 1, 2, 5): ");
    scanf("%d", &rm);
    // e.g. rm = 5 (0101): 5 << 4 appends four zero bits -> 0101 0000 (80),
    // and 5 >> 2 drops the two lowest bits -> 01 (1).
    printf("Left shift:  %d << 4 = %d\n", rm, rm << 4);
    printf("Right shift: %d >> 2 = %d\n", rm, rm >> 2);
    return 0;
}