I had the following question in an interview and I believe I gave a working implementation but I was wondering if there was a better implementation that was quicker, or just a trick I missed.
Given 3 unsigned 30-bit integers, return the number of 30-bit integers that, when compared with any one of the original numbers, have a 1 in every position where that original number has a 1. That is, we enumerate over all of the 0 bits.
Let me give an example, but let's use 4-bit numbers for clarity.
Given:
A = 1001
B = 0011
C = 0110
It should return 8, since there are eight 4-bit integers in the set. The set being:
0011
0110
0111
1001
1011
1101
1110
1111
Now, how I worked it out was to take each number, enumerate its set of possibilities, and then count all the distinct values. To enumerate the set I start with the number itself, then repeatedly add one and OR the result with the original number until I reach the mask (all bits set to 1). Both the number itself and the mask are members of the set. So, for example, to enumerate the set of 1001:
1001 = the start
1011 = (1001 + 1) | 1001
1101 = (1011 + 1) | 1001
1111 = (1101 + 1) | 1001 (this is the last value as we have reached our mask)
So do that for each number and then count the uniques.
This is it in Python code (but the language doesn't really matter as long as you can do bitwise operations, hence why this question is tagged for C/C++ as well):
MASK = 0x3FFFFFFF

def count_anded_bitmasks(A, B, C):
    andSets = set(
        enumerate_all_subsets(A) +
        enumerate_all_subsets(B) +
        enumerate_all_subsets(C)
    )
    return len(andSets)

def enumerate_all_subsets(d):
    andSet = []
    n = d
    while n != MASK:
        andSet.append(n)
        n = (n + 1) | d
    andSet.append(n)
    return andSet
Now this works and gives the correct answer, but I'm wondering if I have missed a trick. Since the question only asks for the count, not the values themselves, perhaps there is a much quicker way, either by combining the numbers first or by getting the count without enumeration. I have a feeling there is: for numbers that contain lots of zeros, the enumeration grows exponentially, and it can take quite a while.
In other words: given A, B and C, count the numbers whose set bits include all the set bits of A, or all the set bits of B, or all the set bits of C.
Some people don't understand the question (it didn't help that I didn't ask it correctly at first). Let's use the A, B and C values given above:
A:
1001
1011
1101
1111
B:
0011
0111
1011
1111
C:
0110
0111
1110
1111
Now combine those sets and count the distinct entries. That is the answer. Is there a way to do this without enumerating the values?
edit: Sorry for the mistake in the question. Fixed now.
EDIT: updated requirement: Given 3 unsigned 30-bit integers, return the number of 30-bit integers that, when compared with any one of the original numbers, have a 1 in every position where that original number has a 1. That is, we enumerate over all of the 0 bits.
OK, that's a bit tougher. It's easy to calculate for one number, since in that case the count depends only on the number of zero bits (each zero can freely be either 0 or 1). Count the zeros like this:
// Count bits not set
const size_t NUMBITS = 30;
size_t c;
size_t v = num;
for (c = NUMBITS; v; v >>= 1)
    c -= v & 1;
return c;
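For intuition, here is a quick sanity check in Python using the question's 4-bit example (a sketch, not part of the original answer): a value with z zero bits has exactly 2**z supersets, since each zero can independently stay 0 or become 1.

n, N = 0b1001, 4
zeros = N - bin(n).count("1")
supersets = [x for x in range(2 ** N) if x & n == n]
assert len(supersets) == 2 ** zeros  # 4 == 2**2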
Naively you might try extending this to three integers by doing it for each and summing the results; however, that would be wrong, because the possibilities need to be unique. E.g. given
A = 1001
B = 0011
C = 0110
You would count e.g. 1111 three times rather than once. You should subtract the number of combinations that are shared between any two numbers, but without subtracting any combination twice (this is the inclusion-exclusion principle).
This is simply a C++ translation of Winston Ewert's answer!
size_t zeroperms(size_t v)
{
    // Count number of 0 bits
    size_t c;
    for (c = NUMBITS; v; v >>= 1)
        c -= v & 1;
    // Return number of permutations possible with those bits
    return 1 << c;
}

size_t enumerate(size_t a, size_t b, size_t c)
{
    size_t total = zeroperms(a) + zeroperms(b) + zeroperms(c);
    total -= zeroperms(a | b);     // counted twice, remove one
    total -= zeroperms(b | c);     // counted twice, remove one
    total -= zeroperms(a | c);     // counted twice, remove one
    total += zeroperms(a | b | c); // counted three times, removed three times, add one
    return total;
}
N = 4

def supers(number):
    zeros = sum(1 for bit in range(N) if (number >> bit) & 1 == 0)
    return 2 ** zeros

def solve(a, b, c):
    total = supers(a) + supers(b) + supers(c)
    total -= supers(a | b)      # counted twice, remove one
    total -= supers(b | c)      # counted twice, remove one
    total -= supers(a | c)      # counted twice, remove one
    total += supers(a | b | c)  # counted three times, removed three times, add one
    return total

print(solve(0b1001, 0b0011, 0b0110))
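For the example inputs this prints 8, matching the eight values enumerated in the question.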
Explanation
Let S(n) be the set produced by the number n.
supers(n) returns |S(n)|, the size of the set for the number n. (supers is not a great name, but I had trouble coming up with a better one.)
The trick is to realize that S(a) ∩ S(b) = S(a | b): a number is a superset of both a's and b's bits exactly when it is a superset of the bits of a | b. As a result, using supers I can figure out the size of all of those sets.
To figure out the rest, draw a venn diagram of the sets.
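As a quick brute-force cross-check of the inclusion-exclusion count (a sketch reusing N and solve from the snippet above; supersets is a hypothetical helper, not part of the answer):

def supersets(n):
    # All N-bit values whose set bits include all of n's set bits
    return {x for x in range(2 ** N) if x & n == n}

union = supersets(0b1001) | supersets(0b0011) | supersets(0b0110)
assert len(union) == solve(0b1001, 0b0011, 0b0110)  # both give 8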
Trick 1:
The total answer is the union of the sets generated by each individual 30-bit number. This translates to the bitwise OR operator.
This means we can do D = A | B | C
With your 4-bit example, we arrive at D = 1111
Now we only need to work with 1 number
Trick 2:
A little bit of maths tells us that every 1 doubles our possible set of numbers.
This means all you need to do is raise 2 to the power of the number of 1s. Count the 1s with a loop, shifting down each time:
bits = 0
D = 0b1111  # our number from trick 1
for i in range(4):  # here 4 is the number of bits
    if D & 1:
        bits += 1
    D >>= 1  # drop off the lowest bit
print(2 ** bits)
In this case it will print 16.
In C#
Count the number of zeros: the more zeros there are, the more ways you can fill in those positions, so a number with z zero bits contributes 2^z candidates.
Then accumulate the totals for the three numbers. Some candidates get counted more than once, so subtract the ones shared by each pair, then add back the ones common to all three that got subtracted too often (follow the Venn diagram of three overlapping circles).
public static int count(int d)
{
    int bits = 0;
    for (int i = 0; i < 30; i++)
    {
        if (((d >> i) & 1) == 0) bits++;
    }
    return (int)Math.Pow(2, bits);
}

public static int BitWiseConfirm(int A, int B, int C)
{
    int total = 0;
    total += count(A);
    total += count(B);
    total += count(C);
    // Remove duplicates
    total -= count(A | B);
    total -= count(B | C);
    total -= count(C | A);
    total += count(A | B | C);
    return total;
}
Related
I want to generate a list of binary numbers with m digits, where n bits are set to 1 and all others are set to 0. For example, let's say m is 4. I want to generate a list of binary numbers with 4 bits. Of the 16 numbers, 6 of them have exactly 2 bits set to 1 and all other bits 0:
0000
0001
0010
0011 <--
0100
0101 <--
0110 <--
0111
1000
1001 <--
1010 <--
1011
1100 <--
1101
1110
1111
I want to generate such a list for any m bits with n bits set to 1, at the very least for the case where n = 2, but I'm not sure what process to follow. I could of course brute-force it and generate all m-bit numbers, checking each one individually, but for a larger number of bits that could take a while. I feel like there must be a neater mathematical trick to find the answer.
I'd appreciate any pointers of where to start. I don't mind if answers are in pseudocode or any language, so long as they're clear.
The XY problem
I'm trying to solve a chess problem, where there are two pieces on the board. First I'm trying to generate all the valid combinations of two pieces on the board, which I'm planning to do by treating the chessboard as a 64-bit binary number (0000 0000 0000 0000 .. 0011) where an on-bit is a piece. To do this I need to find an elegant way to generate the list of binary numbers.
Edit: I've tried implementing the naive algorithm in Python just for demonstration. It takes an awfully long time to execute for m = 64, so it definitely isn't the best solution:
n = 2
m = 64
combos = []
for i in range(2 ** m):
    bin_s = str(format(i, f'0{m}b'))
    if bin_s.count('1') == n:
        combos.append(bin_s)
for c in combos:
    print(c)
print(f"There are {len(combos)} combinations")
This is called the lexicographically next bit permutation, which is covered on many bit-hacking sites:
https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
Starting with x = 0b000000111 (e.g. for 3 set bits), one iterates until x & (1 << m) becomes non-zero (or until overflow if m == word_size).
uint64_t next(uint64_t v) {
    uint64_t t = v | (v - 1); // t gets v's least significant 0 bits set to 1
    // Next set to 1 the most significant bit to change,
    // set to 0 the least significant ones, and add the necessary 1 bits.
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}
uint64_t v = 15; // 4 bits
do {
    my_func(v);
    if (v == 0xf000000000000000ull) {
        break;
    }
    v = next(v);
} while (true);
Use https://docs.python.org/3/library/itertools.html#itertools.combinations to produce the set of indexes at which you have a 1. Turning that into a binary number is straightforward.
If you want this in another language, the documentation has native Python code to solve the problem.
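For example (a sketch; numbers_with_n_bits_set is a hypothetical name, not a library function):

from itertools import combinations

def numbers_with_n_bits_set(m, n):
    # Choose which of the m bit positions hold a 1, then OR them together.
    for positions in combinations(range(m), n):
        yield sum(1 << p for p in positions)

for v in numbers_with_n_bits_set(4, 2):
    print(format(v, '04b'))  # prints the six 4-bit values with two bits set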
Suppose I have two numbers, a minimum and a maximum, for example 0 and 9999999999; the maximum could be huge. Now I also have some other number between the minimum and the maximum, let's say 15. What I need to do is take all the multiples of 15 (15, 30, 45 and so on, until it reaches the maximum number) and, for each of these numbers, count how many 1 bits there are in its binary representation. For example, 15 has 4 (because 1111 has four 1 bits).
The problem is that I need a loop inside a loop to get the result: the first loop walks over all multiples of the specific number (15 in our example), and for each multiple another loop counts its 1 bits. My solution takes too much time. Here is how I do it:
unsigned long long int min = 0;
unsigned long long int max = 99999999;
unsigned long long int other_num = 15;
unsigned long long int count = 0;
unsigned long long int other_num_helper = other_num;

while (true) {
    if (other_num_helper > max) break;
    for (size_t i = 0; i < sizeof(other_num_helper) * 8; i++) {
        unsigned long long int buff = other_num_helper & (1ULL << i);
        if (buff != 0) count++; // a non-zero result means bit i is 1
    }
    other_num_helper += other_num;
}
cout << count << endl;
Look at the bit patterns for the numbers between 0 and 2^3
000
001
010
011
100
101
110
111
What do you see?
Every bit position holds a 1 exactly 4 times.
If you generalize, you find that the numbers between 0 and 2^n have n*2^(n-1) bits set in total.
I am sure you can extend this reasoning for arbitrary bounds.
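For instance, the total number of 1 bits among 0..N for an arbitrary bound N can be computed per bit position, since bit i cycles with period 2^(i+1): 2^i zeros followed by 2^i ones. This Python sketch counts all integers in the range rather than only the multiples of a given number, but it shows how the pattern yields a closed form:

def total_set_bits(n):
    # Total number of 1 bits in the binary representations of 0..n inclusive.
    total = 0
    i = 0
    while (1 << i) <= n:
        period = 1 << (i + 1)                         # bit i repeats every 2^(i+1) numbers
        total += ((n + 1) // period) * (1 << i)       # full cycles of this bit
        total += max(0, (n + 1) % period - (1 << i))  # partial cycle at the end
        i += 1
    return total

assert total_set_bits(7) == 3 * 2 ** 2  # 12, matching n*2^(n-1) for n = 3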
Here's how I do it for a 32-bit number:
std::uint16_t bitcount(std::uint32_t n)
{
    std::uint32_t reg; // must be wide enough to hold all 32 bits of n
    reg = n - ((n >> 1) & 033333333333)
            - ((n >> 2) & 011111111111);
    return ((reg + (reg >> 3)) & 030707070707) % 63;
}
And the supporting comments from the program:
Consider a 3 bit number as being 4a + 2b + c. If we shift it right 1 bit, we have 2a + b. Subtracting this from the original gives 2a + b + c. If we right-shift the original 3-bit number by two bits, we get a, and so with another subtraction we have a + b + c, which is the number of bits in the original number.
The first assignment statement in the routine computes 'reg'. Each digit in the octal representation is simply the number of 1’s in the corresponding three bit positions in 'n'.
The last return statement sums these octal digits to produce the final answer. The key idea is to add adjacent pairs of octal digits together and then compute the remainder modulo 63.
This is accomplished by right-shifting 'reg' by three bits, adding it to 'reg' itself and ANDing with a suitable mask. This yields a number in which groups of six adjacent bits (starting from the LSB) contain the number of 1's among those six positions in n. This number modulo 63 yields the final answer. For 64-bit numbers, we would have to add triples of octal digits and use modulo 1023.
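Here is a rough Python port of the same trick for experimenting (a sketch; assumes n fits in 32 bits):

def bitcount32(n):
    # Each octal digit of reg is the popcount of the corresponding
    # 3-bit group of n; pairs of digits are then summed and reduced mod 63.
    reg = n - ((n >> 1) & 0o33333333333) - ((n >> 2) & 0o11111111111)
    return ((reg + (reg >> 3)) & 0o30707070707) % 63

assert all(bitcount32(x) == bin(x).count("1") for x in (0, 1, 0xDEADBEEF, 0xFFFFFFFF))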
I am confused by the question below.
Flipping a bit means changing the bit from 0 to 1 and vice versa. An operation OP(i) flips bits as follows:
Performing OP(i) flips every ith bit, counting positions from 1 (i > 0).
An n-bit number is given as input, and OP(j) and OP(k) are applied to it one after the other. The objective is to specify how many bits will remain the same after applying these two operations.
When I applied the logic floor(n/i) + floor(n/j) - 2 it didn't give me the expected solution.
example:
binary number:10110101101
i:3
j:4
expected output:6
But I got 3. Please tell me how to approach this problem.
I have also checked this solution: Flipping bits in binary number. But they have also mentioned the same logic.
Let the register comprise N bits, bits 1 to N.
(1) OP(i) implies every ith bit is flipped. That is bits at i, 2*i, 3*i ...
are flipped. Total bits flipped = floor(N/i)
(2) OP(j) implies every jth bit is flipped. That is bits at j, 2*j, 3*j ...
are flipped. Total bits flipped = floor(N/j)
(3) Let L = LCM(i,j). Therefore, bits at L, 2*L, 3*L, ... will be
flipped twice, implies bits unchanged are floor(N/L)
So, after OP(i) and OP(j), the total bits changed will be
floor(N/i) + floor(N/j) - 2*floor(N/L)
Number of bits unchanged = N - floor(N/i) - floor(N/j) + 2*floor(N/L)
For N=11, i=4, j=3, L = LCM(3,4) = 12,
Number of unchanged bits = 11 - 11/4 - 11/3 + 11/12 = 11 - 2 - 3 + 0 = 6
public static int nochange_bits(String input1, int i1, int i2)
{
    try {
        int len = input1.length();
        if (i1 < 1 || i2 < 1) {
            return -1;
        } else if (i1 > len && i2 > len) {
            return len;
        } else if (i1 == i2) {
            return len;
        } else {
            return (int)(len - Math.floor(len / i1) - Math.floor(len / i2)
                    + 2 * Math.floor(len / (Math.abs(i1 * i2) / GCF(i1, i2))));
        }
    } catch (Exception e) {
        e.printStackTrace();
        return -1;
    }
}

public static int GCF(int a, int b) {
    if (b == 0) return a;
    else return GCF(b, a % b);
}
a) First, we check all the conditions and reject invalid inputs.
b) Then we calculate the LCM to get the output.
Explanation: it's similar to the flipping-switches problem:
first turn, we flip the i1 bits;
second turn, we flip the i2 bits;
in the process, the bits at positions that are multiples of LCM(i1, i2) are flipped back to their original values, so we add those back to the total.
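A quick way to cross-check the formula against a direct simulation (a Python sketch; the function names are my own):

from math import gcd

def unchanged_bits_formula(n, i, j):
    l = i * j // gcd(i, j)  # LCM(i, j)
    return n - n // i - n // j + 2 * (n // l)

def unchanged_bits_simulated(n, i, j):
    flips = [0] * (n + 1)   # flip parity of positions 1..n
    for step in (i, j):
        for pos in range(step, n + 1, step):
            flips[pos] ^= 1
    return sum(1 for pos in range(1, n + 1) if flips[pos] == 0)

assert unchanged_bits_formula(11, 3, 4) == unchanged_bits_simulated(11, 3, 4) == 6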
A solution is given to this question on geeksforgeeks website.
I wish to know: does there exist a better and simpler solution? The linked one is a bit complicated to understand. Just an algorithm will be fine.
I am pretty sure this algorithm is as efficient as, and easier to understand than, your linked algorithm.
The strategy here is to understand that the only way to make a number bigger without increasing its number of 1's is to carry a 1, but if you carry multiple 1's then you must add them back in.
Given a number 1001 1100
Right shift it until the value is odd, 0010 0111. Remember the number of shifts: shifts = 2;
Right shift it until the value is even, 0000 0100. Remember the number of shifts performed and bits consumed. shifts += 3; bits = 3;
So far, we have taken 5 shifts and 3 bits from the algorithm to carry the lowest digit possible. Now we pay it back.
Make the rightmost bit 1. 0000 0101. We now owe it 2 bits. bits -= 1
Shift left 3 times to add the 0's. 0010 1000. We do it three times because shifts - bits == 3. shifts -= 3
Now we owe the number two bits and two shifts. So shift it left twice, setting the rightmost bit to 1 each time. 1010 0011. We've paid back all the bits and all the shifts. bits -= 2; shifts -= 2; bits == 0; shifts == 0
Here are a few other examples... each step is shown as current_val, shifts_owed, bits_owed
0000 0110
0000 0110, 0, 0 # Start
0000 0011, 1, 0 # Shift right till odd
0000 0000, 3, 2 # Shift right till even
0000 0001, 3, 1 # Set LSB
0000 0100, 1, 1 # Shift left 0's
0000 1001, 0, 0 # Shift left 1's
0011 0011
0011 0011, 0, 0 # Start
0011 0011, 0, 0 # Shift right till odd
0000 1100, 2, 2 # Shift right till even
0000 1101, 2, 1 # Set LSB
0001 1010, 1, 1 # Shift left 0's
0011 0101, 0, 0 # Shift left 1's
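A direct Python transcription of those steps (a sketch; assumes v > 0):

def next_same_popcount(v):
    # Right shift until the value is odd, remembering the number of shifts.
    shifts = 0
    while v & 1 == 0:
        v >>= 1
        shifts += 1
    # Right shift until the value is even, counting shifts and bits consumed.
    bits = 0
    while v & 1:
        v >>= 1
        shifts += 1
        bits += 1
    # Make the rightmost bit 1; that pays back one of the owed bits.
    v |= 1
    bits -= 1
    # Shift left (shifts - bits) times to add the 0's...
    v <<= shifts - bits
    # ...then shift left the remaining times, setting the rightmost bit each time.
    for _ in range(bits):
        v = (v << 1) | 1
    return v

assert next_same_popcount(0b10011100) == 0b10100011
assert next_same_popcount(0b00000110) == 0b00001001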
There is a simpler, though definitely less efficient one. It follows:
Count the number of bits in your number (right shift your number until it reaches zero, and count the number of times the rightmost bit is 1).
Increment the number until you get the same result.
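In Python, that naive method is just (a sketch; assumes v > 0):

def next_same_popcount_naive(v):
    target = bin(v).count("1")
    n = v + 1
    while bin(n).count("1") != target:
        n += 1
    return n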
Of course it is extremely inefficient. Consider a number that's a power of 2 (having a single 1 bit): you'll have to double the number to reach your answer, incrementing it by 1 in each iteration, so in practice it won't work for large inputs.
If you want a simpler efficient algorithm, I don't think there is one. In fact, it seems pretty simple and straightforward to me.
Edit: By "simpler", I mean it's mpre straightforward to implement, and possibly has a little less code lines.
Based on some code I happened to have kicking around which is quite similar to the geeksforgeeks solution (see this answer: https://stackoverflow.com/a/14717440/1566221) and a highly optimized version of @QuestionC's answer which avoids some of the shifting, I concluded that division is slow enough on some CPUs (that is, on my Intel i5 laptop) that looping actually wins out.
However, it is possible to replace the division in the g-for-g solution with a shift loop, and that turned out to be the fastest algorithm, again just on my machine. I'm pasting the code here for anyone who wants to test it.
For any implementation, there are two annoying corner cases: one is where the given integer is 0; the other is where the integer is the largest possible value. The following functions all have the same behaviour: if given the largest integer with k bits, they return the smallest integer with k bits, thereby restarting the loop. (That works for 0, too: it means that given 0, the functions return 0.)
Bit-hack solution with division:
template<typename UnsignedInteger>
UnsignedInteger next_combination_1(UnsignedInteger comb) {
    UnsignedInteger last_one = comb & -comb;
    UnsignedInteger last_zero = (comb + last_one) & ~comb;
    if (last_zero)
        return comb + last_one + ((last_zero / last_one) >> 1) - 1;
    else if (last_one)
        return UnsignedInteger(-1) / last_one;
    else
        return 0;
}
Bit-hack solution with division replaced by a shift loop
Bit-hack solution with the division replaced by a shift loop:
template<typename UnsignedInteger>
UnsignedInteger next_combination_2(UnsignedInteger comb) {
    UnsignedInteger last_one = comb & -comb;
    UnsignedInteger last_zero = (comb + last_one) & ~comb;
    UnsignedInteger ones = (last_zero - 1) & ~(last_one - 1);
    if (ones) while (!(ones & 1)) ones >>= 1;
    comb += last_one;
    if (comb) comb += ones >> 1; else comb = ones;
    return comb;
}
Optimized shifting solution
template<typename UnsignedInteger>
UnsignedInteger next_combination_3(UnsignedInteger comb) {
    if (comb) {
        // Shift the trailing zeros, keeping a count.
        int zeros = 0;
        for (; !(comb & 1); comb >>= 1, ++zeros);
        // Adding one at this point turns all the trailing ones into
        // trailing zeros, and also changes the 0 before them into a 1.
        // In effect, this is steps 3, 4 and 5 of QuestionC's solution,
        // without actually shifting the 1s.
        UnsignedInteger res = comb + 1U;
        // We need to put some ones back on the end of the value.
        // The ones to put back are precisely the ones which were at
        // the end of the value before we added 1, except we want to
        // put back one less (because the 1 we added counts). We get
        // the old trailing ones with a bit-hack.
        UnsignedInteger ones = comb & ~res;
        // Now, we finish shifting the result back to the left.
        res <<= zeros;
        // And we add the trailing ones. If res is 0 at this point,
        // we started with the largest value, and ones is the smallest
        // value.
        if (res) res += ones >> 1;
        else res = ones;
        comb = res;
    }
    return comb;
}
(Some would say that the above is yet another bit-hack, and I won't argue.)
Highly non-representative benchmark
I tested this by running through all 32-bit numbers. (That is, I create the smallest pattern with i ones and then cycle through all the possibilities, for each value of i from 0 to 32.):
#include <cstdint>
#include <iostream>

int main(int argc, char** argv) {
    uint64_t count = 0;
    for (int i = 0; i <= 32; ++i) {
        unsigned comb = (1ULL << i) - 1;
        unsigned start = comb;
        do {
            comb = next_combination_x(comb);
            ++count;
        } while (comb != start);
    }
    std::cout << "Found " << count << " combinations; expected " << (1ULL << 32) << '\n';
    return 0;
}
The result:
1. Bit-hack with division: 43.6 seconds
2. Bit-hack with shifting: 15.5 seconds
3. Shifting algorithm: 19.0 seconds
I'm trying to find the number of overlapping 1 bits between 2 given numbers.
For example, given 5 and 6:
5 // 101
6 // 110
There is 1 overlapping 1 bit (the leading bit).
I have the following code:
#include <iostream>
using namespace std;

int main() {
    int a, b;
    int count = 0;
    cin >> a >> b;
    while (a & b != 0) {
        count++;
        a >>= 1;
        b >>= 1;
    }
    cout << count << endl;
    return 0;
}
But when I entered 335 and 123 it returned 7, which I think is incorrect.
Can someone see a problem with my code?
The problem is that you're counting the number of iterations for which any of the remaining bits match, as you lop off the least significant bit each iteration (which will at most be the number of bits in the smaller number). You're testing all the bits of a BITWISE-AND b on each iteration. You can rectify this by masking with 1, i.e. a & b & 1, so that as you shift the bits rightward each time, only the least significant bits are compared:
while (a && b) {
    count += a & b & 1;
    a >>= 1;
    b >>= 1;
}
Your existing algorithm counts an iteration whenever any of the remaining bits match, and since 123 and 335 keep overlapping bits after each shift, the condition stays true until the smaller number is completely consumed. 123 is the smaller one with 7 bits, so the loop runs 7 times. As an extreme case, 128 (10000000) and 255 (11111111) would return 8 with your method, even though the actual answer is 1.
You want to AND the two numbers together to start with and then count the number of 1s in the result.
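In Python, for instance, that's a one-liner:

print(bin(5 & 6).count("1"))  # prints 1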
You want to count the number of bits that are set. Instead, your code is sort of computing the binary logarithm.
Only increment the count if the lowest order bit is set.
for (int c = a & b; c != 0; c >>= 1) {
    if (c & 1)
        ++count;
}
Slightly shorter form:
int countCommonBits(int a, int b) {
    int n = 0;
    for (unsigned v = (unsigned)(a & b); v; v >>= 1) {
        n += 1 & v;
    }
    return n;
}
If you know both numbers are positive, you can omit the use of the unsigned type. Note when using "int" that sign extension on a right shift of a negative number would keep the high bits set, giving you an overcount (in fact, an infinite loop).
Much later...
Reviewing an old answer, came across this. The current "accepted" answer is 1) inefficient, and 2) an infinite loop if the numbers are negative. FWIW.